1
Bellemo V, Lim G, Rim TH, Tan GSW, Cheung CY, Sadda S, He MG, Tufail A, Lee ML, Hsu W, Ting DSW. Artificial Intelligence Screening for Diabetic Retinopathy: the Real-World Emerging Application. Curr Diab Rep 2019; 19:72. [PMID: 31367962] [DOI: 10.1007/s11892-019-1189-3]
Abstract
PURPOSE OF REVIEW This paper systematically reviews recent progress in diabetic retinopathy screening. It provides an integrated overview of the current state of knowledge on emerging techniques that integrate artificial intelligence into national screening programs around the world, evaluates existing methodological approaches and research insights, and identifies remaining gaps and future directions. RECENT FINDINGS Over the past decades, artificial intelligence has entered the scientific mainstream with breakthroughs that are sparking growing interest in the computer science and medical communities. Specifically, machine learning and deep learning (a subtype of machine learning) applications are spreading into areas previously thought to be the sole purview of humans, and a number of applications in the ophthalmology field have been explored. Multiple studies around the world have demonstrated that such systems can perform on par with clinical experts, with robust diagnostic performance in diabetic retinopathy diagnosis. However, only a few tools have been evaluated in prospective clinical studies. Given the rapid and impressive progress of artificial intelligence technologies, integrating deep learning systems into routine diabetic retinopathy screening could represent a cost-effective way to help reduce the incidence of preventable blindness around the world.
Affiliation(s)
- Valentina Bellemo
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Gilbert Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- School of Computing, National University of Singapore, Singapore, Singapore
- Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
- SriniVas Sadda
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
- Ming-Guang He
- Center of Eye Research Australia, Melbourne, Victoria, Australia
- Adnan Tufail
- Moorfields Eye Hospital & Institute of Ophthalmology, UCL, London, UK
- Mong Li Lee
- School of Computing, National University of Singapore, Singapore, Singapore
- Wynne Hsu
- School of Computing, National University of Singapore, Singapore, Singapore
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
2
Besenczi R, Tóth J, Hajdu A. A review on automatic analysis techniques for color fundus photographs. Comput Struct Biotechnol J 2016; 14:371-384. [PMID: 27800125] [PMCID: PMC5072151] [DOI: 10.1016/j.csbj.2016.10.001]
Abstract
In this paper, we review automatic image processing tools for recognizing diseases that cause specific distortions in the human retina. After a brief summary of the biology of the retina, we give an overview of the types of lesions that may appear as biomarkers of both eye and non-eye diseases. We present several state-of-the-art procedures to extract the anatomic components and lesions in color fundus photographs, as well as decision support methods to help clinical diagnosis. We list publicly available databases and appropriate measurement techniques to compare the performance of these approaches quantitatively. Furthermore, we discuss how the performance of image processing-based systems can be improved by fusing the outputs of individual detector algorithms. Retinal image analysis using mobile phones is also addressed as an expected future trend in this field.
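The review compares detectors with the standard confusion-matrix measures expanded in the key-word list below (SE, SP, PPV, ACC). A minimal sketch of those formulas, with purely hypothetical counts for illustration:

```python
# Confusion-matrix metrics as abbreviated in the key-word list below
# (SE = sensitivity, SP = specificity, PPV = precision, ACC = accuracy).
def se(tp, fn): return tp / (tp + fn)
def sp(tn, fp): return tn / (tn + fp)
def ppv(tp, fp): return tp / (tp + fp)
def acc(tp, tn, fp, fn): return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for illustration only.
TP, FP, TN, FN = 90, 20, 870, 20
print(f"SE={se(TP, FN):.2f}  SP={sp(TN, FP):.2f}  "
      f"PPV={ppv(TP, FP):.2f}  ACC={acc(TP, TN, FP, FN):.2f}")
```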
Key Words
- ACC, accuracy
- AMD, age-related macular degeneration
- AUC, area under the receiver operator characteristics curve
- Biomedical imaging
- Clinical decision support
- DR, diabetic retinopathy
- FN, false negative
- FOV, field-of-view
- FP, false positive
- FPI, false positive per image
- Fundus image analysis
- MA, microaneurysm
- NA, not available
- OC, optic cup
- OD, optic disc
- PPV, positive predictive value (precision)
- ROC, Retinopathy Online Challenge
- RS, Retinopathy Online Challenge score
- Retinal diseases
- SCC, Spearman's rank correlation coefficient
- SE, sensitivity
- SP, specificity
- TN, true negative
- TP, true positive
- kNN, k-nearest neighbor
Affiliation(s)
- Renátó Besenczi
- Faculty of Informatics, University of Debrecen, 4002 Debrecen, PO Box 400, Hungary
- János Tóth
- Faculty of Informatics, University of Debrecen, 4002 Debrecen, PO Box 400, Hungary
- András Hajdu
- Faculty of Informatics, University of Debrecen, 4002 Debrecen, PO Box 400, Hungary
3
van Grinsven MJJP, Theelen T, Witkamp L, van der Heijden J, van de Ven JPH, Hoyng CB, van Ginneken B, Sánchez CI. Automatic differentiation of color fundus images containing drusen or exudates using a contextual spatial pyramid approach. Biomed Opt Express 2016; 7:709-25. [PMID: 27231583] [PMCID: PMC4866450] [DOI: 10.1364/boe.7.000709]
Abstract
We developed an automatic system to identify and differentiate color fundus images containing no lesions, drusen, or exudates. Drusen and exudates are bright-appearing lesions associated with age-related macular degeneration and diabetic retinopathy, respectively. The system consists of three lesion detectors operating at the pixel level; their outputs are combined by spatial pooling and classified with a random forest classifier. System performance was compared with the ratings of two independent human observers, using human-expert annotations as the reference. Kappa agreements of 0.89, 0.97, and 0.92 and accuracies of 0.93, 0.98, and 0.95 were obtained for the system and the two observers, respectively.
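As a rough illustration of the pipeline sketched in the abstract, the snippet below pools pixel-level lesion probability maps over a spatial grid and classifies the pooled features with a random forest. The grid-based pooling, feature layout, and toy data are assumptions for illustration, not the authors' exact contextual spatial pyramid.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pool_probability_map(prob_map, grid=4):
    """Average a pixel-level lesion probability map over a grid x grid
    spatial layout, yielding a fixed-length feature vector."""
    h, w = prob_map.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = prob_map[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            feats.append(block.mean())
    return np.array(feats)

def image_features(lesion_maps, grid=4):
    # Concatenate pooled features from each lesion detector's output map.
    return np.concatenate([pool_probability_map(m, grid) for m in lesion_maps])

# Toy training data: three detector maps per image, random placeholders.
rng = np.random.default_rng(0)
X = np.stack([image_features([rng.random((64, 64)) for _ in range(3)]) for _ in range(40)])
y = rng.integers(0, 3, size=40)  # 0 = no lesion, 1 = drusen, 2 = exudates
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```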
Affiliation(s)
- Mark J. J. P. van Grinsven
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Thomas Theelen
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Carel B. Hoyng
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I. Sánchez
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
4
De Zanet SI, Ciller C, Rudolph T, Maeder P, Munier F, Balmer A, Cuadra MB, Kowal JH. Landmark detection for fusion of fundus and MRI toward a patient-specific multimodal eye model. IEEE Trans Biomed Eng 2014; 62:532-40. [PMID: 25265602] [DOI: 10.1109/tbme.2014.2359676]
Abstract
Ophthalmologists typically acquire different image modalities to diagnose eye pathologies, including fundus photography, optical coherence tomography, computed tomography, and magnetic resonance imaging (MRI). These images are often complementary and express the same pathologies in different ways, and some pathologies are visible only in a particular modality. It is therefore beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The goal of this paper is the fusion of fundus photography with segmented MRI volumes, which adds information to the MRI, such as vessels and the macula, that was not visible before. The paper's contributions include automatic detection of the optic disc, the fovea, and the optic axis, and an automatic segmentation of the vitreous humor of the eye.
5
Abràmoff M, Kay CN. Image Processing. Retina 2013. [DOI: 10.1016/b978-1-4557-0737-9.00006-0]
6
Optic disc detection in color fundus images using ant colony optimization. Med Biol Eng Comput 2012; 51:295-303. [DOI: 10.1007/s11517-012-0994-5]
7
Detecting optic disc on Asians by multiscale Gaussian filtering. Int J Biomed Imaging 2012; 2012:727154. [PMID: 22844267] [PMCID: PMC3392893] [DOI: 10.1155/2012/727154]
Abstract
The optic disc (OD) is an important anatomical feature in retinal images, and its detection is vital for developing automated screening programs. Currently, no algorithm is designed specifically to detect the OD in fundus images of Asian patients, whose optic discs are larger and whose vessels are thicker than those of Caucasians. In this paper, we propose such a method to complement current algorithms, using two steps: OD vessel candidate detection and OD vessel candidate matching. The first step is achieved with multiscale Gaussian filtering, scale production, and double thresholding to initially extract a directional map of vessels of various thicknesses. The map is then thinned before another threshold is applied to remove pixels with low intensities; the result forms the OD vessel candidates. In the second step, a Vessels' Directional Matched Filter (VDMF) of various dimensions is applied to the candidates, and the pixel with the smallest difference is designated the OD center. We tested the proposed method on a new database of 402 images from a diabetic retinopathy (DR) screening programme of Asian patients. The OD center was detected successfully with an accuracy of 99.25% (399/402).
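A hedged sketch of the first step described above, multiscale Gaussian filtering with scale production followed by double thresholding; the band-pass-style response and quantile thresholds are simplifying assumptions, not the paper's exact filters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def multiscale_vessel_response(green, sigmas=(1.0, 2.0, 4.0)):
    """Filter at several Gaussian scales and multiply the responses
    ("scale production") so structures visible across scales stand out."""
    img = green.astype(float)
    responses = [np.abs(img - gaussian_filter(img, sigma=s)) for s in sigmas]
    return np.prod(responses, axis=0)

def double_threshold(resp, low_frac=0.90, high_frac=0.98):
    """Hysteresis-style double thresholding: keep weak responses only
    when their connected component also contains a strong response."""
    low, high = np.quantile(resp, [low_frac, high_frac])
    strong, weak = resp >= high, resp >= low
    labels, _ = label(weak)
    keep_ids = np.unique(labels[strong & (labels > 0)])
    return np.isin(labels, keep_ids)

# Toy usage on a random array standing in for a fundus green channel.
green = np.random.default_rng(0).random((128, 128))
mask = double_threshold(multiscale_vessel_response(green))
print(mask.sum(), "candidate vessel pixels")
```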
8
Karnowski TP, Aykac D, Giancardo L, Li Y, Nichols T, Tobin KW, Chaum E. Automatic detection of retina disease: robustness to image quality and localization of anatomy structure. Annu Int Conf IEEE Eng Med Biol Soc 2011; 2011:5959-64. [PMID: 22255697] [DOI: 10.1109/iembs.2011.6091473]
Abstract
The automated detection of diabetic retinopathy and other eye diseases in images of the retina has great promise as a low-cost method for broad-based screening. Many systems in the literature that perform automated detection include a quality estimation step and physiological feature detection, including the vascular tree and the optic nerve/macula location. In this work, we study the robustness of an automated disease detection method with respect to the accuracy of the optic nerve location and the quality of the images, as judged by a quality estimation algorithm. The detection algorithm performs microaneurysm and exudate detection, followed by feature extraction on the detected population to describe the overall retina image. Labeled retina images, ground-truthed to disease states, are used to train a supervised learning algorithm to identify the disease state of the retina image and exam set. Under the restrictions of high-confidence optic nerve detections and good-quality imagery, the system achieves a sensitivity of 94.8%, a specificity of 78.7%, and an area under the curve of 95.3%. An analysis of the effect of constraining quality, and of the distinction between mild non-proliferative diabetic retinopathy, normal retina images, and more severe disease states, is included.
Affiliation(s)
- T P Karnowski
- Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
9
Huang Y, Zhang J, Huang Y. An automated computational framework for retinal vascular network labeling and branching order analysis. Microvasc Res 2012; 84:169-77. [PMID: 22626949] [DOI: 10.1016/j.mvr.2012.05.005]
Abstract
Changes in retinal vascular morphology are well known as predictive clinical signs of many diseases, such as hypertension and diabetes. Computer-aided image processing and analysis of retinal vessels in fundus images offer an effective and efficient alternative to tedious manual labeling and measurement in clinical diagnosis. An automated computational framework for retinal vascular network labeling and analysis is presented in this work. The framework includes 1) detecting and locating the optic disc; 2) tracking the vessel centerlines from detected seed points and linking breaks after tracing; 3) extracting all the retinal vascular trees and identifying all the significant points; and 4) classifying terminal points into starting points and ending points based on the optic disc location, and finally assigning a branch order to each extracted vascular tree in the image. All modules in the framework are fully automated. Morphological analysis is then applied to derive geometrical and topological features, based on branching order, for an individual vascular tree or for the whole vascular network in the image. Validation and experiments on the public DRIVE database demonstrate that the proposed framework is a novel approach to analyzing the vascular network pattern and may offer new insights for the diagnosis of retinopathy.
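The branching-order step can be illustrated with a small sketch: starting from the segment nearest the optic disc, orders are propagated outward and incremented at bifurcations. The adjacency representation and the order-increment convention are assumptions for illustration, not the paper's exact rules.

```python
from collections import deque

def assign_branch_orders(tree, root):
    """Breadth-first traversal of a vessel tree (adjacency dict of segment
    ids). The root segment, chosen as the one nearest the optic disc, gets
    order 1; each child at a bifurcation gets its parent's order plus one."""
    order = {root: 1}
    queue = deque([root])
    while queue:
        seg = queue.popleft()
        children = [c for c in tree[seg] if c not in order]
        for child in children:
            # Increase the order at bifurcations; a single continuation keeps
            # the parent's order (one common convention, an assumption here).
            order[child] = order[seg] + 1 if len(children) > 1 else order[seg]
            queue.append(child)
    return order

# Hypothetical tree: segment 0 starts at the optic disc and bifurcates twice.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(assign_branch_orders(tree, root=0))  # {0: 1, 1: 2, 2: 2, 3: 3, 4: 3}
```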
10
Lu S. Accurate and efficient optic disc detection and segmentation by a circular transformation. IEEE Trans Med Imaging 2011; 30:2126-33. [PMID: 21843983] [DOI: 10.1109/tmi.2011.2164261]
Abstract
Under the framework of computer-aided diagnosis, this paper presents an accurate and efficient optic disc (OD) detection and segmentation technique. A circular transformation is designed to capture both the circular shape of the OD and the image variation across the OD boundary simultaneously. For each retinal image pixel, it evaluates the image variation along multiple evenly-oriented radial line segments of a specific length. The pixels with the maximum variation along all radial line segments are determined and can be further exploited to locate both the OD center and the OD boundary accurately. Experiments show that OD detection accuracies of 99.75%, 97.5%, and 98.77% are obtained for the STARE, ARIA, and MESSIDOR datasets, respectively, and that the OD center error is around six pixels for the STARE and ARIA datasets, much smaller than the 14-29 pixels of state-of-the-art methods. In addition, OD segmentation accuracies of 93.4% and 91.7% are obtained for the STARE and ARIA datasets, respectively, which contain many severely degraded images of pathological retinas that state-of-the-art methods cannot segment properly. Furthermore, the algorithm runs in 5 s, which is substantially faster than many of the state-of-the-art methods.
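A simplified sketch of the radial-variation idea behind the circular transformation, evaluated for a single candidate pixel; the max-minus-min variation measure is an assumption standing in for the paper's exact formulation:

```python
import numpy as np

def radial_variation(img, cy, cx, n_dirs=32, length=40):
    """For a candidate pixel (cy, cx), measure image variation along n_dirs
    evenly-oriented radial line segments of a fixed length."""
    h, w = img.shape
    variations = []
    for theta in np.linspace(0, 2 * np.pi, n_dirs, endpoint=False):
        rr = cy + np.arange(length) * np.sin(theta)
        cc = cx + np.arange(length) * np.cos(theta)
        valid = (rr >= 0) & (rr < h) & (cc >= 0) & (cc < w)
        ray = img[rr[valid].astype(int), cc[valid].astype(int)]
        if ray.size:
            variations.append(ray.max() - ray.min())
    return np.array(variations)

# The OD center is then taken as the pixel whose radial variations are
# jointly maximal; here we only score one hypothetical candidate.
img = np.random.default_rng(1).random((256, 256))
print(radial_variation(img, 128, 128).mean())
```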
Affiliation(s)
- Shijian Lu
- Institute for Infocomm Research, A*STAR, 138632 Singapore
11
Xu X, Niemeijer M, Song Q, Sonka M, Garvin MK, Reinhardt JM, Abràmoff MD. Vessel boundary delineation on fundus images using graph-based approach. IEEE Trans Med Imaging 2011; 30:1184-91. [PMID: 21216707] [PMCID: PMC3137950] [DOI: 10.1109/tmi.2010.2103566]
Abstract
This paper proposes an algorithm to measure the width of retinal vessels in fundus photographs, using a graph-based approach to segment both vessel edges simultaneously. First, the simultaneous two-boundary segmentation problem is modeled as a two-slice, 3-D surface segmentation problem, which is further converted into the problem of computing a minimum closed set in a node-weighted graph. An initial segmentation is generated from a vessel probability image. We use the REVIEW database to evaluate diameter measurement performance. The algorithm is robust and estimates the vessel width with subpixel accuracy. The method is used to explore the relationship between the average vessel width and the distance from the optic disc in 600 subjects.
Affiliation(s)
- Xiayu Xu
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242 USA
- Meindert Niemeijer
- Departments of Ophthalmology and Visual Sciences and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 USA, and also with the Veteran's Administration Medical Center, Iowa City, IA 52242 USA
- Qi Song
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 USA
- Milan Sonka
- Departments of Ophthalmology and Visual Sciences and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 USA, and also with the Veteran's Administration Medical Center, Iowa City, IA 52242 USA
- Mona K. Garvin
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 USA
- Joseph M. Reinhardt
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242 USA
- Michael D. Abràmoff
- Departments of Ophthalmology and Visual Sciences, Biomedical Engineering, and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 USA, and also with the Veteran's Administration Medical Center, Iowa City, IA 52242 USA
12
Abràmoff MD, Reinhardt JM, Russell SR, Folk JC, Mahajan VB, Niemeijer M, Quellec G. Automated early detection of diabetic retinopathy. Ophthalmology 2010; 117:1147-54. [PMID: 20399502] [PMCID: PMC2881172] [DOI: 10.1016/j.ophtha.2010.03.046]
Abstract
PURPOSE To compare the performance of automated diabetic retinopathy (DR) detection using the algorithm that won the Retinopathy Online Challenge Competition in 2009 (Challenge2009) against that of the algorithm currently used in EyeCheck, a large computer-aided early DR detection project. DESIGN Evaluation of diagnostic test or technology. PARTICIPANTS Fundus photographic sets, consisting of two fundus images from each eye, were evaluated from 16,670 patient visits of 16,670 people with diabetes who had not previously been diagnosed with DR. METHODS The fundus photographic set from each visit was analyzed by a single retinal expert; 793 of the 16,670 sets were classified as containing more than minimal DR (the threshold for referral). The outputs of the two algorithmic detectors were applied separately to the dataset and compared by standard statistical measures. MAIN OUTCOME MEASURES The area under the receiver operating characteristic curve (AUC), a measure of the sensitivity and specificity of DR detection. RESULTS Agreement was high; examinations with more than minimal DR were detected with an AUC of 0.839 by the EyeCheck algorithm and 0.821 by the Challenge2009 algorithm, a statistically nonsignificant difference (z-score, 1.91). When DR detected by either algorithm was counted, the combined AUC was 0.86, equal to the theoretically expected maximum. At 90% sensitivity, the specificity of the EyeCheck algorithm was 47.7% and that of the Challenge2009 algorithm was 43.6%. CONCLUSIONS Diabetic retinopathy detection algorithms appear to be maturing, and further improvements in detection performance cannot be differentiated from best clinical practice, because competitive algorithm development has now reached the limit of human intrareader variability. Additional validation studies on larger, well-defined, but more diverse populations of patients with diabetes are urgently needed, anticipating cost-effective early detection of DR in millions of people with diabetes to triage those patients who need further care while they have early rather than advanced DR.
Affiliation(s)
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa 52242, USA
13
Niemeijer M, Abramoff MD, van Ginneken B. Information fusion for diabetic retinopathy CAD in digital color fundus photographs. IEEE Trans Med Imaging 2009; 28:775-785. [PMID: 19150786] [DOI: 10.1109/tmi.2008.2012029]
Abstract
The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, using CAD for automatic reading or prescreening may become feasible. This work addresses the question of how to fuse information from multiple CAD algorithms, operating on the multiple images that comprise an exam, to determine the likelihood that the exam is normal and would not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several fusion methods are proposed, and their effect on the performance of a complete, comprehensive automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15,000 exams (60,000 images). The best-performing fusion method obtained an area under the receiver operating characteristic curve of 0.881, indicating that automated prescreening could be applied in diabetic retinopathy screening programs.
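The core idea, combining per-image outputs of several detection algorithms into one exam-level likelihood, can be sketched as follows; the specific fusion rules shown (max over images, mean or max over detectors) are generic examples, not the schemes evaluated in the paper.

```python
import numpy as np

def fuse_exam(scores, method="mean"):
    """Fuse lesion-detector outputs for one exam into a single abnormality
    likelihood. `scores` maps a detector name to per-image probabilities
    (one per fundus photograph in the exam)."""
    per_detector = np.array([max(v) for v in scores.values()])  # max over images
    if method == "mean":
        return per_detector.mean()
    if method == "max":
        return per_detector.max()
    raise ValueError(method)

# Hypothetical exam with two images and outputs from two CAD algorithms.
exam = {"microaneurysms": [0.10, 0.65], "exudates": [0.05, 0.20]}
print(fuse_exam(exam, "mean"), fuse_exam(exam, "max"))
```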
Affiliation(s)
- Meindert Niemeijer
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
14
Niemeijer M, Abramoff MD, van Ginneken B. Automated localization of the optic disc and the fovea. Annu Int Conf IEEE Eng Med Biol Soc 2008; 2008:3538-41. [PMID: 19163472] [DOI: 10.1109/iembs.2008.4649969]
Abstract
Detecting the position of the normal anatomy in color fundus photographs is an important step in the automated analysis of retinal images. An automatic system for detecting the positions of the optic disc and the fovea is presented. The method integrates local vessel geometry and image intensity features to find the correct positions in the image, and a kNN regressor is used to accomplish the integration. Evaluation was performed on a set of 250 digital color fundus photographs; the detection performance was 99.2% for the optic disc and 96.4% for the fovea.
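A minimal sketch of the kNN-regression idea described above: a regressor trained to predict the distance from a location to the optic disc center, with the location of minimum predicted distance taken as the detection. The feature content and data here are placeholders, not the paper's descriptors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Each training sample is a feature vector computed at an image location
# (e.g. local vessel width, orientation, density, intensity); the target is
# that location's distance to the true optic disc center. All values below
# are random placeholders for illustration.
rng = np.random.default_rng(0)
X_train = rng.random((500, 4))
y_train = rng.random(500) * 100.0  # distance to the OD center, in pixels

knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)

# At test time, predict a distance for every candidate location and take
# the one with the smallest predicted distance as the OD position.
X_candidates = rng.random((200, 4))
best = int(np.argmin(knn.predict(X_candidates)))
print("chosen candidate index:", best)
```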
Affiliation(s)
- M Niemeijer
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, USA
15
Abràmoff MD, Niemeijer M, Suttorp-Schulten MSA, Viergever MA, Russell SR, van Ginneken B. Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes. Diabetes Care 2008; 31:193-8. [PMID: 18024852] [PMCID: PMC2494619] [DOI: 10.2337/dc07-1312]
Abstract
OBJECTIVE To evaluate the performance of a system for automated detection of diabetic retinopathy in digital retinal photographs, built from published algorithms, in a large, representative screening population. RESEARCH DESIGN AND METHODS We conducted a retrospective analysis of 10,000 consecutive patient visits, specifically exams (four retinal photographs, two left and two right) from 5,692 unique patients from the EyeCheck diabetic retinopathy screening project, imaged with three types of cameras at 10 centers. Inclusion criteria were no previous diagnosis of diabetic retinopathy, no previous visit to an ophthalmologist for a dilated eye exam, and both eyes photographed. One of three retinal specialists evaluated each exam as unacceptable quality, no referable retinopathy, or referable retinopathy. We then selected exams with sufficient image quality and determined the presence or absence of referable retinopathy. Outcome measures included the area under the receiver operating characteristic curve, the number needed to miss one case (NNM), and the type of false negative. RESULTS Total area under the receiver operating characteristic curve was 0.84, and NNM was 80 at a sensitivity of 0.84 and a specificity of 0.64. At this point, 7,689 of 10,000 exams had sufficient image quality, 4,648 of 7,689 (60%) were true negatives, 59 of 7,689 (0.8%) were false negatives, 319 of 7,689 (4%) were true positives, and 2,581 of 7,689 (33%) were false positives. Twenty-seven percent of false negatives contained large hemorrhages and/or neovascularizations. CONCLUSIONS Automated detection of diabetic retinopathy using published algorithms cannot yet be recommended for clinical practice. However, performance is such that evaluation on validated, publicly available datasets should be pursued. If algorithms can be improved, such a system may in the future lead to improved prevention of blindness and vision loss in patients with diabetes.
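The reported operating point can be re-derived from the confusion counts given in the abstract; a small verification sketch:

```python
# Confusion counts reported in the abstract (of the 7,689 gradable exams).
TP, FN, TN, FP = 319, 59, 4648, 2581
sensitivity = TP / (TP + FN)   # 319/378  ~= 0.84, matching the reported 0.84
specificity = TN / (TN + FP)   # 4648/7229 ~= 0.64, matching the reported 0.64
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```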
Affiliation(s)
- Michael D Abràmoff
- Retina Service, Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
16
Youssif AR, Ghalwash AZ, Ghoneim AR. Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter. IEEE Trans Med Imaging 2008; 27:11-18. [PMID: 18270057] [DOI: 10.1109/tmi.2007.900326]
Abstract
Optic disc (OD) detection is a key step in developing automated screening systems for diabetic retinopathy. We present in this paper a method to automatically detect the position of the OD in digital retinal fundus images. The method starts by normalizing luminosity and contrast throughout the image, using illumination equalization and adaptive histogram equalization, respectively. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels: a simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity. The retinal vessels are segmented using a simple, standard 2-D Gaussian matched filter, and a vessel direction map of the segmented vessels is obtained with the same segmentation algorithm. The segmented vessels are then thinned and filtered using local intensity to yield the OD-center candidates. The difference between the proposed matched filter, resized to four different sizes, and the vessels' directions in the area surrounding each OD-center candidate is measured, and the minimum difference provides an estimate of the OD-center coordinates. The proposed method was evaluated on a subset of the STARE project's dataset containing 81 fundus images of both normal and diseased retinas, initially used by OD detection methods in the literature. The OD center was detected correctly in 80 of the 81 images (98.77%). In addition, the OD center was detected correctly in all 40 images (100%) of the publicly available DRIVE dataset.
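A hedged sketch of a standard 2-D Gaussian matched filter bank for vessel segmentation, of the kind the abstract refers to; the kernel size, number of orientations, and toy input are assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def gaussian_matched_kernel(sigma=2.0, length=9):
    """Classic 2-D Gaussian matched filter for vessels: a Gaussian profile
    across the vessel, constant along it, made zero-mean so that a flat
    background gives no response."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    profile = -np.exp(-x**2 / (2 * sigma**2))
    kernel = np.tile(profile, (length, 1))
    return kernel - kernel.mean()

def matched_filter_response(green, n_angles=12, sigma=2.0):
    """Maximum response over a bank of rotated kernels; the per-pixel
    arg-max angle gives a vessel direction map of the kind used for
    OD matching."""
    img = green.astype(float)
    base = gaussian_matched_kernel(sigma)
    responses = np.stack([
        convolve(img, rotate(base, angle, reshape=True, order=1))
        for angle in np.arange(0, 180, 180 / n_angles)
    ])
    return responses.max(axis=0), responses.argmax(axis=0)

# Toy call on a random array standing in for the green channel.
resp, direction_idx = matched_filter_response(np.random.default_rng(2).random((96, 96)))
print(resp.shape, direction_idx.max())
```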
Affiliation(s)
- A R Youssif
- Department of Computer Science, Helwan University, Cairo, Egypt
17
Tobin KW, Chaum E, Govindasamy VP, Karnowski TP. Detection of anatomic structures in human retinal imagery. IEEE Trans Med Imaging 2007; 26:1729-39. [PMID: 18092741] [DOI: 10.1109/tmi.2007.902801]
Abstract
The widespread availability of electronic imaging devices throughout the medical community is leading to a growing body of research on image processing and analysis to diagnose retinal diseases such as diabetic retinopathy (DR). Productive computer-based screening of large, at-risk populations at low cost requires robust, automated image analysis. In this paper we present results for the automatic detection of the optic nerve and localization of the macula in digital red-free fundus photography. Our method relies on accurate segmentation of the retinal vasculature, followed by the determination of spatial features describing the density, average thickness, and average orientation of the vasculature in relation to the position of the optic nerve. The macula is then localized using knowledge of the optic nerve location to detect the horizontal raphe of the retina with a geometric model of the vasculature. We report 90.4% detection performance for the optic nerve and 92.5% localization performance for the macula on red-free fundus images from a population of 345 images corresponding to 269 patients, with 18 different pathologies associated with DR and other common retinal diseases such as age-related macular degeneration.
Affiliation(s)
- Kenneth W Tobin
- Image Science and Machine Vision Group, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6010, USA
18
D'Antoni R, De Giusti A. Model based retinal analysis for retinopathy detection. Annu Int Conf IEEE Eng Med Biol Soc 2007; 2007:6732-5. [PMID: 18003572] [DOI: 10.1109/iembs.2007.4353906]
Abstract
This paper presents a novel combination of methods to preprocess retinal images and extract features, in order to obtain structures such as the vessel tree and the optic disc and to diagnose retinopathy. A model-based approach was developed, making extensive use of LBG vector quantization and of elements of mathematical morphology as investigation tools. The proposed algorithm was tested against the hand-labeled ground truth of the DRIVE database, scoring an overall 93.2% correspondence for the vessel tree search, 81.3% for optic disc localization, and 85% specificity with 78% correspondence for disease detection.
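For readers unfamiliar with LBG vector quantization, the following is a compact, generic sketch (codeword splitting followed by Lloyd refinement); it is not the authors' specific configuration.

```python
import numpy as np

def lbg_codebook(data, n_codewords=8, eps=1e-2, n_iter=20):
    """Linde-Buzo-Gray vector quantization: start from the global mean,
    repeatedly split every codeword into a perturbed pair, then refine the
    codebook with Lloyd (k-means style) iterations."""
    codebook = data.mean(axis=0, keepdims=True)
    while codebook.shape[0] < n_codewords:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
            assign = dists.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = data[assign == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Toy usage: quantize random 3-D feature vectors into 8 codewords.
feats = np.random.default_rng(3).random((1000, 3))
print(lbg_codebook(feats, n_codewords=8).shape)  # (8, 3)
```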
19
Abràmoff MD, Alward WLM, Greenlee EC, Shuba L, Kim CY, Fingert JH, Kwon YH. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Invest Ophthalmol Vis Sci 2007; 48:1665-73. [PMID: 17389498] [PMCID: PMC2739577] [DOI: 10.1167/iovs.06-1081]
Abstract
PURPOSE To evaluate a novel automated segmentation algorithm for cup-to-disc segmentation from stereo color photographs of patients with glaucoma, for the measurement of glaucoma progression. METHODS Stereo color photographs of the optic disc were obtained using a fixed stereo-base fundus camera in 58 eyes of 58 patients with suspected or open-angle glaucoma. Manual planimetry was performed by three glaucoma faculty members, to delineate a reference standard rim and cup segmentation of all stereo pairs, and also by three glaucoma fellows. Pixel feature classification was evaluated on the stereo pairs and the corresponding reference standard, using feature computation based on simulation of photoreceptor color opponency and of visual cortex simple and complex cells. An optimal subset of 12 features was used to segment all pixels in all stereo pairs, and the percentage of pixels assigned the correct class and the linear cup-to-disc ratio (LCDR) estimates of the glaucoma fellows and the algorithm were compared with the reference standard. RESULTS The algorithm assigned cup, rim, and background correctly to 88% of all pixels. Correlations of the glaucoma fellows' LCDR estimates with the reference standard were 0.73 (95% CI, 0.58-0.83), 0.81 (95% CI, 0.70-0.89), and 0.86 (95% CI, 0.78-0.91), respectively, whereas the correlation of the algorithm with the reference standard was 0.93 (95% CI, 0.89-0.96; n = 58). CONCLUSIONS The pixel feature classification algorithm allows objective segmentation of the optic disc from conventional color stereo photographs automatically, without human input. The disc segmentation and LCDR performance of the algorithm was comparable to that of glaucoma fellows in training and is promising for objective evaluation of optic disc cupping.
Affiliation(s)
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52242, USA
20
Niemeijer M, van Ginneken B, Russell SR, Suttorp-Schulten MSA, Abràmoff MD. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis. Invest Ophthalmol Vis Sci 2007; 48:2260-7. [PMID: 17460289] [PMCID: PMC2739583] [DOI: 10.1167/iovs.06-0996]
Abstract
PURPOSE To describe and evaluate a machine learning-based, automated system to detect exudates and cotton-wool spots in digital color fundus photographs and differentiate them from drusen, for early diagnosis of diabetic retinopathy. METHODS Three hundred retinal images from one eye of 300 patients with diabetes were selected from a diabetic retinopathy telediagnosis database (nonmydriatic camera, two-field photography): 100 with previously diagnosed bright lesions and 200 without. A machine learning computer program was developed that can identify and differentiate among drusen, (hard) exudates, and cotton-wool spots. A human expert standard for the 300 images was obtained by consensus annotation by two retinal specialists. Sensitivities and specificities of the annotations on the 300 images by the automated system and a third retinal specialist were determined. RESULTS The system achieved an area under the receiver operating characteristic (ROC) curve of 0.95 and sensitivity/specificity pairs of 0.95/0.88 for the detection of bright lesions of any type, and 0.95/0.86, 0.70/0.93, and 0.77/0.88 for the detection of exudates, cotton-wool spots, and drusen, respectively. The third retinal specialist achieved pairs of 0.95/0.74 for bright lesions and 0.90/0.98, 0.87/0.98, and 0.92/0.79 per lesion type. CONCLUSIONS A machine learning-based, automated system capable of detecting exudates and cotton-wool spots and differentiating them from drusen in color images obtained from community-based diabetic patients has been developed and approaches the performance level of retinal experts. If the machine learning can be improved with additional training data sets, it may be useful for detecting clinically important bright lesions, enhancing early diagnosis, and reducing visual loss in patients with diabetes.
Affiliation(s)
- Meindert Niemeijer
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands