1
König M, Seeböck P, Gerendas BS, Mylonas G, Winklhofer R, Dimakopoulou I, Schmidt-Erfurth UM. Quality assessment of colour fundus and fluorescein angiography images using deep learning. Br J Ophthalmol 2023; 108:98-104. [PMID: 36418144] [PMCID: PMC10804038] [DOI: 10.1136/bjo-2022-321963]
Abstract
BACKGROUND/AIMS Image quality assessment (IQA) is crucial both for reading centres in clinical studies and for routine practice, as only adequate image quality allows clinicians to correctly identify diseases and treat patients accordingly. Here we aim to develop a neural network for automated real-time IQA in colour fundus (CF) and fluorescein angiography (FA) images. METHODS Two neural networks were trained and evaluated using 2272 CF and 2492 FA images, with binary labels in four (contrast, focus, illumination, shadow and reflection) and three (contrast, focus, noise) modality-specific categories plus an overall quality ranking. Performance was compared with a second human grader, and evaluated on an external public dataset and in a clinical trial use-case. RESULTS The networks achieved an F1-score/area under the receiver operating characteristic curve/area under the precision-recall curve of 0.907/0.963/0.966 for CF and 0.822/0.918/0.889 for FA in overall quality prediction, with similar results in most categories. A clear relation between model uncertainty and prediction error was observed. In the clinical trial use-case evaluation, the networks achieved an accuracy of 0.930 for CF and 0.895 for FA. CONCLUSION The presented method allows automated IQA in real time, demonstrating human-level performance for both CF and FA. Such models can help overcome human intergrader and intragrader variability by providing objective and reproducible IQA results. This is particularly relevant for real-time feedback in multicentre clinical studies, when images are uploaded to central reading centre portals. Moreover, automated IQA as a preprocessing step can support the integration of automated approaches into clinical practice.
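The overall-quality evaluation reported above rests on standard binary-classification metrics. A minimal stdlib-only sketch of the F1-score computation (the labels below are invented for illustration, not study data):

```python
# Sketch: F1-score for binary quality labels (1 = adequate, 0 = inadequate).
# Illustrative only; labels are made up, not from the study.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
print(round(f1_score(y_true, y_pred), 3))
```

The AUROC and precision-recall curves reported in the abstract additionally require the networks' continuous scores rather than hard labels.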
Affiliation(s)
- Michael König: Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Philipp Seeböck: Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Bianca S Gerendas: Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Georgios Mylonas: Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Rudolf Winklhofer: Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
- Ioanna Dimakopoulou: Department of Ophthalmology and Optometry, Medical University of Vienna, Wien, Austria
2
Wu J, Fang H, Li F, Fu H, Lin F, Li J, Huang Y, Yu Q, Song S, Xu X, Xu Y, Wang W, Wang L, Lu S, Li H, Huang S, Lu Z, Ou C, Wei X, Liu B, Kobbi R, Tang X, Lin L, Zhou Q, Hu Q, Bogunović H, Orlando JI, Zhang X, Xu Y. GAMMA challenge: Glaucoma grAding from Multi-Modality imAges. Med Image Anal 2023; 90:102938. [PMID: 37806020] [DOI: 10.1016/j.media.2023.102938]
Abstract
Glaucoma is a chronic neurodegenerative condition and one of the world's leading causes of irreversible but preventable blindness. Blindness generally results from a lack of timely detection and treatment, so early screening is essential for early treatment to preserve vision and maintain quality of life. Colour fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities show prominent biomarkers of suspected glaucoma, such as the vertical cup-to-disc ratio (vCDR) on fundus images and retinal nerve fiber layer (RNFL) thickness on OCT volumes. In clinical practice, both examinations are often recommended for a more accurate and reliable diagnosis. However, although numerous algorithms based on fundus images or OCT volumes have been proposed for automated glaucoma detection, few methods leverage both modalities. To fill this research gap, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus- and OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D colour fundus photographs and 3D OCT volumes, the first multi-modality dataset for machine-learning-based glaucoma grading. In addition, an evaluation framework was established to assess the performance of the submitted methods. During the challenge, 1272 results were submitted, and the ten best-performing teams were selected for the final stage. We analyse their results and summarize their methods in the paper. Since all teams submitted their source code, we conducted a detailed ablation study to verify the effectiveness of the particular modules proposed. Finally, we identify the proposed techniques and strategies that could be of practical value for the clinical diagnosis of glaucoma. As the first in-depth study of fundus and OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will serve as an essential guideline and benchmark for future research.
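The vCDR biomarker named above can be computed from binary optic-disc and optic-cup masks as the ratio of their vertical extents. A toy sketch (not the GAMMA code; the masks are invented):

```python
# Illustrative vertical cup-to-disc ratio (vCDR) from binary masks,
# given as lists of 0/1 rows. Toy data, not challenge material.

def vertical_extent(mask):
    """Height (in rows) of the foreground region of a binary mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return max(rows) - min(rows) + 1 if rows else 0

def vcdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio: cup height over disc height."""
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

disc = [[0, 0, 0, 0],
        [1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
cup = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(vcdr(disc, cup))  # 0.5
```

A larger vCDR (cup occupying more of the disc) is the classic fundus-image sign of glaucomatous damage.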
Affiliation(s)
- Junde Wu: South China University of Technology, Guangzhou, China; Pazhou Lab, Guangzhou, China
- Huihui Fang: South China University of Technology, Guangzhou, China; Pazhou Lab, Guangzhou, China
- Fei Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Huazhu Fu: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore
- Fengbin Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Jiongcheng Li: School of Informatics, Xiamen University, Xiamen, China
- Yue Huang: School of Informatics, Xiamen University, Xiamen, China
- Qinji Yu: Shanghai Jiao Tong University, Shanghai, China
- Sifan Song: Xi'an Jiaotong-Liverpool University, Suzhou, China
- Xinxing Xu: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore
- Yanyu Xu: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore
- Wensai Wang: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Lingxiao Wang: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Shuai Lu: School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Huiqi Li: School of Medical Technology, Beijing Institute of Technology, Beijing, China; School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Shihua Huang: Department of Computing, Hong Kong Polytechnic University, Hong Kong, China
- Zhichao Lu: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Chubin Ou: Weizhi Medical Technology Company, Suzhou, China
- Xifei Wei: Weizhi Medical Technology Company, Suzhou, China
- Bingyuan Liu: École de technologie supérieure, Montreal, Canada
- Xiaoying Tang: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Li Lin: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Qiang Zhou: Suixin (Shanghai) Technology Co., Ltd., Shanghai, China
- Qiang Hu: Suixin (Shanghai) Technology Co., Ltd., Shanghai, China
- Hrvoje Bogunović: Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology, Medical University of Vienna, Austria
- Xiulan Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Yanwu Xu: South China University of Technology, Guangzhou, China; Pazhou Lab, Guangzhou, China
3
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501] [PMCID: PMC10169139] [DOI: 10.1007/s00417-023-06052-x]
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela: UCL Institute of Ophthalmology, London, UK; Moorfields Eye Hospital, London, UK
- Nikolas Pontikos: UCL Institute of Ophthalmology, London, UK; Moorfields Eye Hospital, London, UK
- Michel Michaelides: UCL Institute of Ophthalmology, London, UK; Moorfields Eye Hospital, London, UK
4
Freiberg J, Welikala RA, Rovelt J, Owen CG, Rudnicka AR, Kolko M, Barman SA. Automated analysis of vessel morphometry in retinal images from a Danish high street optician setting. PLoS One 2023; 18:e0290278. [PMID: 37616264] [PMCID: PMC10449151] [DOI: 10.1371/journal.pone.0290278]
Abstract
PURPOSE To evaluate the test performance of the QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) software in detecting retinal features from retinal images captured by healthcare professionals in a Danish high street optician chain, compared with test performance from other large population studies (i.e., UK Biobank) where retinal images were captured by non-experts. METHODS The dataset FOREVERP (Finding Ophthalmic Risk and Evaluating the Value of Eye exams and their predictive Reliability, Pilot) contains retinal images obtained from a Danish high street optician chain. The QUARTZ algorithm utilizes both image processing and machine learning methods to determine retinal image quality, vessel segmentation, vessel width, vessel classification (arterioles or venules), and optic disc localization. Outcomes were evaluated by metrics including sensitivity, specificity, and accuracy, and compared against human expert ground truths. RESULTS QUARTZ's performance was evaluated on a subset of 3,682 images from the FOREVERP database. 80.55% of the FOREVERP images were labelled as being of adequate quality compared to 71.53% of UK Biobank images, with a vessel segmentation sensitivity of 74.64% and specificity of 98.41% (FOREVERP) compared with a sensitivity of 69.12% and specificity of 98.88% (UK Biobank). The mean (± standard deviation) vessel width of the ground truth was 16.21 (4.73) pixels compared to 17.01 (4.49) pixels predicted by QUARTZ, a difference of -0.8 (1.96) pixels. The differences were stable across a range of vessels. The detection rate for optic disc localisation was similar for the two datasets. CONCLUSION QUARTZ showed high performance when evaluated on the FOREVERP dataset and demonstrated robustness across datasets, providing validity to direct comparisons and pooling of retinal feature measures across data sources.
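The sensitivity/specificity/accuracy figures above come from confusion counts of vessel versus background pixels. A brief sketch with invented counts (chosen only to echo the reported percentages, not taken from QUARTZ or FOREVERP):

```python
# Sketch of sensitivity, specificity and accuracy from confusion counts.
# The counts below are illustrative, not study data.

def sens_spec_acc(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # vessel pixels correctly detected
    specificity = tn / (tn + fp)   # background pixels correctly rejected
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

sens, spec, acc = sens_spec_acc(tp=746, fp=16, fn=254, tn=984)
print(f"{sens:.2%} {spec:.2%} {acc:.2%}")  # 74.60% 98.40% 86.50%
```

Note the trade-off visible in the abstract: FOREVERP gains sensitivity over UK Biobank at a marginal cost in specificity.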
Affiliation(s)
- Josefine Freiberg: Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Roshan A. Welikala: School of Computer Science and Mathematics, Kingston University, Surrey, United Kingdom
- Jens Rovelt: Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Christopher G. Owen: Population Health Research Institute, St. George's, University of London, London, United Kingdom
- Alicja R. Rudnicka: Population Health Research Institute, St. George's, University of London, London, United Kingdom
- Miriam Kolko: Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark; Department of Ophthalmology, Copenhagen University Hospital, Rigshospitalet, Glostrup, Copenhagen, Denmark
- Sarah A. Barman: School of Computer Science and Mathematics, Kingston University, Surrey, United Kingdom
5
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253] [PMCID: PMC10394169] [DOI: 10.1016/j.xcrm.2023.101095]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls the practical value of these systems into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of AI systems, and discusses the strategies that may pave the way to their clinical translation.
Affiliation(s)
- Zhongwen Li: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Lei Wang: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu: Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou: Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao: Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
6
Heitmar R, Link D, Kotliar K, Schmidl D, Klee S. Editorial: Functional assessments of the ocular circulation. Front Med (Lausanne) 2023; 10:1222022. [PMID: 37359007] [PMCID: PMC10285660] [DOI: 10.3389/fmed.2023.1222022]
Affiliation(s)
- Rebekka Heitmar: Centre for Vision Across the Lifespan, School of Applied Sciences, University of Huddersfield, Huddersfield, United Kingdom
- Dietmar Link: Division Optoelectrophysiological Engineering, Department of Computer Science and Automation, Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany
- Konstantin Kotliar: Medical Engineering and Technomathematics, Aachen University of Applied Sciences, Aachen, Germany
- Doreen Schmidl: Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Sascha Klee: Division Optoelectrophysiological Engineering, Department of Computer Science and Automation, Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany; Division Biostatistics and Data Science, Department General Health Studies, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
7
Cirla A, Drigo M, Ballerini L, Trucco E, Barsotti G. Effects of pupil dilation with topical 0.5% tropicamide on retinal vascular parameters assessed by VAMPIRE® software in healthy cats. Res Vet Sci 2023; 160:50-54. [PMID: 37267768] [DOI: 10.1016/j.rvsc.2023.05.008]
Abstract
Our study investigates the effects of mydriasis induced by topical 0.5% tropicamide on retinal vascular parameters evaluated in cats using the Vascular Assessment and Measurement Platform for Images of the Retina (VAMPIRE®) retinal imaging software. Forty client-owned healthy adult cats were included in the study. Topical 0.5% tropicamide was applied to dilate only the right pupil; the left eye served as a control. Before dilation (T0), infrared pupillometry of both pupils was performed and fundus images were taken of both eyes. Right-eye fundus images were then captured 30 min after topical application of tropicamide (T30), when mydriasis was achieved. Retinal vessel widths (3 arteries and 3 veins) were measured with VAMPIRE® in four standard measurement areas (SMA), labelled A, B, C and D, and the average of the 3 vessel widths was used. After normality assessment, the t-test was used to analyse the mean difference in vascular parameters of the left and right eyes at T0 and T30, with significance set at p < 0.05. The two eyes showed no statistical differences in pupil or vascular parameter measurements at T0. At T30, only one artery measurement of the right eye (SMA A, peripapillary area) showed a small but statistically significant mean vasoconstriction of approximately 4%. The results indicate that topical application of 0.5% tropicamide is associated with a small retinal arteriolar vasoconstriction as assessed by VAMPIRE® in cats. However, this change is minimal and should not affect the interpretation of results when VAMPIRE® is used.
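The paired comparison behind the reported ~4% vasoconstriction can be sketched as a mean T0-to-T30 width difference with a paired t statistic. The vessel widths below are invented illustrative values, not study measurements:

```python
# Sketch of a paired (before/after) comparison of vessel widths.
# Widths are made-up pixel values, not data from the study.
from statistics import mean, stdev
from math import sqrt

def paired_t(before, after):
    """Mean paired difference and paired t statistic."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return mean(diffs), t

t0 = [100.0, 104.0, 98.0, 102.0, 101.0]    # widths at baseline
t30 = [96.0, 100.0, 94.5, 97.5, 97.0]      # widths after dilation

d, t = paired_t(t0, t30)
print(round(100 * d / mean(t0), 1))  # percent change, about -4%
```

The t statistic would then be compared against a t distribution with n-1 degrees of freedom to obtain the p value.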
Affiliation(s)
- Alessandro Cirla: Department of Ophthalmology, San Marco Veterinary Clinic and Laboratory, Veggiano, PD, Italy
- Michele Drigo: Department of Animal Medicine, Production and Health, University of Padova, Legnaro, PD, Italy
- Emanuele Trucco: VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom
- Giovanni Barsotti: Department of Veterinary Science, University of Pisa, S. Piero a Grado, PI, Italy
8
Schanner C, Hautala N, Rauscher FG, Falck A. The impact of the image conversion factor and image centration on retinal vessel geometric characteristics. Front Med (Lausanne) 2023; 10:1112652. [PMID: 37007779] [PMCID: PMC10063888] [DOI: 10.3389/fmed.2023.1112652]
Abstract
BACKGROUND This study uses fundus image material from a long-term retinopathy follow-up study to identify problems created by changing imaging modalities or imaging settings (e.g., image centration, resolution, viewing angle, illumination wavelength). Investigating the influence of the image conversion factor and image centration on retinal vessel geometric characteristics (RVGC) offers solutions for longitudinal retinal vessel analysis of data obtained in clinical routine. METHODS RVGC were analyzed in scanned fundus photographs with Singapore-I-Vessel-Assessment using a constant image conversion factor (ICF) and an individual ICF, applying each to macula-centered (MC) and optic disk-centered (ODC) images. The ICF converts pixel measurements into μm for vessel diameter measurements and establishes the size of the measuring zone. A constant ICF is calculated from the widths of all analyzed optic disks and applied to all images of a cohort; an individual ICF, in turn, uses the optic disk diameter of the eye being analyzed. To investigate agreement, the Bland-Altman mean difference was calculated between ODC images analyzed with individual and constant ICF, and between MC and ODC images. RESULTS With the constant ICF (n = 104 eyes of 52 patients), the mean central retinal equivalent was 160.9 ± 17.08 μm for arteries (CRAE) and 208.7 ± 14.7 μm for veins (CRVE). The individual ICFs resulted in a mean CRAE of 163.3 ± 15.6 μm and a mean CRVE of 219.0 ± 22.3 μm. On Bland-Altman analysis, the individual-ICF RVGC were larger, resulting in a positive mean difference for most investigated parameters. The arteriovenous ratio (p = 0.86), simple tortuosity (p = 0.08), and fractal dimension (p = 0.80) agreed well between MC and ODC images, while vessel diameters were significantly smaller in MC images (p < 0.002). CONCLUSION Scanned images can be analyzed using vessel assessment software. The comparison of individual versus constant ICF points to the advantage of an individual ICF. Image settings (ODC vs. MC) showed good agreement.
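The two ICF strategies described above can be sketched as follows. The nominal 1800 μm optic-disc diameter and all pixel values are assumptions for illustration, not values from the study:

```python
# Hedged sketch of constant vs. individual image conversion factors (ICF):
# an ICF turns pixel measurements into micrometres by assuming a physical
# optic-disc diameter. All numbers below are illustrative assumptions.
from statistics import mean

NOMINAL_OD_UM = 1800.0  # assumed physical optic-disc diameter in µm

def individual_icf(od_diameter_px):
    """µm-per-pixel factor from this eye's own optic-disc width."""
    return NOMINAL_OD_UM / od_diameter_px

def constant_icf(all_od_diameters_px):
    """One cohort-wide factor from the mean optic-disc width."""
    return NOMINAL_OD_UM / mean(all_od_diameters_px)

cohort_od_px = [240.0, 250.0, 260.0]   # optic-disc widths across a cohort
vessel_px = 22.0                       # a measured vessel width in pixels

print(round(vessel_px * individual_icf(260.0), 1))      # µm, individual ICF
print(round(vessel_px * constant_icf(cohort_od_px), 1))  # µm, constant ICF
```

The gap between the two outputs for the same pixel measurement illustrates why the choice of ICF shifts CRAE/CRVE values, as the Bland-Altman analysis above reports.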
Affiliation(s)
- Carolin Schanner: Department of Ophthalmology and Medical Research Center, Oulu University Hospital, Oulu, Finland; PEDEGO Research Unit, University of Oulu, Oulu, Finland; Institute for Medical Informatics, Statistics, and Epidemiology, Leipzig University, Leipzig, Germany
- Nina Hautala: Department of Ophthalmology and Medical Research Center, Oulu University Hospital, Oulu, Finland; PEDEGO Research Unit, University of Oulu, Oulu, Finland
- Franziska G. Rauscher: Institute for Medical Informatics, Statistics, and Epidemiology, Leipzig University, Leipzig, Germany
- Aura Falck: Department of Ophthalmology and Medical Research Center, Oulu University Hospital, Oulu, Finland; PEDEGO Research Unit, University of Oulu, Oulu, Finland
- Correspondence: Aura Falck
9
Chákṣu: A glaucoma specific fundus image database. Sci Data 2023; 10:70. [PMID: 36737439] [PMCID: PMC9898274] [DOI: 10.1038/s41597-023-01943-4]
Abstract
We introduce Chákṣu, a retinal fundus image database for the evaluation of computer-assisted glaucoma prescreening techniques. The database contains 1345 color fundus images acquired using three brands of commercially available fundus cameras. Each image is provided with outlines of the optic disc (OD) and optic cup (OC) as smooth closed contours, together with a normal-versus-glaucomatous decision from five expert ophthalmologists. In addition, segmentation ground truths of the OD and OC are provided by fusing the expert annotations using the mean, median, majority, and Simultaneous Truth and Performance Level Estimation (STAPLE) algorithms. The performance indices show that ground-truth agreement with the experts is best with the STAPLE algorithm, followed by majority, median, and mean. The vertical, horizontal, and area cup-to-disc ratios are provided based on the expert annotations. Image-wise glaucoma decisions are also provided based on majority voting among the experts. Chákṣu is the largest Indian-ethnicity-specific fundus image database with expert annotations and should aid the development of artificial intelligence-based glaucoma diagnostics.
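Of the fusion strategies named above, the majority rule is the simplest to sketch (STAPLE itself is an iterative EM procedure and is not reproduced here). A toy pixel-wise majority vote over binary expert masks:

```python
# Illustrative majority-vote fusion of expert segmentation masks.
# Toy data; not taken from the Chákṣu database.

def majority_fusion(masks):
    """Pixel-wise majority vote over equally shaped binary masks."""
    n = len(masks)
    return [
        [1 if sum(m[i][j] for m in masks) * 2 > n else 0
         for j in range(len(masks[0][0]))]
        for i in range(len(masks[0]))
    ]

expert_masks = [
    [[1, 1, 0], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 0, 1]],
]
print(majority_fusion(expert_masks))  # [[1, 1, 0], [0, 1, 1]]
```

Mean and median fusion work the same way per pixel but threshold the averaged or median annotation value instead of counting votes.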
10
Domínguez C, Heras J, Mata E, Pascual V, Royo D, Zapata MÁ. Binary and multi-class automated detection of age-related macular degeneration using convolutional- and transformer-based architectures. Comput Methods Programs Biomed 2023; 229:107302. [PMID: 36528999] [DOI: 10.1016/j.cmpb.2022.107302]
Abstract
BACKGROUND AND OBJECTIVE Age-related macular degeneration (AMD) is an eye disease that occurs when ageing damages the macula, and it is the leading cause of blindness in developed countries. Screening retinal fundus images allows ophthalmologists to detect, diagnose and treat this disease early; however, manual interpretation of images is a time-consuming task. In this paper, we study different deep learning methods to diagnose AMD. METHODS We conducted a thorough study of two families of deep learning models, based on convolutional neural networks (CNNs) and transformer architectures, to automatically diagnose referable/non-referable AMD and to grade AMD severity scales (no AMD, early AMD, intermediate AMD, and advanced AMD). In addition, we analysed several progressive resizing strategies and ensemble methods for convolutional architectures to further improve model performance. RESULTS First, we show that transformer-based architectures obtain considerably worse results than convolutional architectures for diagnosing AMD. Moreover, we built a model for diagnosing referable AMD that yielded a mean F1-score (SD) of 92.60% (0.47), a mean AUROC (SD) of 97.53% (0.40), and a mean weighted kappa coefficient (SD) of 85.28% (0.91); and an ensemble of models for grading AMD severity scales with a mean accuracy (SD) of 82.55% (2.92) and a mean weighted kappa coefficient (SD) of 84.76% (2.45). CONCLUSIONS This work shows that convolutional architectures are more suitable than transformer-based models for classifying and grading AMD from retinal fundus images. Furthermore, convolutional models can be improved by means of progressive resizing strategies and ensemble methods.
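The weighted kappa used above rewards near-misses on an ordinal severity scale. A stdlib-only sketch of the quadratically weighted variant (the abstract does not state which weighting was used, so the quadratic form here is an assumption, and the grades are invented):

```python
# Sketch: quadratically weighted kappa for ordinal grades
# (0 = no AMD ... 3 = advanced AMD). Invented grades, illustrative only.

def weighted_kappa(a, b, n_classes, power=2):
    """1 - (observed weighted disagreement / chance-expected disagreement)."""
    n = len(a)
    w = lambda i, j: (i - j) ** power   # disagreement weight
    observed = sum(w(x, y) for x, y in zip(a, b)) / n
    hist_a = [a.count(k) / n for k in range(n_classes)]
    hist_b = [b.count(k) / n for k in range(n_classes)]
    expected = sum(w(i, j) * hist_a[i] * hist_b[j]
                   for i in range(n_classes) for j in range(n_classes))
    return 1 - observed / expected

grader = [0, 1, 2, 3, 2, 1, 0, 3]
model = [0, 1, 2, 3, 1, 1, 0, 2]
print(round(weighted_kappa(grader, model, 4), 3))  # 0.889
```

Because the weight grows with the squared grade distance, confusing "no AMD" with "advanced AMD" is penalised far more than an adjacent-grade error.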
Affiliation(s)
- César Domínguez: Department of Mathematics and Computer Science, University of La Rioja, Spain
- Jónathan Heras: Department of Mathematics and Computer Science, University of La Rioja, Spain
- Eloy Mata: Department of Mathematics and Computer Science, University of La Rioja, Spain
- Vico Pascual: Department of Mathematics and Computer Science, University of La Rioja, Spain
- Miguel Ángel Zapata: UPRetina, Barcelona, Spain; Hospital Vall Hebron, Passeig Roser 126, Sant Cugat del Vallés, 08195 Barcelona, Spain
11
Xue X, Wang L, Du W, Fujiwara Y, Peng Y. Multiple Preprocessing Hybrid Level Set Model for Optic Disc Segmentation in Fundus Images. Sensors (Basel) 2022; 22:6899. [PMID: 36146249] [PMCID: PMC9506381] [DOI: 10.3390/s22186899]
Abstract
The accurate segmentation of the optic disc (OD) in fundus images is a crucial step for the analysis of many retinal diseases. However, because of problems such as vascular occlusion, parapapillary atrophy (PPA), and low contrast, accurate OD segmentation is still a challenging task. Therefore, this paper proposes a multiple preprocessing hybrid level set model (HLSM) based on area and shape for OD segmentation. The area-based term represents the difference of average pixel values between the inside and outside of a contour, while the shape-based term measures the distance between a prior shape model and the contour. The average intersection over union (IoU) of the proposed method was 0.9275, and the average four-side evaluation (FSE) was 4.6426 on a public dataset with narrow-angle fundus images. The IoU was 0.8179 and the average FSE was 3.5946 on a wide-angle fundus image dataset compiled from a hospital. The results indicate that the proposed multiple preprocessing HLSM is effective in OD segmentation.
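The IoU score used above to assess optic-disc segmentation can be sketched for binary masks given as flat 0/1 lists (toy values, not the paper's data):

```python
# Sketch of intersection over union (IoU) for binary segmentation masks,
# flattened to 0/1 lists. Illustrative toy data.

def iou(pred, truth):
    """|pred AND truth| / |pred OR truth| over foreground pixels."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union

pred = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(iou(pred, truth))  # 0.6
```

An IoU of 0.9275, as reported for the narrow-angle dataset, means predicted and ground-truth disc regions overlap almost completely.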
Affiliation(s)
- Xiaozhong Xue: Information and Human Science, Kyoto Institute of Technology University, Kyoto 6068585, Japan
- Linni Wang: Retina & Neuro-Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin 300084, China
- Weiwei Du: Information and Human Science, Kyoto Institute of Technology University, Kyoto 6068585, Japan
- Yusuke Fujiwara: Information and Human Science, Kyoto Institute of Technology University, Kyoto 6068585, Japan
- Yahui Peng: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
12
Laurik-Feuerstein KL, Sapahia R, Cabrera DeBuc D, Somfai GM. The assessment of fundus image quality labeling reliability among graders with different backgrounds. PLoS One 2022; 17:e0271156. [PMID: 35881576] [PMCID: PMC9321443] [DOI: 10.1371/journal.pone.0271156]
Abstract
PURPOSE For the training of machine learning (ML) algorithms, correctly labeled ground truth data are indispensable. In this pilot study, we assessed the performance of graders with different backgrounds in labeling retinal fundus image quality. METHODS Color fundus photographs were labeled with a Python-based tool using four image-quality categories: excellent (E), good (G), adequate (A) and insufficient for grading (I). We enrolled 8 subjects (4 with and 4 without medical background, groups M and NM, respectively), to whom a tutorial was presented on image quality requirements. We randomly selected 200 images from a pool of 18,145 expert-labeled images (50/E, 50/G, 50/A, 50/I). The grading was timed and the agreement was assessed. An additional grading round was performed with 14 labels for a more objective analysis. RESULTS The median time (interquartile range) for the labeling task with 4 categories was 987.8 sec (418.6) for all graders, and 872.9 sec (621.0) vs. 1019.8 sec (479.5) in the M vs. NM groups, respectively. Cohen's weighted kappa showed moderate agreement (0.564) when using four categories, which increased to substantial (0.637) when using only three by merging the E and G groups. With 14 labels, the weighted kappa values were 0.594 and 0.667 when assigning four or three categories, respectively. CONCLUSION Image grading with a Python-based tool seems to be a simple yet possibly efficient solution for labeling fundus images according to image quality and does not necessarily require a medical background. Such grading can be subject to variability but could still effectively serve the robust identification of images with insufficient quality. This emphasizes the opportunity for the democratization of ML applications among persons with both medical and non-medical backgrounds. However, simplicity of the grading system is key to successful categorization.
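The agreement statistic used above, Cohen's weighted kappa, can be sketched as follows. The quadratic weighting and the 0-3 encoding of the E/G/A/I categories are assumptions for illustration, since the abstract does not state the weighting scheme; the example labels are hypothetical:

```python
def weighted_kappa(rater_a, rater_b, n_cat, weight="quadratic"):
    """Cohen's weighted kappa for two raters over ordinal categories 0..n_cat-1.
    Assumes the raters are not both constant on the same single category."""
    n = len(rater_a)
    # Observed joint distribution of the two raters' labels.
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1 / n
    # Marginal distributions; chance agreement is their outer product.
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    def w(i, j):  # disagreement weight between categories i and j
        return (i - j) ** 2 if weight == "quadratic" else abs(i - j)
    observed = sum(w(i, j) * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    expected = sum(w(i, j) * pa[i] * pb[j] for i in range(n_cat) for j in range(n_cat))
    return 1 - observed / expected

# Hypothetical gradings with E=0, G=1, A=2, I=3:
grader_1 = [0, 0, 1, 2, 3, 3, 1, 2]
grader_2 = [0, 1, 1, 2, 3, 2, 1, 2]
print(round(weighted_kappa(grader_1, grader_2, 4), 3))
```

Merging the E and G categories, as in the three-category analysis above, amounts to mapping labels 0 and 1 to the same index before calling the function.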
Affiliation(s)
- Rishav Sapahia
- Miller School of Medicine, Bascom Palmer Eye Institute, University of Miami, Miami, Florida, United States of America
- Delia Cabrera DeBuc
- Miller School of Medicine, Bascom Palmer Eye Institute, University of Miami, Miami, Florida, United States of America
- Gábor Márk Somfai
- Department of Ophthalmology, Stadtspital Zürich, Zurich, Switzerland
- Spross Research Institute, Zurich, Switzerland
- Department of Ophthalmology, Semmelweis University, Budapest, Hungary

13
Abstract
Topological and geometrical analysis of retinal blood vessels could be a cost-effective way to detect various common diseases. Automated vessel segmentation and vascular tree analysis models require powerful generalization capability in clinical applications. In this work, we constructed a novel benchmark, RETA, with 81 labelled vessel masks, aiming to facilitate retinal vessel analysis. A semi-automated coarse-to-fine workflow was proposed for the vessel annotation task. During database construction, we strove to control inter-annotator and intra-annotator variability by means of multi-stage annotation and label disambiguation using self-developed dedicated software. In addition to binary vessel masks, we obtained other types of annotations, including artery/vein masks, vascular skeletons, bifurcations, trees and abnormalities. Subjective and objective quality validation of the annotated vessel masks demonstrated significantly improved quality over the existing open datasets. Our annotation software is also made publicly available for pixel-level vessel visualization. Researchers can develop vessel segmentation algorithms and evaluate segmentation performance using RETA. Moreover, it may promote the study of cross-modality tubular structure segmentation and analysis.
14
Retinal Glaucoma Public Datasets: What Do We Have and What Is Missing? J Clin Med 2022; 11:jcm11133850. [PMID: 35807135 PMCID: PMC9267177 DOI: 10.3390/jcm11133850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 06/29/2022] [Accepted: 06/30/2022] [Indexed: 11/16/2022] Open
Abstract
Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and for standardized automated methodologies such as those using deep learning techniques. These techniques are used to solve complex problems in medical imaging, particularly the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing protocols for large-scale glaucoma screening in the population, resolving possible diagnostic doubts among specialists and enabling early treatment to delay the onset of blindness. However, the images are obtained with different cameras, in distinct locations, and from various population groups, and are centered on multiple parts of the retina. Further limitations include the small amount of data and the lack of segmentation of the optic papilla and its excavation. This work is intended to offer contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases present images with segmentations of the disc and cup made by experts and a division between training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable, and the quality and presentation of the images are heterogeneous. Moreover, the databases use different criteria for binary classification with and without glaucoma, do not offer simultaneous pictures of the two eyes, and do not contain elements for early diagnosis.
15
DCNN-based prediction model for detection of age-related macular degeneration from color fundus images. Med Biol Eng Comput 2022; 60:1431-1448. [DOI: 10.1007/s11517-022-02542-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Accepted: 02/23/2022] [Indexed: 11/25/2022]
16
Frudd K, Sivaprasad S, Raman R, Krishnakumar S, Revathy YR, Turowski P. Diagnostic circulating biomarkers to detect vision-threatening diabetic retinopathy: Potential screening tool of the future? Acta Ophthalmol 2022; 100:e648-e668. [PMID: 34269526 DOI: 10.1111/aos.14954] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 06/02/2021] [Accepted: 06/17/2021] [Indexed: 12/12/2022]
Abstract
With the increasing prevalence of diabetes in developing and developed countries, the socio-economic burden of diabetic retinopathy (DR), the leading complication of diabetes, is growing. DR is currently one of the leading causes of blindness in working-age adults worldwide. Robust methodologies exist to detect and monitor DR; however, these rely on specialist imaging techniques and qualified practitioners. This makes detecting and monitoring DR expensive and time-consuming, which is particularly problematic in developing countries, where many patients are remote and have little contact with specialist medical centres. DR is largely asymptomatic until late in the pathology. Therefore, early identification and stratification of vision-threatening DR (VTDR) are highly desirable and would ameliorate the global impact of this disease. A simple, reliable and more cost-effective test would greatly assist in decreasing the burden of DR around the world. Here, we evaluate and review data on circulating protein biomarkers that have been verified in the context of DR. We also discuss the challenges and developments necessary to translate these promising data into clinically useful assays to detect VTDR, and their potential integration into simple point-of-care testing devices.
Affiliation(s)
- Karen Frudd
- Institute of Ophthalmology, University College London, London, UK
- Sobha Sivaprasad
- Institute of Ophthalmology, University College London, London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Rajiv Raman
- Vision Research Foundation, Sankara Nethralaya, Chennai, Tamil Nadu, India
- Patric Turowski
- Institute of Ophthalmology, University College London, London, UK

17
A Deep Learning Ensemble Method to Visual Acuity Measurement Using Fundus Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12063190] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Visual acuity (VA) measures the ability to distinguish the shapes and details of objects at a given distance and reflects the spatial resolution of the visual system. Vision is a basic health indicator closely related to a person's quality of life, and VA is among the first basic tests performed when an eye disease develops. VA is usually measured with a Snellen chart or E-chart from a specific distance. In some cases, however, such as unconscious patients or conditions such as dementia, it can be impossible to measure VA using such traditional chart-based methodologies. This paper provides a machine learning-based VA measurement methodology that determines VA from fundus images alone. In particular, the VA levels, conventionally divided into 11 levels, are grouped into four classes, and three machine learning models (one SVM and two CNNs) are combined into an ensemble that predicts the corresponding VA class from a fundus image. In a performance evaluation on 4,000 randomly selected fundus images, the ensemble achieved an average accuracy of 82.4% over the four classes, with per-class accuracies of 88.5%, 58.8%, 88%, and 94.3% for Classes 1 to 4, respectively. To the best of our knowledge, this is the first paper on VA measurement from fundus images using deep learning.
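The abstract does not specify how the SVM and CNN outputs are fused; a minimal sketch of one common choice, plain majority voting over per-model class predictions, is shown below. The model names and predictions are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-model class predictions for one image by majority vote;
    ties are broken in favour of the lowest class index (an arbitrary choice)."""
    counts = Counter(predictions)
    return max(counts.items(), key=lambda kv: (kv[1], -kv[0]))[0]

# Hypothetical per-image predictions from one SVM and two CNNs
# for the four VA classes (here encoded 1-4):
per_model = {"svm": 2, "cnn_a": 3, "cnn_b": 3}
print(majority_vote(list(per_model.values())))  # two of three models say class 3
```

Other fusion rules, such as averaging class probabilities, are equally plausible here; the choice is not determined by the abstract.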
18
Mordi IR, Trucco E, Syed MG, MacGillivray T, Nar A, Huang Y, George G, Hogg S, Radha V, Prathiba V, Anjana RM, Mohan V, Palmer CNA, Pearson ER, Lang CC, Doney ASF. Prediction of Major Adverse Cardiovascular Events From Retinal, Clinical, and Genomic Data in Individuals With Type 2 Diabetes: A Population Cohort Study. Diabetes Care 2022; 45:710-716. [PMID: 35043139 DOI: 10.2337/dc21-1124] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 11/20/2021] [Indexed: 02/03/2023]
Abstract
OBJECTIVE Improved identification of individuals with type 2 diabetes at high cardiovascular (CV) risk could help in selection of newer CV risk-reducing therapies. The aim of this study was to determine whether retinal vascular parameters, derived from retinal screening photographs, alone and in combination with a genome-wide polygenic risk score for coronary heart disease (CHD PRS) would have independent prognostic value over traditional CV risk assessment in patients without prior CV disease. RESEARCH DESIGN AND METHODS Patients in the Genetics of Diabetes Audit and Research Tayside Scotland (GoDARTS) study were linked to retinal photographs, prescriptions, and outcomes. Retinal photographs were analyzed using VAMPIRE (Vascular Assessment and Measurement Platform for Images of the Retina) software, a semiautomated artificial intelligence platform, to compute arterial and venous fractal dimension, tortuosity, and diameter. CHD PRS was derived from previously published data. Multivariable Cox regression was used to evaluate the association between retinal vascular parameters and major adverse CV events (MACE) at 10 years compared with the pooled cohort equations (PCE) risk score. RESULTS Among 5,152 individuals included in the study, a MACE occurred in 1,017 individuals. Reduced arterial fractal dimension and diameter and increased venous tortuosity each independently predicted MACE. A risk score combining these parameters significantly predicted MACE after adjustment for age, sex, PCE, and the CHD PRS (hazard ratio 1.11 per SD increase, 95% CI 1.04-1.18, P = 0.002) with similar accuracy to PCE (area under the curve [AUC] 0.663 vs. 0.658, P = 0.33). A model incorporating retinal parameters and PRS improved MACE prediction compared with PCE (AUC 0.686 vs. 0.658, P < 0.001). CONCLUSIONS Retinal parameters alone and in combination with genome-wide CHD PRS have independent and incremental prognostic value compared with traditional CV risk assessment in type 2 diabetes.
Affiliation(s)
- Ify R Mordi
- Division of Molecular and Clinical Medicine, School of Medicine, University of Dundee, Dundee, U.K
- Emanuele Trucco
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, U.K
- Mohammad Ghouse Syed
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, U.K
- Tom MacGillivray
- VAMPIRE Project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, U.K
- Adi Nar
- Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, U.K
- Yu Huang
- Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, U.K
- Gittu George
- Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, U.K
- Stephen Hogg
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, U.K
- Venkatesan Radha
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Chennai, India
- Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Chennai, India
- Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Chennai, India
- Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Chennai, India
- Colin N A Palmer
- Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, U.K
- Ewan R Pearson
- Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, U.K
- Chim C Lang
- Division of Molecular and Clinical Medicine, School of Medicine, University of Dundee, Dundee, U.K
- Alex S F Doney
- Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, U.K

19
Liu P, Tran CT, Kong B, Fang R. CADA: Multi-scale Collaborative Adversarial Domain Adaptation for unsupervised optic disc and cup segmentation. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.10.076] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
20
Enache AE, Dietrich UM, Drury O, Trucco E, MacGillivray T, Syme H, Elliott J, Chang YM. Changes in retinal vascular diameters in senior and geriatric cats in association with variation in systemic blood pressure. J Feline Med Surg 2021; 23:1129-1139. [PMID: 33739170 DOI: 10.1177/1098612x21997629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVES Early diagnosis of arterial hypertension is essential to prevent target organ damage. In humans, retinal arteriolar narrowing predicts hypertension. This blinded prospective observational study investigated the retinal vessel diameters in senior and geriatric cats of varying systolic blood pressure (SBP) status and evaluated retinal vascular changes in hypertensive cats after treatment. METHODS Cats with a median age of 14 years (range 9.1-22 years) were categorised into five groups: group 1, healthy normotensive (SBP <140 mmHg; n = 40) cats; group 2, pre-hypertensive (SBP 140-160 mmHg; n = 14) cats; group 3, cats with chronic kidney disease (CKD) and normotensive (n = 26); group 4, cats with CKD and pre-hypertensive (n = 13); and group 5, hypertensive cats (SBP >160 mmHg, n = 15). Colour fundus images (Optibrand ClearView) were assessed for hypertensive lesions. Retinal vascular diameters and bifurcation angles were annotated and calculated using the Vascular Assessment and Measurement Platform for Images of the Retina annotation tool (VAMPIRE-AT). When available, measurements were obtained at 3 and 6 months after amlodipine besylate treatment. RESULTS Ten hypertensive cats had retinal lesions, most commonly intraretinal haemorrhages and retinal exudates. Arteriole and venule diameters decreased significantly with increasing age (-0.17 ± 0.05 pixels/year [P = 0.0004]; -0.19 ± 0.05 pixels/year). Adjusted means ± SEM for arteriole and venule diameter (pixels) were 6.3 ± 0.2 and 8.9 ± 0.2 (group 1); 7.6 ± 0.3 and 10.1 ± 0.4 (group 2); 6.9 ± 0.2 and 9.5 ± 0.3 (group 3); 7.4 ± 0.3 and 10.0 ± 0.4 (group 4); and 7.0 ± 0.3 and 9.8 ± 0.4 (group 5). Group 1 arteriole and venule diameters were significantly lower than those of groups 2 and 4. Group 2 arteriole bifurcation angle was significantly narrower than those of groups 1 and 3. Post-treatment, vessel diameters decreased significantly at 3 and 6 months in seven hypertensive cats. 
CONCLUSIONS AND RELEVANCE Increased age was associated with reduced vascular diameters. Longitudinal studies are required to assess if vessel diameters are a risk indicator for hypertension in cats.
Affiliation(s)
- Andra-Elena Enache
- North Downs Specialist Referrals, 3 & 4 The Brewerstreet Dairy Business Park, Brewer Street, Bletchingley, UK
- Oscar Drury
- Department of Comparative Biomedical Sciences, Royal Veterinary College, London, UK
- Emanuele Trucco
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Tom MacGillivray
- VAMPIRE Project, Centre for Clinical Brain Sciences, University of Edinburgh, UK
- Harriet Syme
- Department of Clinical Sciences and Services, Queen Mother Hospital for Animals, Royal Veterinary College, London, UK
- Jonathan Elliott
- Department of Comparative Biomedical Sciences, Royal Veterinary College, London, UK
- Yu-Mei Chang
- Department of Comparative Biomedical Sciences, Royal Veterinary College, London, UK

21
Li Z, Jiang J, Qiang W, Guo L, Liu X, Weng H, Wu S, Zheng Q, Chen W. Comparison of deep learning systems and cornea specialists in detecting corneal diseases from low-quality images. iScience 2021; 24:103317. [PMID: 34778732 PMCID: PMC8577078 DOI: 10.1016/j.isci.2021.103317] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 10/11/2021] [Accepted: 10/15/2021] [Indexed: 01/01/2023] Open
Abstract
The performance of deep learning in disease detection from high-quality clinical images is identical to and even greater than that of human doctors. However, in low-quality images, deep learning performs poorly. Whether human doctors also perform poorly on low-quality images is unknown. Here, we compared the performance of deep learning systems with that of cornea specialists in detecting corneal diseases from low-quality slit lamp images. The results showed that the cornea specialists performed better than our previously established deep learning system (PEDLS) trained on only high-quality images. The performance of the system trained on both high- and low-quality images was superior to that of the PEDLS while inferior to that of a senior corneal specialist. This study highlights that cornea specialists perform better in low-quality images than the system trained on high-quality images. Adding low-quality images with sufficient diagnostic certainty to the training set can reduce this performance gap.
- Deep learning performs poorly in low-quality images for detecting corneal diseases
- Cornea specialists perform better than the PEDLS in low-quality images
- The performance of the NDLS is better than that of the PEDLS in low-quality images
- Adding low-quality images to the training set can improve the system's performance
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Liufei Guo
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China

22
Peroni A, Paviotti A, Campigotto M, Abegão Pinto L, Cutolo CA, Shi Y, Cobb C, Gong J, Patel S, Gillan S, Tatham A, Trucco E. On Clinical Agreement on the Visibility and Extent of Anatomical Layers in Digital Gonio Photographs. Transl Vis Sci Technol 2021; 10:1. [PMID: 34468695 PMCID: PMC8419881 DOI: 10.1167/tvst.10.11.1] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 08/10/2021] [Indexed: 11/24/2022] Open
Abstract
Purpose To quantitatively evaluate the inter-annotator variability of clinicians tracing the contours of anatomical layers of the iridocorneal angle on digital gonio photographs, thus providing a baseline for the validation of automated analysis algorithms. Methods Using a software annotation tool on a common set of 20 images, five experienced ophthalmologists highlighted the contours of five anatomical layers of interest: iris root (IR), ciliary body band (CBB), scleral spur (SS), trabecular meshwork (TM), and cornea (C). Inter-annotator variability was assessed by (1) comparing the number of times ophthalmologists delineated each layer in the dataset; (2) quantifying how the consensus area for each layer (i.e., the intersection area of observers' delineations) varied with the consensus threshold; and (3) calculating agreement among annotators using average per-layer precision, sensitivity, and Dice score. Results The SS showed the largest difference in annotation frequency (31%) and the minimum overall agreement in terms of consensus size (∼28% of the labeled pixels). The average annotator's per-layer statistics showed consistent patterns, with lower agreement on the CBB and SS (average Dice score ranges of 0.61-0.7 and 0.73-0.78, respectively) and better agreement on the IR, TM, and C (average Dice score ranges of 0.97-0.98, 0.84-0.9, and 0.93-0.96, respectively). Conclusions There was considerable inter-annotator variation in identifying contours of some anatomical layers in digital gonio photographs. Our pilot indicates that agreement was best on IR, TM, and C but poorer for CBB and SS. Translational Relevance This study provides a comprehensive description of inter-annotator agreement on digital gonio photographs segmentation as a baseline for validating deep learning models for automated gonioscopy.
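The agreement statistics used here (Dice score, precision, sensitivity) are standard overlap measures between two annotated regions. A small illustrative sketch over toy pixel sets, not the study's actual tooling:

```python
def overlap_stats(pred, ref):
    """Dice score, precision and sensitivity of one annotation against a reference,
    with both regions given as sets of (row, col) pixels."""
    pred, ref = set(pred), set(ref)
    inter = len(pred & ref)
    dice = 2 * inter / (len(pred) + len(ref))
    precision = inter / len(pred)   # fraction of annotated pixels that are correct
    sensitivity = inter / len(ref)  # fraction of reference pixels recovered
    return dice, precision, sensitivity

# Two annotators outline a layer; 3 of each annotator's 4 pixels coincide.
ann1 = {(0, 0), (0, 1), (0, 2), (0, 3)}
ann2 = {(0, 1), (0, 2), (0, 3), (0, 4)}
dice, prec, sens = overlap_stats(ann1, ann2)
print(dice, prec, sens)  # 0.75 0.75 0.75
```

For inter-annotator comparison, each annotator's delineation can in turn serve as the reference, and the per-layer values averaged, which is consistent with the per-layer ranges reported above.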
Affiliation(s)
- Andrea Peroni
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Yue Shi
- Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, CA, USA
- Caroline Cobb
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Jacintha Gong
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Sirjhun Patel
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Stewart Gillan
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Andrew Tatham
- Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
- Emanuele Trucco
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK

23
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. [PMID: 34383719 DOI: 10.1097/apo.0000000000000404] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored for ONH disorders. Finally, we propose suggestions for such models in the future.
Affiliation(s)
- Raymond P Najjar
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Zhiqun Tang
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Dan Milea
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Ophthalmology Department, Singapore National Eye Centre, Singapore
- Rigshospitalet, Copenhagen University, Denmark

24
Hajiyavand AM, Graham MJ, Dearn KD. Diameter Estimation of Fallopian Tubes Using Visual Sensing. BIOSENSORS-BASEL 2021; 11:bios11040100. [PMID: 33915708 PMCID: PMC8066605 DOI: 10.3390/bios11040100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 03/10/2021] [Accepted: 03/17/2021] [Indexed: 11/16/2022]
Abstract
Calculating an accurate diameter of arbitrary vessel-like shapes from 2D images is of great use in various applications within the medical and biomedical fields. Understanding the changes in the morphological dimensioning of biological vessels provides a better understanding of their properties and functionality. Estimating the diameter of such tubes is very challenging, as the dimensions change continuously along their length. This paper describes a novel algorithm that estimates the diameter of biological tubes with a continuously changing cross-section. The algorithm, evaluated using various controlled images, provides automated diameter estimation with better accuracy than manual measurement and gives precise information about the diametrical changes along the tube. It is demonstrated that the automated algorithm provides more accurate results in a much shorter time. This methodology has the potential to speed up diagnostic procedures in a wide range of medical fields.
25
Mookiah MRK, Hogg S, MacGillivray T, Trucco E. On the quantitative effects of compression of retinal fundus images on morphometric vascular measurements in VAMPIRE. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 202:105969. [PMID: 33631639 DOI: 10.1016/j.cmpb.2021.105969] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Accepted: 01/30/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES This paper reports a quantitative analysis of the effects of Joint Photographic Experts Group (JPEG) image compression of retinal fundus camera images on automatic vessel segmentation and on morphometric vascular measurements derived from it, including vessel width, tortuosity and fractal dimension. METHODS Measurements are computed with the Vascular Assessment and Measurement Platform for Images of the Retina (VAMPIRE), a specialized software application adopted in many international studies on retinal biomarkers. For reproducibility, we use three public archives of fundus images: Digital Retinal Images for Vessel Extraction (DRIVE), the Automated Retinal Image Analyzer (ARIA), and High-Resolution Fundus (HRF). We generate compressed versions of the original images at a range of representative compression levels. RESULTS We compare the resulting vessel segmentations with ground truth maps, and the morphological measurements of the vascular network with those obtained from the original (uncompressed) images. We assess segmentation quality with sensitivity, specificity, accuracy, area under the curve and the Dice coefficient. We assess the agreement between VAMPIRE measurements from compressed and uncompressed images with correlation, intra-class correlation and Bland-Altman analysis. CONCLUSIONS Results suggest that VAMPIRE width-related measurements (central retinal artery equivalent (CRAE), central retinal vein equivalent (CRVE), arteriolar-venular width ratio (AVR)), the fractal dimension (FD) and arteriolar tortuosity show excellent agreement with those from the original images, remaining substantially stable even under strong loss of quality (20% of the original), suggesting the suitability of VAMPIRE in association studies with compressed images.
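Bland-Altman analysis, one of the agreement measures used above, reduces to a bias (mean difference) and 95% limits of agreement over paired measurements. A self-contained sketch with hypothetical width values, not data from the study:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Bias (mean difference) and 95% limits of agreement between paired
    measurements, e.g. a width measure from uncompressed vs. compressed images."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical CRAE values (pixels) from original and JPEG-compressed images:
original   = [152.1, 148.7, 150.3, 149.9, 151.5]
compressed = [151.8, 148.9, 150.1, 149.5, 151.9]
bias, lower, upper = bland_altman(original, compressed)
print(round(bias, 3), round(lower, 3), round(upper, 3))
```

Agreement is considered good when the bias is near zero and the limits of agreement are narrow relative to the clinically meaningful range of the measurement.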
26
Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268 DOI: 10.1016/j.jfo.2020.08.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 08/24/2020] [Indexed: 12/13/2022]
Abstract
Diabetic retinopathy (DR) is a disease facilitated by the rapid spread of diabetes worldwide. DR can blind diabetic individuals. Early detection of DR is essential to restoring vision and providing timely treatment. DR can be detected manually by an ophthalmologist by examining retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhages, exudates, and/or microaneurysms. This is a time-consuming, costly, and challenging task. An automated system can easily perform this function using artificial intelligence, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes the current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhage, and microaneurysms. In addition, the authors point out future directions for overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China.
| | - G Sun
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
| | - S Mazhar
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
| |
|
27
|
Muhiddin HS, Panggalo I, Ichsan AM, Budu, Trucco E, Ellis J. Retinal vascular caliber changes after laser photocoagulation in diabetic retinopathy. MEDICAL JOURNAL OF INDONESIA 2020. [DOI: 10.13181/mji.oa.203806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022] Open
Abstract
BACKGROUND Diabetic retinopathy causes vascular dilatation driven by hypoxia, whereas improved oxygen tension leads to narrowing of the retinal vessels. Given that laser photocoagulation aims to increase oxygen tension in the retina, we hypothesized that a narrowing of vessel caliber could be demonstrated after treatment. This study aimed to assess the changes in retinal vessel caliber before and after laser photocoagulation in diabetic retinopathy.
METHODS This was a prospective cohort study on the treatment of diabetic retinopathy by laser photocoagulation, conducted at Universitas Hasanuddin Hospital, Makassar, Indonesia between November 2017 and April 2018. Retinal vascular caliber changes were analyzed before and 6–8 weeks after photocoagulation in 30 diabetic eyes. Central retinal arteriolar equivalent (CRAE) and central retinal venular equivalent (CRVE) were measured using the manual annotation tool of the vascular assessment and measurement platform for images of the retina (VAMPIRE).
RESULTS A significant decrease in CRVE was observed after laser photocoagulation (p<0.001), but CRAE was not reduced significantly (p = 0.067). No difference was recorded between CRVE and CRAE post-laser photocoagulation (p = 0.14), implying a reduction in vein caliber toward normal in the treated eyes.
CONCLUSIONS Laser photocoagulation decreases the CRVE in diabetic retinopathy despite the absence of changes in the grade of diabetic retinopathy.
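The abstract does not state which summary formulas were used, but CRAE and CRVE are conventionally computed with the revised Knudtson formulas, which iteratively pair the widest with the narrowest of the six largest vessel widths using w = c·sqrt(w1² + w2²) (c ≈ 0.88 for arterioles, 0.95 for venules). A minimal sketch under that assumption — the standard textbook procedure, not necessarily VAMPIRE's exact implementation:

```python
import math

def summarize_caliber(widths, branch_coef):
    """Iteratively pair the widest remaining width with the narrowest,
    combining each pair with w = c * sqrt(w1^2 + w2^2), until one value remains."""
    ws = sorted(widths, reverse=True)[:6]   # six largest vessels, by convention
    while len(ws) > 1:
        paired = []
        while len(ws) > 1:
            w1, w2 = ws.pop(0), ws.pop(-1)  # widest paired with narrowest
            paired.append(branch_coef * math.sqrt(w1 ** 2 + w2 ** 2))
        if ws:                              # odd count: carry middle vessel forward
            paired.append(ws.pop())
        ws = sorted(paired, reverse=True)
    return ws[0]

def crae(arteriole_widths):
    return summarize_caliber(arteriole_widths, 0.88)  # revised Knudtson arteriolar coefficient

def crve(venule_widths):
    return summarize_caliber(venule_widths, 0.95)     # revised Knudtson venular coefficient
```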
|
28
|
Toliušis R, Kurasova O, Bernatavičienė J. Semantic Segmentation of Eye Fundus Images Using Convolutional Neural Networks. INFORMACIJOS MOKSLAI 2020. [DOI: 10.15388/im.2020.90.53] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The article reviews the problems of eye fundus analysis and the semantic segmentation algorithms used to distinguish the retinal vessels and the optic disc. Various diseases, such as glaucoma, hypertension, diabetic retinopathy, macular degeneration, etc., can be diagnosed from changes and anomalies of the vessels and the optic disc. Convolutional neural networks, especially the U-Net architecture, are well suited for semantic segmentation. A number of U-Net modifications that deliver excellent performance results have recently been developed.
|
29
|
Li Z, Jiang J, Zhou H, Zheng Q, Liu X, Chen K, Weng H, Chen W. Development of a deep learning-based image eligibility verification system for detecting and filtering out ineligible fundus images: A multicentre study. Int J Med Inform 2020; 147:104363. [PMID: 33388480 DOI: 10.1016/j.ijmedinf.2020.104363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Revised: 12/07/2020] [Accepted: 12/08/2020] [Indexed: 12/26/2022]
Abstract
BACKGROUND Recent advances in artificial intelligence (AI) have shown great promise in detecting some diseases based on medical images. Most studies developed AI diagnostic systems only using eligible images. However, in real-world settings, ineligible images (including poor-quality and poor-location images) that can compromise downstream analysis are inevitable, leading to uncertainty about the performance of these AI systems. This study aims to develop a deep learning-based image eligibility verification system (DLIEVS) for detecting and filtering out ineligible fundus images. METHODS A total of 18,031 fundus images (9,188 subjects) collected from 4 clinical centres were used to develop and evaluate the DLIEVS for detecting eligible, poor-location, and poor-quality fundus images. Four deep learning algorithms (AlexNet, DenseNet121, Inception V3, and ResNet50) were leveraged to train models to obtain the best model for the DLIEVS. The performance of the DLIEVS was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard determined by retina experts. RESULTS In the internal test dataset, the best algorithm (DenseNet121) achieved AUCs of 1.000, 0.999, and 1.000 for the classification of eligible, poor-location, and poor-quality images, respectively. In the external test datasets, the AUCs of the best algorithm (DenseNet121) for detecting eligible, poor-location, and poor-quality images ranged from 0.999 to 1.000, 0.997 to 1.000, and 0.997 to 0.999, respectively. CONCLUSIONS Our DLIEVS can accurately discriminate poor-quality and poor-location images from eligible images. This system has the potential to serve as a pre-screening technique to filter out ineligible images obtained from real-world settings, ensuring that only eligible images are passed to subsequent image-based AI diagnostic analyses.
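The per-class AUCs reported above can be obtained by treating each eligibility class one-vs-rest and computing the binary AUC from the Mann-Whitney rank statistic (the probability that a random positive is scored above a random negative). A hedged sketch with illustrative names, not taken from the DLIEVS code:

```python
import numpy as np

def auc_binary(labels, scores):
    """Binary AUC via the Mann-Whitney U statistic; ties count half."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def auc_one_vs_rest(y_true, prob_matrix, classes):
    """Per-class one-vs-rest AUCs, e.g. for eligible / poor-location / poor-quality."""
    y_true = np.asarray(y_true)
    return {c: auc_binary(y_true == c, prob_matrix[:, i])
            for i, c in enumerate(classes)}
```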
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
| | - Jiewei Jiang
- School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
| | - Heding Zhou
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
| | - Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Kuan Chen
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
| | - Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
| |
|
30
|
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700 DOI: 10.1016/j.media.2020.101905] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 11/10/2020] [Accepted: 11/11/2020] [Indexed: 12/20/2022]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning have been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification for fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at-a-glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Affiliation(s)
| | - Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
| | - Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
| | - Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
| | - Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
| | - Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
| |
|
31
|
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhao L, Wu X, Dongye M, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Deep learning from "passive feeding" to "selective eating" of real-world data. NPJ Digit Med 2020; 3:143. [PMID: 33145439 PMCID: PMC7603327 DOI: 10.1038/s41746-020-00350-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 09/24/2020] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield fundus (UWF) images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality ("passive feeding"), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning-based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system ("selective eating"). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. Our work demonstrates that "selective eating" of real-world data is necessary and needs to be considered in the development of image-based AI systems.
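The sensitivity and specificity figures quoted above are simple confusion-matrix ratios. A minimal illustrative sketch, assuming "poor quality" is coded as the positive class (an assumption for the example, not stated by the paper):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for a binary poor-quality detector, with poor quality coded as 1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```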
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, 518001 Shenzhen, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136 USA
| | - Chuan Chen
- Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL 33136 USA
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Ping Zhang
- Xudong Ophthalmic Hospital, 015000 Inner Mongolia, China
| | - Yu Han
- EYE and ENT Hospital of Fudan University, 200031 Shanghai, China
| | - Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Centre for Precision Medicine, Sun Yat-sen University, 510060 Guangzhou, China
| |
|
32
|
Bisneto TRV, de Carvalho Filho AO, Magalhães DMV. Generative adversarial network and texture features applied to automatic glaucoma detection. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106165] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
33
|
Lim G, Bellemo V, Xie Y, Lee XQ, Yip MYT, Ting DSW. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: a review. EYE AND VISION (LONDON, ENGLAND) 2020; 7:21. [PMID: 32313813 PMCID: PMC7155252 DOI: 10.1186/s40662-020-00182-7] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Accepted: 03/10/2020] [Indexed: 12/12/2022]
Abstract
BACKGROUND Effective screening is a desirable method for the early detection and successful treatment of diabetic retinopathy, and fundus photography is currently the dominant medium for retinal imaging due to its convenience and accessibility. Manual screening using fundus photographs has, however, involved considerable costs for patients, clinicians and national health systems, which has limited its application, particularly in less-developed countries. The advent of artificial intelligence, and in particular deep learning (DL) techniques, has raised the possibility of widespread automated screening. MAIN TEXT In this review, we first briefly survey major published advances in retinal analysis using artificial intelligence. We take care to separately describe standard multiple-field fundus photography and the newer modalities of ultra-wide field photography and smartphone-based photography. Finally, we consider several machine learning concepts that have been particularly relevant to the domain and illustrate their usage with extant works. CONCLUSIONS In ophthalmology, deep learning tools for diabetic retinopathy have demonstrated clinically acceptable diagnostic performance on colour retinal fundus images. Artificial intelligence models are among the most promising solutions for tackling the burden of diabetic retinopathy management in a comprehensive manner. However, future research is crucial to assess potential clinical deployment, evaluate the cost-effectiveness of different DL systems in clinical practice and improve clinical acceptance.
Affiliation(s)
- Gilbert Lim
- School of Computing, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Valentina Bellemo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Road Avenue, Singapore, 168751 Singapore
| | - Yuchen Xie
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Xin Q. Lee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Michelle Y. T. Yip
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Road Avenue, Singapore, 168751 Singapore
| | - Daniel S. W. Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Road Avenue, Singapore, 168751 Singapore
- Vitreo-Retinal Service, Singapore National Eye Center, 11 Third Hospital Road Avenue, Singapore, 168751 Singapore
- Artificial Intelligence in Ophthalmology, Singapore Eye Research Institute, 11 Third Hospital Road Avenue, Singapore, 168751 Singapore
| |
|
34
|
Palanisamy G, Shankar NB, Ponnusamy P, Gopi VP. A hybrid feature preservation technique based on luminosity and edge based contrast enhancement in color fundus images. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.02.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
35
|
Vujosevic S, Aldington SJ, Silva P, Hernández C, Scanlon P, Peto T, Simó R. Screening for diabetic retinopathy: new perspectives and challenges. Lancet Diabetes Endocrinol 2020; 8:337-347. [PMID: 32113513 DOI: 10.1016/s2213-8587(19)30411-5] [Citation(s) in RCA: 226] [Impact Index Per Article: 56.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 11/18/2019] [Accepted: 11/18/2019] [Indexed: 12/15/2022]
Abstract
Although the prevalence of all stages of diabetic retinopathy has been declining since 1980 in populations with improved diabetes control, the crude prevalence of visual impairment and blindness caused by diabetic retinopathy worldwide increased between 1990 and 2015, largely because of the increasing prevalence of type 2 diabetes, particularly in low-income and middle-income countries. Screening for diabetic retinopathy is essential to detect referable cases that need timely full ophthalmic examination and treatment to avoid permanent visual loss. In the past few years, personalised screening intervals that take into account several risk factors have been proposed, with good cost-effectiveness ratios. However, resources for nationwide screening programmes are scarce in many countries. New technologies, such as scanning confocal ophthalmology with ultrawide field imaging and handheld mobile devices, teleophthalmology for remote grading, and artificial intelligence for automated detection and classification of diabetic retinopathy, are changing screening strategies and improving cost-effectiveness. Additionally, emerging evidence suggests that retinal imaging could be useful for identifying individuals at risk of cardiovascular disease or cognitive impairment, which could expand the role of diabetic retinopathy screening beyond the prevention of sight-threatening disease.
Affiliation(s)
- Stela Vujosevic
- Eye Unit, University Hospital Maggiore della Carità, Novara, Italy
| | - Stephen J Aldington
- Department of Ophthalmology, Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK
| | - Paolo Silva
- Beetham Eye Institute, Joslin Diabetes Centre, Harvard Medical School, Boston, MA, USA; Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
| | - Cristina Hernández
- Diabetes and Metabolism Research Unit, Vall d'Hebron Research Institute, Barcelona, Spain; Department of Medicine and Endocrinology, Autonomous University of Barcelona, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas, Instituto de Salud Carlos III, Madrid, Spain
| | - Peter Scanlon
- Department of Ophthalmology, Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, UK
| | - Tunde Peto
- Centre for Public Health, Queen's University Belfast, Belfast, UK
| | - Rafael Simó
- Diabetes and Metabolism Research Unit, Vall d'Hebron Research Institute, Barcelona, Spain; Department of Medicine and Endocrinology, Autonomous University of Barcelona, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas, Instituto de Salud Carlos III, Madrid, Spain.
| |
|
36
|
Yu Q, Wang F, Zhou L, Yang J, Liu K, Xu X. Quantification of Diabetic Retinopathy Lesions in DME Patients With Intravitreal Conbercept Treatment Using Deep Learning. Ophthalmic Surg Lasers Imaging Retina 2020; 51:95-100. [PMID: 32084282 DOI: 10.3928/23258160-20200129-05] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Accepted: 09/10/2019] [Indexed: 11/20/2022]
Abstract
BACKGROUND AND OBJECTIVES To quantitatively evaluate diabetic retinopathy (DR) lesions using the authors' validated machine learning algorithms and to provide physicians with an automated and precise method to follow the progression of DR and the outcome of interventions. PATIENTS AND METHODS Retrospective analyses were conducted of 3,496 color fundus photography images from 19 patients with clinically significant diabetic macular edema receiving conbercept treatment. The modified seven-field fundus images were obtained at baseline and at the third, sixth, and twelfth month visits, whereas the modified two-field fundus images were obtained at the other monthly visits. The areas of intraretinal hemorrhage and hard exudate lesions were traced by the authors' validated algorithms. RESULTS The mean central foveal thickness was 459.9 μm ± 127.5 μm at baseline and 316.5 μm ± 53.0 μm at the twelfth month visit, a decrease of 143.4 μm compared with baseline on optical coherence tomography. The mean total area of intraretinal hemorrhage in the study eye across seven fields was 5.656 ± 1.176 mm2 at baseline, 2.438 ± 0.976 mm2 at the third month, 2.901 ± 0.521 mm2 at the sixth month, and 2.122 ± 0.582 mm2 at the end of the study; the area of intraretinal hemorrhage was thus reduced by 62.49% from baseline to the end of the study (P < .0001). The mean total area of hard exudates in the study eye was 2.549 ± 0.776 mm2 at baseline, 2.233 ± 0.576 mm2 at the third month, 2.710 ± 0.621 mm2 at the sixth month, and 1.473 ± 0.564 mm2 at the end of the study, a decrease of 41.1% at the twelfth month (P < .0001) compared with the first visit. A significant decrease was observed in the area of intraretinal hemorrhage during conbercept treatment; the hard exudate area fluctuated during the loading phase and subsequently decreased by the twelfth month.
CONCLUSIONS The present study quantitatively analyzed the change in area of intraretinal hemorrhage and hard exudate lesions during the course of conbercept treatment. The automated system shows promise as a precise and objective method for monitoring the progression of DR and the outcomes of interventions in clinical settings. [Ophthalmic Surg Lasers Imaging Retina. 2020;51:95-100.].
|
37
|
Orlando JI, Fu H, Barbosa Breda J, van Keer K, Bathula DR, Diaz-Pinto A, Fang R, Heng PA, Kim J, Lee J, Lee J, Li X, Liu P, Lu S, Murugesan B, Naranjo V, Phaye SSR, Shankaranarayana SM, Sikka A, Son J, van den Hengel A, Wang S, Wu J, Wu Z, Xu G, Xu Y, Yin P, Li F, Zhang X, Xu Y, Bogunović H. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal 2020; 59:101570. [DOI: 10.1016/j.media.2019.101570] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 07/26/2019] [Accepted: 10/01/2019] [Indexed: 01/01/2023]
|
38
|
Schaekermann M, Hammel N, Terry M, Ali TK, Liu Y, Basham B, Campana B, Chen W, Ji X, Krause J, Corrado GS, Peng L, Webster DR, Law E, Sayres R. Remote Tool-Based Adjudication for Grading Diabetic Retinopathy. Transl Vis Sci Technol 2019; 8:40. [PMID: 31867141 PMCID: PMC6922270 DOI: 10.1167/tvst.8.6.40] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2019] [Accepted: 10/12/2019] [Indexed: 01/01/2023] Open
Abstract
Purpose To present and evaluate a remote, tool-based system and structured grading rubric for adjudicating image-based diabetic retinopathy (DR) grades. Methods We compared three different procedures for adjudicating DR severity assessments among retina specialist panels, including (1) in-person adjudication based on a previously described procedure (Baseline), (2) remote, tool-based adjudication for assessing DR severity alone (TA), and (3) remote, tool-based adjudication using a feature-based rubric (TA-F). We developed a system allowing graders to review images remotely and asynchronously. For both TA and TA-F approaches, images with disagreement were reviewed by all graders in a round-robin fashion until disagreements were resolved. Five panels of three retina specialists each adjudicated a set of 499 retinal fundus images (1 panel using Baseline, 2 using TA, and 2 using TA-F adjudication). Reliability was measured as grade agreement among the panels using Cohen's quadratically weighted kappa. Efficiency was measured as the number of rounds needed to reach a consensus for tool-based adjudication. Results The grades from remote, tool-based adjudication showed high agreement with the Baseline procedure, with Cohen's kappa scores of 0.948 and 0.943 for the two TA panels, and 0.921 and 0.963 for the two TA-F panels. Cases adjudicated using TA-F were resolved in fewer rounds compared with TA (P < 0.001; standard permutation test). Conclusions Remote, tool-based adjudication presents a flexible and reliable alternative to in-person adjudication for DR diagnosis. Feature-based rubrics can help accelerate consensus for tool-based adjudication of DR without compromising label quality. Translational Relevance This approach can generate reference standards to validate automated methods, and resolve ambiguous diagnoses by integrating into existing telemedical workflows.
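Cohen's quadratically weighted kappa, used above to measure inter-panel grade agreement, compares the observed grade matrix against the expectation under independent marginals, with disagreements penalized by the squared grade distance. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_classes):
    """Cohen's kappa with quadratic weights for ordinal grades 0..n_classes-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    O = np.zeros((n_classes, n_classes))          # observed joint grade matrix
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    E = np.outer(np.bincount(r1, minlength=n_classes),
                 np.bincount(r2, minlength=n_classes)).astype(float)
    E /= E.sum()                                  # expected under chance agreement
    i, j = np.indices((n_classes, n_classes))
    W = (i - j) ** 2 / (n_classes - 1) ** 2       # quadratic disagreement weights
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement yields 1.0; systematic reversal of an ordinal scale yields negative values.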
Affiliation(s)
- Mike Schaekermann
- Google AI Healthcare, Google LLC, Mountain View, CA, USA.,University of Waterloo, Waterloo, ON, Canada
| | - Naama Hammel
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Michael Terry
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Tayyeba K Ali
- Work done at Google Health via Advanced Clinical (corporate headquarters: Deerfield, IL, USA)
| | - Yun Liu
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Brian Basham
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Bilson Campana
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - William Chen
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Xiang Ji
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | | | - Greg S Corrado
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Lily Peng
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Dale R Webster
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| | - Edith Law
- University of Waterloo, Waterloo, ON, Canada
| | - Rory Sayres
- Google AI Healthcare, Google LLC, Mountain View, CA, USA
| |
|
39
|
Zhang H, Niu K, Xiong Y, Yang W, He Z, Song H. Automatic cataract grading methods based on deep learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 182:104978. [PMID: 31450174 DOI: 10.1016/j.cmpb.2019.07.006] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 06/20/2019] [Accepted: 07/04/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The shortage of ophthalmologists in rural areas of China leaves many cataract patients without timely diagnosis and effective treatment. We develop an algorithm and platform to automatically diagnose and grade cataract based on fundus images of patients. This method can help government programmes assist poor populations more accurately. METHODS The novel six-level cataract grading method proposed in this paper focuses on multi-feature fusion based on stacking. We extract two kinds of features that can effectively distinguish different levels of cataract: high-level features extracted from a residual network (ResNet18), and texture features extracted with the gray level co-occurrence matrix (GLCM). A framework is then proposed to automatically grade cataract from the extracted features. In the framework, two support vector machine (SVM) classifiers are used as base-learners to obtain probability outputs for each fundus image, and a fully connected neural network (FCNN) consisting of two fully connected layers is used as the meta-learner to output the final classification result. RESULTS The accuracy of six-level grading achieved by the proposed method is 92.66% on average, and reaches 93.33% at its highest. The proposed method achieves 94.75% accuracy on four-level cataract grading, which is at least 1.75% higher than that of existing methods. CONCLUSIONS Experiments with the six-level cataract classification algorithm show that the multi-feature stacking approach proposed in this paper achieves higher grading performance and lower volatility than grading with high-level features or texture features alone. We also apply our algorithm to a four-level cataract grading system, where it shows higher accuracy than previous reports.
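The stacking scheme described in this abstract — two SVM base-learners whose per-grade probability outputs are concatenated and fed to a small fully connected meta-learner — can be sketched as a data-flow skeleton. Everything below is a hypothetical stand-in (random probabilities, supplied weights) that illustrates only the shapes and the fusion step, not the trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two base-learners' probability outputs:
# an SVM over ResNet18 deep features, and an SVM over GLCM texture features.
def svm_deep_proba(images):
    return rng.dirichlet(np.ones(6), size=len(images))     # (n, 6): one per grade

def svm_texture_proba(images):
    return rng.dirichlet(np.ones(6), size=len(images))

def stack_features(images):
    """Meta-learner input: the concatenated base-learner probability vectors."""
    return np.hstack([svm_deep_proba(images), svm_texture_proba(images)])

def meta_predict(stacked, W, b):
    """A two-layer fully connected meta-learner (weights supplied for the sketch)."""
    h = np.maximum(stacked @ W[0] + b[0], 0.0)             # hidden ReLU layer
    logits = h @ W[1] + b[1]
    e = np.exp(logits - logits.max(axis=1, keepdims=True)) # stable softmax
    return np.argmax(e / e.sum(axis=1, keepdims=True), axis=1)
```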
Affiliation(s)
- Hongyan Zhang
- Beijing Tongren Eye Center, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and visual Sciences, National Engineering Research Center for Ophthalmology, Beijing, China
| | - Kai Niu
- Key Laboratory of Universal Wireless Communations, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Yanmin Xiong
- Key Laboratory of Universal Wireless Communations, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Weihua Yang
- The First People's Hospital of Huzhou, Huzhou, Zhejiang, China
| | - ZhiQiang He
- Key Laboratory of Universal Wireless Communations, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China; College of Big Data and Information Engineering, Guizhou University, Guizhou, China.
| | - Hongxin Song
- Beijing Tongren Eye Center, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and visual Sciences, National Engineering Research Center for Ophthalmology, Beijing, China.
| |
Collapse
40
Toliušis R, Kurasova O, Bernatavičienė J. Semantic Segmentation of Eye Fundus Images Using Convolutional Neural Networks. INFORMACIJOS MOKSLAI 2019. [DOI: 10.15388/im.2019.85.20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
This article reviews the problems of eye fundus analysis and the semantic segmentation algorithms used to distinguish the retinal vessels and the optic disc. Various diseases, such as glaucoma, hypertension, diabetic retinopathy and macular degeneration, can be diagnosed through changes and anomalies of the vessels and the optic disc. Convolutional neural networks, especially the U-Net architecture, are well suited for semantic segmentation, and a number of recently developed U-Net modifications deliver excellent performance results.
41
Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320 DOI: 10.1016/j.media.2019.101561] [Citation(s) in RCA: 69] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Revised: 09/09/2019] [Accepted: 09/16/2019] [Indexed: 02/07/2023]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such a large-scale screening effort. The recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state-of-the-art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI - 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge make it possible to test the generalizability of algorithms, which sets it apart from existing challenges. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entering the challenge. This paper outlines the challenge, its organization, the dataset used, evaluation methods, and results of top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
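Segmentation entries in challenges like this are typically scored with overlap metrics between predicted and ground-truth masks. The snippet below is a generic sketch of two such metrics (Dice coefficient and IoU), not the official IDRiD scoring code; the toy 8×8 masks are purely illustrative.

```python
# Dice and IoU overlap metrics for binary segmentation masks (generic sketch,
# not the official challenge evaluation code).
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray):
    """Dice coefficient and IoU for two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    dice = 2.0 * inter / total if total else 1.0   # empty vs empty -> perfect
    iou = inter / union if union else 1.0
    return float(dice), float(iou)

# Two 4x4 squares offset by one pixel: 16 px each, 3x3 = 9 px overlap.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
print(dice_and_iou(pred, truth))  # Dice 0.5625, IoU = 9/23 ≈ 0.3913
```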
Affiliation(s)
- Prasanna Porwal
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA.
- Samiksha Pachade
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Lihong Liu
- Ping An Technology (Shenzhen) Co., Ltd, China
- Xinhui Liu
- Ping An Technology (Shenzhen) Co., Ltd, China
- TianBo Wu
- Ping An Technology (Shenzhen) Co., Ltd, China
- Jing Xiao
- Ping An Technology (Shenzhen) Co., Ltd, China
- Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Linsheng He
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Yoon Ho Choi
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Yeong Chan Lee
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Sang-Hyuk Jung
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, USA
- Xiaodan Sui
- School of Information Science and Engineering, Shandong Normal University, China
- Junyan Wu
- Cleerly Inc., New York, United States
- Ting Zhou
- University at Buffalo, New York, United States
- Janos Toth
- University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Agnes Baran
- University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
- Li Cheng
- Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electric and Computer Engineering, University of Alberta, Canada
- Qinhao Chu
- School of Computing, National University of Singapore, Singapore
- Pengcheng Li
- School of Computing, National University of Singapore, Singapore
- Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd., China
- Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yaxin Shen
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ling Dai
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Tânia Melo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Balazs Harangi
- University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
- Andras Hajdu
- University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, China
- Ana Maria Mendonça
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, USA
- Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Luca Giancardo
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Fabrice Mériaudeau
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France
42
McGrory S, Ballerini L, Doubal FN, Staals J, Allerhand M, Valdes-Hernandez MDC, Wang X, MacGillivray T, Doney ASF, Dhillon B, Starr JM, Bastin ME, Trucco E, Deary IJ, Wardlaw JM. Retinal microvasculature and cerebral small vessel disease in the Lothian Birth Cohort 1936 and Mild Stroke Study. Sci Rep 2019; 9:6320. [PMID: 31004095 PMCID: PMC6474900 DOI: 10.1038/s41598-019-42534-x] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Accepted: 03/28/2019] [Indexed: 01/06/2023] Open
Abstract
Research has suggested that the retinal vasculature may act as a surrogate marker for diseased cerebral vessels. Retinal vascular parameters were measured using Vessel Assessment and Measurement Platform for Images of the Retina (VAMPIRE) software in two cohorts: (i) community-dwelling older subjects of the Lothian Birth Cohort 1936 (n = 603); and (ii) patients with recent minor ischaemic stroke of the Mild Stroke Study (n = 155). Imaging markers of small vessel disease (SVD) (white matter hyperintensities [WMH] on structural MRI, visual scores and volume; perivascular spaces; lacunes and microbleeds), and vascular risk measures were assessed in both cohorts. We assessed associations between retinal and brain measurements using structural equation modelling and regression analysis. In the Lothian Birth Cohort 1936, arteriolar fractal dimension accounted for 4% of the variance in WMH load. In the Mild Stroke Study, lower arteriolar fractal dimension was associated with deep WMH scores (odds ratio [OR] 0.53; 95% CI, 0.32–0.87). No other retinal measure was associated with SVD. Reduced fractal dimension, a measure of vascular complexity, is related to SVD imaging features in older people. The results provide some support for the use of the retinal vasculature in the study of brain microvascular disease.
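The "arteriolar fractal dimension" used in studies like this one is commonly estimated by box counting on a binary vessel map. The sketch below is a generic box-counting estimator, not the VAMPIRE implementation; the filled-square and single-line test masks are illustrative inputs whose expected dimensions (2 and 1) are known analytically.

```python
# Box-counting estimate of fractal dimension for a square binary mask
# (generic sketch of the measure, not the VAMPIRE software's algorithm).
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate fractal dimension by counting occupied boxes at several scales."""
    n = mask.shape[0]
    sizes = [s for s in (1, 2, 4, 8, 16, 32) if s < n]
    counts = []
    for s in sizes:
        m = n // s * s                       # crop to a multiple of the box size
        blocks = mask[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes containing vessel
    # Dimension = slope of log(count) vs log(1/box_size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# A filled square is 2-dimensional; a single straight line is 1-dimensional.
filled = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(filled), 2))  # → 2.0
```

Real vessel maps yield values between these extremes; sparser, less branched vasculature gives a lower dimension, which is the quantity associated with WMH load above.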
Affiliation(s)
- Sarah McGrory
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK; Department of Psychology, University of Edinburgh, Edinburgh, UK.
- Lucia Ballerini
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Fergus N Doubal
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Julie Staals
- Department of Neurology, Maastricht University Medical Center, Maastricht, The Netherlands; Cardiovascular Research Institute Maastricht (CARIM), Maastricht University, Maastricht, The Netherlands
- Mike Allerhand
- Department of Psychology, University of Edinburgh, Edinburgh, UK; Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, UK
- Xin Wang
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Tom MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Alex S F Doney
- Division of Cardiovascular and Diabetes Medicine, Medical Research Institute, Ninewells Hospital and Medical School, Dundee, UK
- Baljean Dhillon
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- John M Starr
- Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, UK; Alzheimer Scotland Dementia Research Centre, University of Edinburgh, Edinburgh, UK
- Mark E Bastin
- Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, UK; Scottish Imaging Network, a Platform for Scientific Excellence (SINAPSE) Collaboration, Edinburgh, UK
- Emanuele Trucco
- VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Ian J Deary
- Department of Psychology, University of Edinburgh, Edinburgh, UK; Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, UK
- Joanna M Wardlaw
- Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, UK; Scottish Imaging Network, a Platform for Scientific Excellence (SINAPSE) Collaboration, Edinburgh, UK; UK Dementia Research Institute at the University of Edinburgh, Chancellor's Building, Edinburgh, UK
43
Cirla A, Drigo M, Ballerini L, Trucco E, Barsotti G. VAMPIRE® fundus image analysis algorithms: Validation and diagnostic relevance in hypertensive cats. Vet Ophthalmol 2019; 22:819-827. [DOI: 10.1111/vop.12657] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Revised: 01/29/2019] [Accepted: 01/31/2019] [Indexed: 10/27/2022]
Affiliation(s)
- Alessandro Cirla
- Department of Ophthalmology, San Marco Veterinary Clinic and Laboratory, Veggiano, Italy
- Department of Veterinary Science, University of Pisa, Pisa, Italy
- Michele Drigo
- Department of Animal Medicine, Production and Health, University of Padova, Legnaro, Italy
44
Pead E, Megaw R, Cameron J, Fleming A, Dhillon B, Trucco E, MacGillivray T. Automated detection of age-related macular degeneration in color fundus photography: a systematic review. Surv Ophthalmol 2019; 64:498-511. [PMID: 30772363 PMCID: PMC6598673 DOI: 10.1016/j.survophthal.2019.02.003] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Revised: 01/31/2019] [Accepted: 02/04/2019] [Indexed: 12/13/2022]
Abstract
The rising prevalence of age-related eye diseases, particularly age-related macular degeneration, places an ever-increasing burden on health care providers. As new treatments emerge, it is necessary to develop methods for reliably assessing patients' disease status and stratifying risk of progression. The presence of drusen in the retina represents a key early feature whose size, number, and morphology are thought to correlate significantly with the risk of progression to sight-threatening age-related macular degeneration. Manual labeling of drusen on color fundus photographs is labor intensive, and this is where automated computerized detection would appreciably aid patient care. We review and evaluate current artificial intelligence methods and developments for the automated detection of drusen in the context of age-related macular degeneration.
Affiliation(s)
- Emma Pead
- VAMPIRE Project, Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, Scotland.
- Roly Megaw
- Princess Alexandra Eye Pavilion, Edinburgh, Scotland
- James Cameron
- MRC Human Genetics Unit, The University of Edinburgh, Edinburgh, Scotland
- Alan Fleming
- Optos plc, Queensferry House, Carnegie Campus, Dunfermline
- Emanuele Trucco
- VAMPIRE Project, Computing (School of Science and Engineering), University of Dundee, UK
- Thomas MacGillivray
- VAMPIRE Project, Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, Scotland
45
Automated geographic atrophy segmentation for SD-OCT images based on two-stage learning model. Comput Biol Med 2019; 105:102-111. [DOI: 10.1016/j.compbiomed.2018.12.013] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Revised: 12/27/2018] [Accepted: 12/27/2018] [Indexed: 01/19/2023]
46
Corliss BA, Mathews C, Doty R, Rohde G, Peirce SM. Methods to label, image, and analyze the complex structural architectures of microvascular networks. Microcirculation 2019; 26:e12520. [PMID: 30548558 PMCID: PMC6561846 DOI: 10.1111/micc.12520] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2018] [Revised: 10/31/2018] [Accepted: 11/26/2018] [Indexed: 12/30/2022]
Abstract
Microvascular networks play key roles in oxygen transport and nutrient delivery to meet the varied and dynamic metabolic needs of different tissues throughout the body, and their spatial architectures of interconnected blood vessel segments are highly complex. Moreover, functional adaptations of the microcirculation, enabled by structural adaptations in microvascular network architecture, are required for development and wound healing, and are often invoked in disease conditions, including the top eight causes of death in the United States. Effective characterization of microvascular network architectures is not only limited by the available techniques to visualize microvessels but also reliant on the available quantitative metrics that accurately delineate between spatial patterns in altered networks. In this review, we survey models used for studying the microvasculature, methods to label and image microvessels, and the metrics and software packages used to quantify microvascular networks. These programs have provided researchers with invaluable tools, yet we estimate that they have collectively attained low adoption rates, possibly due to limitations with basic validation, segmentation performance, and nonstandard sets of quantification metrics. To address these existing constraints, we discuss opportunities to improve the effectiveness, rigor, and reproducibility of microvascular network quantification to better serve the current and future needs of microvascular research.
Affiliation(s)
- Bruce A Corliss
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia
- Corbin Mathews
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia
- Richard Doty
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia
- Gustavo Rohde
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia
- Shayn M Peirce
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia
47
Trucco E, McNeil A, McGrory S, Ballerini L, Mookiah MRK, Hogg S, Doney A, MacGillivray T. Validation. COMPUTATIONAL RETINAL IMAGE ANALYSIS 2019:157-170. [DOI: 10.1016/b978-0-08-102816-2.00009-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
48
Owen CG, Rudnicka AR, Welikala RA, Fraz MM, Barman SA, Luben R, Hayat SA, Khaw KT, Strachan DP, Whincup PH, Foster PJ. Retinal Vasculometry Associations with Cardiometabolic Risk Factors in the European Prospective Investigation of Cancer-Norfolk Study. Ophthalmology 2019; 126:96-106. [PMID: 30075201 PMCID: PMC6302796 DOI: 10.1016/j.ophtha.2018.07.022] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Revised: 07/16/2018] [Accepted: 07/27/2018] [Indexed: 02/02/2023] Open
Abstract
PURPOSE To examine associations between retinal vessel morphometry and cardiometabolic risk factors in older British men and women. DESIGN Retinal imaging examination as part of the European Prospective Investigation into Cancer-Norfolk Eye Study. PARTICIPANTS Retinal imaging and clinical assessments were carried out in 7411 participants. Retinal images were analyzed using a fully automated validated computerized system that provides novel measures of vessel morphometry. METHODS Associations between cardiometabolic risk factors, chronic disease, and retinal markers were analyzed using multilevel linear regression, adjusted for age, gender, and within-person clustering, to provide percentage differences in tortuosity and absolute differences in width. MAIN OUTCOME MEASURES Retinal arteriolar and venular tortuosity and width. RESULTS In all, 279 802 arterioles and 285 791 venules from 5947 participants (mean age, 67.6 years; standard deviation [SD], 7.6 years; 57% female) were analyzed. Increased venular tortuosity was associated with higher body mass index (BMI; 2.5%; 95% confidence interval [CI], 1.7%-3.3% per 5 kg/m2), hemoglobin A1c (HbA1c) level (2.2%; 95% CI, 1.0%-3.5% per 1%), and prevalent type 2 diabetes (6.5%; 95% CI, 2.8%-10.4%); wider venules were associated with older age (2.6 μm; 95% CI, 2.2-2.9 μm per decade), higher triglyceride levels (0.6 μm; 95% CI, 0.3-0.9 μm per 1 mmol/l), BMI (0.7 μm; 95% CI, 0.4-1.0 per 5 kg/m2), HbA1c level (0.4 μm; 95% CI, -0.1 to 0.9 per 1%), and being a current smoker (3.0 μm; 95% CI, 1.7-4.3 μm); smoking also was associated with wider arterioles (2.1 μm; 95% CI, 1.3-2.9 μm). Thinner venules were associated with high-density lipoprotein (HDL) (1.4 μm; 95% CI, 0.7-2.2 per 1 mmol/l).
Arteriolar tortuosity increased with age (5.4%; 95% CI, 3.8%-7.1% per decade), higher systolic blood pressure (1.2%; 95% CI, 0.5%-1.9% per 10 mmHg), in females (3.8%; 95% CI, 1.4%-6.4%), and in those with prevalent stroke (8.3%; 95% CI, -0.6% to 18%); no association was observed with prevalent myocardial infarction. Narrower arterioles were associated with age (0.8 μm; 95% CI, 0.6-1.0 μm per decade), higher systolic blood pressure (0.5 μm; 95% CI, 0.4-0.6 μm per 10 mmHg), total cholesterol level (0.2 μm; 95% CI, 0.0-0.3 μm per 1 mmol/l), and HDL (1.2 μm; 95% CI, 0.7-1.6 μm per 1 mmol/l). CONCLUSIONS Metabolic risk factors showed a graded association with both tortuosity and width of retinal venules, even among people without clinical diabetes, whereas atherosclerotic risk factors correlated more closely with arteriolar width, even excluding those with hypertension and cardiovascular disease. These noninvasive microvasculature measures should be evaluated further as predictors of future cardiometabolic disease.
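Associations of the form "narrower arterioles by 0.5 μm per 10 mmHg systolic blood pressure" come from regression slopes rescaled to a convenient unit of the risk factor. The toy sketch below illustrates that rescaling on simulated data; the numbers are illustrative, not the study's, and a simple least-squares fit stands in for the study's multilevel model.

```python
# Toy illustration of reporting a regression slope "per 10 mmHg" — simulated
# data with an assumed true effect of -0.05 μm per mmHg (not the study's model,
# which was multilevel and adjusted for age, gender, and clustering).
import numpy as np

rng = np.random.default_rng(1)
sbp = rng.uniform(100, 180, 2000)                    # systolic BP, mmHg
width = 90 - 0.05 * sbp + rng.normal(0, 2, 2000)     # arteriolar width, μm

slope, intercept = np.polyfit(sbp, width, 1)
print("width difference per 10 mmHg: %.2f μm" % (slope * 10))
```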
Affiliation(s)
- Christopher G Owen
- Population Health Research Institute, St. George's, University of London, London, United Kingdom.
- Alicja R Rudnicka
- Population Health Research Institute, St. George's, University of London, London, United Kingdom
- Roshan A Welikala
- Faculty of Science, Engineering and Computing, Kingston University, Kingston-upon-Thames, Surrey, United Kingdom
- M Moazam Fraz
- School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan
- Sarah A Barman
- Faculty of Science, Engineering and Computing, Kingston University, Kingston-upon-Thames, Surrey, United Kingdom
- Robert Luben
- Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, United Kingdom
- Shabina A Hayat
- Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, United Kingdom
- Kay-Tee Khaw
- Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, United Kingdom
- David P Strachan
- Population Health Research Institute, St. George's, University of London, London, United Kingdom
- Peter H Whincup
- Population Health Research Institute, St. George's, University of London, London, United Kingdom
- Paul J Foster
- Integrative Epidemiology Research Group, UCL Institute of Ophthalmology, London, United Kingdom; NIHR Biomedical Research Centre at Moorfields Eye Hospital and UCL Institute of Ophthalmology, London, United Kingdom
49
Porwal P, Pachade S, Kokare M, Giancardo L, Mériaudeau F. Retinal image analysis for disease screening through local tetra patterns. Comput Biol Med 2018; 102:200-210. [DOI: 10.1016/j.compbiomed.2018.09.028] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 09/11/2018] [Accepted: 09/26/2018] [Indexed: 11/27/2022]
50
Cheng J, Li Z, Gu Z, Fu H, Wong DWK, Liu J. Structure-Preserving Guided Retinal Image Filtering and Its Application for Optic Disk Analysis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2536-2546. [PMID: 29994522 DOI: 10.1109/tmi.2018.2838550] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Retinal fundus photographs have been used in the diagnosis of many ocular diseases such as glaucoma, pathological myopia, age-related macular degeneration, and diabetic retinopathy. With the development of computer science, computer-aided diagnosis has been developed to process and analyze retinal images automatically. One of the challenges in this analysis is that the quality of the retinal image is often degraded. For example, a cataract in the human lens attenuates the retinal image, much as a cloudy camera lens reduces the quality of a photograph. This often obscures details in retinal images and poses challenges for retinal image processing and analysis tasks. In this paper, we approximate the degradation of retinal images as a combination of human-lens attenuation and scattering. A novel structure-preserving guided retinal image filtering (SGRIF) method is then proposed to restore images based on this attenuation and scattering model. The proposed SGRIF consists of a global structure transferring step and a global edge-preserving smoothing step. Our results show that the proposed SGRIF method improves the contrast of retinal images, as measured by the histogram flatness measure, histogram spread, and variability of local luminosity. In addition, we further explored the benefits of SGRIF for subsequent retinal image processing and analysis tasks. In two applications, deep learning-based optic cup segmentation and sparse learning-based cup-to-disc ratio (CDR) computation, our results show that more accurate optic cup segmentation and CDR measurements can be achieved from images processed by SGRIF.
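The contrast metrics named in this abstract can be computed directly from an image's grey-level histogram. The sketch below uses one common set of definitions (flatness as the geometric-to-arithmetic mean ratio of histogram counts, spread as the interquartile range of the cumulative histogram); the paper's exact formulas may differ, and the synthetic low/high-contrast images are illustrative only.

```python
# Histogram-based contrast metrics (one common formulation; the paper's exact
# definitions may differ). Images are assumed to be arrays of values in [0, 1].
import numpy as np

def histogram_flatness(img: np.ndarray, bins: int = 256) -> float:
    """Geometric mean / arithmetic mean of histogram counts (higher = flatter)."""
    h, _ = np.histogram(img, bins=bins, range=(0, 1))
    h = h[h > 0].astype(float)               # geometric mean needs positive counts
    return float(np.exp(np.mean(np.log(h))) / np.mean(h))

def histogram_spread(img: np.ndarray, bins: int = 256) -> float:
    """Interquartile range of the cumulative histogram over the grey-level range."""
    h, edges = np.histogram(img, bins=bins, range=(0, 1))
    cdf = np.cumsum(h) / h.sum()
    q1 = edges[np.searchsorted(cdf, 0.25)]
    q3 = edges[np.searchsorted(cdf, 0.75)]
    return float((q3 - q1) / (edges[-1] - edges[0]))

rng = np.random.default_rng(0)
low_contrast = 0.5 + 0.05 * rng.standard_normal((64, 64)).clip(-3, 3)
high_contrast = rng.uniform(0, 1, (64, 64))
print(histogram_spread(low_contrast) < histogram_spread(high_contrast))  # → True
```

A restoration method that improves contrast, as SGRIF is reported to, should increase both metrics on the filtered image relative to the degraded input.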