1. Syed MG, Trucco E, Mookiah MRK, Lang CC, McCrimmon RJ, Palmer CNA, Pearson ER, Doney ASF, Mordi IR. Deep-learning prediction of cardiovascular outcomes from routine retinal images in individuals with type 2 diabetes. Cardiovasc Diabetol 2025;24:3. [PMID: 39748380; PMCID: PMC11697721; DOI: 10.1186/s12933-024-02564-w]
Abstract
BACKGROUND: Prior studies have demonstrated an association between retinal vascular features and cardiovascular disease (CVD); however, most studies have evaluated only a few simple parameters at a time. Our aim was to determine whether a deep-learning artificial intelligence (AI) model could be used to predict CVD outcomes from routinely obtained diabetic retinal screening photographs and to compare its performance with a traditional clinical CVD risk score.

METHODS: We included 6127 individuals with type 2 diabetes without myocardial infarction or stroke prior to study entry. The cohort was divided into training (70%), validation (10%) and testing (20%) cohorts. Clinical 10-year CVD risk was calculated using the pooled cohort equation (PCE) risk score. A polygenic risk score (PRS) for coronary heart disease was also obtained. Retinal images were analysed using an EfficientNet-B2 network to predict 10-year CVD risk. The primary outcome was time to first major adverse cardiovascular event (MACE), including CV death, myocardial infarction or stroke.

RESULTS: 1241 individuals were included in the test cohort (mean PCE 10-year CVD risk 35%). There was a strong correlation between retina-predicted CVD risk and the PCE risk score (r = 0.66) but not the polygenic risk score (r = 0.05). There were 288 MACE events. Higher retina-predicted risk was significantly associated with increased 10-year risk of MACE (HR 1.05 per 1% increase; 95% CI 1.04-1.06, p < 0.001) and remained so after adjustment for the PCE and polygenic risk score (HR 1.03; 95% CI 1.02-1.04, p < 0.001). The retinal risk score had similar performance to the PCE (both AUC 0.697), and when combined with the PCE and polygenic risk score it had significantly improved performance compared with the PCE alone (AUC 0.728). An increase in retina-predicted risk within 3 years was associated with a subsequent increase in the likelihood of MACE.

CONCLUSIONS: A deep-learning AI model could accurately predict MACE from routine retinal screening photographs with performance comparable to traditional clinical risk assessment in a diabetic cohort. Combining the AI-derived retinal risk prediction with a coronary heart disease polygenic risk score improved risk prediction. AI retinal assessment might allow a one-stop CVD risk assessment at routine retinal screening.
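The model described above is an EfficientNet-B2 convolutional network repurposed to output a continuous 10-year CVD risk from a fundus photograph. The paper's actual training pipeline, preprocessing and loss are not reproduced here; as a minimal sketch only, assuming a standard torchvision backbone, such a regression head could be set up as follows (layer sizes and dropout are illustrative assumptions, not the authors' code).

```python
import torch
import torch.nn as nn
from torchvision import models

class RetinalRiskNet(nn.Module):
    """EfficientNet-B2 backbone with a single-output regression head that
    maps a fundus photograph to a predicted 10-year CVD risk (%)."""

    def __init__(self):
        super().__init__()
        # weights=None keeps the sketch offline; ImageNet-pretrained weights
        # could be loaded instead for transfer learning.
        backbone = models.efficientnet_b2(weights=None)
        in_features = backbone.classifier[1].in_features  # 1408 for B2
        # Replace the ImageNet classifier with a 1-unit regression head.
        backbone.classifier = nn.Sequential(
            nn.Dropout(p=0.3),
            nn.Linear(in_features, 1),
        )
        self.backbone = backbone

    def forward(self, x):
        # x: batch of fundus images, (N, 3, 288, 288) at the default B2 input size
        return self.backbone(x).squeeze(-1)

model = RetinalRiskNet()
dummy = torch.randn(2, 3, 288, 288)
print(model(dummy).shape)  # torch.Size([2])
```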
Affiliation(s)
- Mohammad Ghouse Syed: VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Emanuele Trucco: VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Muthu R K Mookiah: VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Chim C Lang: Division of Cardiovascular Research, School of Medicine, University of Dundee, Dundee, DD1 9SY, UK; Tuanku Muhriz Royal Chair, National University of Malaysia, Bangi, Malaysia
- Rory J McCrimmon: Division of Systems Medicine, School of Medicine, University of Dundee, Dundee, UK
- Colin N A Palmer: Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, UK
- Ewan R Pearson: Division of Population Health and Genomics, School of Medicine, University of Dundee, Dundee, UK
- Alex S F Doney: Division of Cardiovascular Research, School of Medicine, University of Dundee, Dundee, DD1 9SY, UK
- Ify R Mordi: Division of Cardiovascular Research, School of Medicine, University of Dundee, Dundee, DD1 9SY, UK
2. Fatima N, Afrakhteh S, Iacca G, Demi L. Automatic Segmentation of 2-D Echocardiography Ultrasound Images by Means of Generative Adversarial Network. IEEE Trans Ultrason Ferroelectr Freq Control 2024;71:1552-1564. [PMID: 38656835; DOI: 10.1109/tuffc.2024.3393026]
Abstract
Automated cardiac segmentation from 2-D echocardiographic images is a crucial step toward improving clinical diagnosis. Anatomical heterogeneity and inherent noise, however, present technical challenges and lower segmentation accuracy. The objective of this study is to propose a method for the automatic segmentation of the ventricular endocardium, the myocardium, and the left atrium (LA), in order to accurately determine clinical indices. Specifically, we suggest using the recently introduced pixel-to-pixel generative adversarial network (Pix2Pix GAN) model for accurate segmentation, built from a PatchGAN backbone for the discriminator and a UNET for the generator. The resulting model produces precisely segmented images because of UNET's capability for precise segmentation and PatchGAN's capability for fine-grained discrimination. For the experimental validation, we use the cardiac acquisitions for multistructure ultrasound segmentation (CAMUS) dataset, which consists of echocardiographic images from 500 patients in two-chamber (2CH) and four-chamber (4CH) views at the end-diastolic (ED) and end-systolic (ES) phases, and we follow the same train-test splits as state-of-the-art studies on this dataset. Our results demonstrate that the proposed generative adversarial network (GAN)-based technique improves segmentation performance for clinical and geometrical parameters compared with state-of-the-art methods. More precisely, across the ED and ES phases, the mean Dice values for the left ventricular endocardium reached 0.961 and 0.930 for 2CH, and 0.959 and 0.950 for 4CH, respectively. Furthermore, the average ejection fraction (EF) correlation and mean absolute error (MAE) obtained were 0.95 and 3.2 mL for 2CH, and 0.98 and 2.1 mL for 4CH, outperforming the state-of-the-art results.
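The abstract names a Pix2Pix-style GAN with a UNET generator and a PatchGAN discriminator, evaluated with Dice overlap. As a rough sketch only under those assumptions (channel counts, layer widths and the Dice helper are illustrative, not taken from the paper), the discriminator and metric might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class PatchGANDiscriminator(nn.Module):
    """70x70-style PatchGAN: classifies overlapping patches of the
    (ultrasound image, segmentation map) pair as real or fake, the
    discriminator role described for the Pix2Pix GAN above."""

    def __init__(self, in_channels=2):  # assumed: 1-channel image + 1-channel mask
        super().__init__()

        def block(c_in, c_out, stride, norm=True):
            layers = [nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(in_channels, 64, stride=2, norm=False),
            *block(64, 128, stride=2),
            *block(128, 256, stride=2),
            *block(256, 512, stride=1),
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, image, mask):
        return self.model(torch.cat([image, mask], dim=1))

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between a binarised prediction and the ground-truth mask."""
    pred, target = pred.float().flatten(), target.float().flatten()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```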
3. Nabrdalik K, Irlik K, Meng Y, Kwiendacz H, Piaśnik J, Hendel M, Ignacy P, Kulpa J, Kegler K, Herba M, Boczek S, Hashim EB, Gao Z, Gumprecht J, Zheng Y, Lip GYH, Alam U. Artificial intelligence-based classification of cardiac autonomic neuropathy from retinal fundus images in patients with diabetes: The Silesia Diabetes Heart Study. Cardiovasc Diabetol 2024;23:296. [PMID: 39127709; PMCID: PMC11316981; DOI: 10.1186/s12933-024-02367-z]
Abstract
BACKGROUND: Cardiac autonomic neuropathy (CAN) in diabetes mellitus (DM) is independently associated with cardiovascular (CV) events and CV death. Diagnosis of this complication of DM is time-consuming and not routinely performed in clinical practice, in contrast to fundus retinal imaging, which is accessible and routinely performed. Whether artificial intelligence (AI) utilizing retinal images collected through diabetic eye screening can provide an efficient diagnostic method for CAN is unknown.

METHODS: This was a single-center, observational study in a cohort of patients with DM as a part of the Cardiovascular Disease in Patients with Diabetes: The Silesia Diabetes-Heart Project (NCT05626413). To diagnose CAN, we used standard CV autonomic reflex tests. In this analysis we implemented AI-based deep learning techniques with non-mydriatic 5-field color fundus imaging to identify patients with CAN. Two experiments were developed utilizing multiple instance learning, primarily with ResNet-18 as the backbone network. Models underwent training and validation prior to testing on an unseen image set.

RESULTS: In an analysis of 2275 retinal images from 229 patients, the ResNet-18 backbone model demonstrated robust diagnostic capabilities in the binary classification of CAN, correctly identifying 93% of CAN cases and 89% of non-CAN cases within the test set. The model achieved an area under the receiver operating characteristic curve (AUCROC) of 0.87 (95% CI 0.74-0.97). For distinguishing between definite or severe stages of CAN (dsCAN), the ResNet-18 model accurately classified 78% of dsCAN cases and 93% of cases without dsCAN, with an AUCROC of 0.94 (95% CI 0.86-1.00). An alternate backbone model, ResWide 50, showed enhanced sensitivity at 89% for dsCAN, but with a marginally lower AUCROC of 0.91 (95% CI 0.73-1.00).

CONCLUSIONS: AI-based algorithms utilizing retinal images can identify patients with CAN with high accuracy. AI analysis of fundus images to detect CAN may be implemented in routine clinical practice to identify patients at the highest CV risk.

TRIAL REGISTRATION: This is a part of the Silesia Diabetes-Heart Project (ClinicalTrials.gov identifier: NCT05626413).
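The method combines a shared ResNet-18 encoder with multiple instance learning (MIL), so that several fundus fields from one patient yield a single CAN prediction. A minimal sketch of that idea is shown below; the attention-based pooling and layer sizes are assumptions for illustration, since the abstract does not specify the aggregation used.

```python
import torch
import torch.nn as nn
from torchvision import models

class FundusMILClassifier(nn.Module):
    """MIL sketch: each patient is a 'bag' of fundus fields; a shared
    ResNet-18 encodes each image and an attention layer pools the
    per-image embeddings into one patient-level CAN / non-CAN logit."""

    def __init__(self, embed_dim=512):
        super().__init__()
        resnet = models.resnet18(weights=None)  # pretrained weights optional
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # drop the fc layer
        self.attention = nn.Sequential(nn.Linear(embed_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(embed_dim, 1)

    def forward(self, bag):
        # bag: (num_images, 3, H, W) -- all fundus fields of one patient
        feats = self.encoder(bag).flatten(1)               # (num_images, 512)
        weights = torch.softmax(self.attention(feats), 0)  # attention over instances
        patient_feat = (weights * feats).sum(dim=0)        # weighted pooling
        return self.classifier(patient_feat)               # patient-level logit

model = FundusMILClassifier()
logit = model(torch.randn(5, 3, 224, 224))  # five-field imaging of one patient
```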
Affiliation(s)
- Katarzyna Nabrdalik: Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland; Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Krzysztof Irlik: Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK; Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland; Doctoral School, Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Yanda Meng: Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK; Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Hanna Kwiendacz: Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Julia Piaśnik: Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Mirela Hendel: Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Paweł Ignacy: Doctoral School, Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Justyna Kulpa: Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Kamil Kegler: Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Mikołaj Herba: Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Sylwia Boczek: Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Effendy Bin Hashim: Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK; Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK; St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Zhuangzhi Gao: Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Janusz Gumprecht: Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Yalin Zheng: Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK; Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK; St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Gregory Y H Lip: Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK; Danish Center for Health Services Research, Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Uazman Alam: Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK; Diabetes & Endocrinology Research and Pain Research Institute, Institute of Life Course and Medical Sciences, University of Liverpool and Liverpool University Hospital NHS Foundation Trust, Liverpool, UK
4. Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024;13:2125-2149. [PMID: 38913289; PMCID: PMC11246322; DOI: 10.1007/s40123-024-00981-4]
Abstract
We conducted a systematic review of research on artificial intelligence (AI) applied to retinal fundus photographs. We highlighted the use of various AI algorithms, including deep learning (DL) models, in both ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that AI interpretation of retinal images, benchmarked against clinical data and expert physicians, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic disorders (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). The large volume of clinical and imaging data generated by this research supports the incorporation of AI and DL into automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliation(s)
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland
- Kai Jin: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Tien Y Wong: School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Center, Singapore
5. Richardson A, Kundu A, Henao R, Lee T, Scott BL, Grewal DS, Fekrat S. Multimodal Retinal Imaging Classification for Parkinson's Disease Using a Convolutional Neural Network. Transl Vis Sci Technol 2024;13:23. [PMID: 39136960; PMCID: PMC11323992; DOI: 10.1167/tvst.13.8.23]
Abstract
Purpose: Changes in retinal structure and microvasculature are connected to parallel changes in the brain. Two recent studies described machine learning algorithms trained on retinal images and quantitative data that identified Alzheimer's dementia and mild cognitive impairment with high accuracy, and prior studies have also demonstrated retinal differences in individuals with Parkinson's disease (PD). Herein, we developed a convolutional neural network (CNN) to classify multimodal retinal imaging as belonging to either a PD or a control group.

Methods: We trained a CNN to receive retinal image inputs of optical coherence tomography (OCT) ganglion cell-inner plexiform layer (GC-IPL) thickness color maps, OCT angiography 6 × 6-mm en face macular images of the superficial capillary plexus, and ultra-widefield (UWF) fundus color and autofluorescence photographs, and to classify the retinal imaging as PD or control. The model consists of a shared pretrained VGG19 feature extractor and image-specific feature transformations that converge to a single output. Model results were assessed using receiver operating characteristic (ROC) curves and bootstrapped 95% confidence intervals for area under the ROC curve (AUC) values.

Results: In total, 371 eyes of 249 control subjects and 75 eyes of 52 PD subjects were used for training, validation, and testing. Our best CNN variant achieved an AUC of 0.918. UWF color photographs were the most effective imaging input, and GC-IPL thickness maps were the least contributory.

Conclusions: Using retinal images, our pilot CNN was able to identify individuals with PD and serves as a proof of concept to spur the collection of the larger imaging datasets needed for clinical-grade algorithms.

Translational Relevance: Developing machine learning models for automated detection of Parkinson's disease from retinal imaging could lead to earlier and more widespread diagnoses.
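Model evaluation above relies on bootstrapped 95% confidence intervals for the AUC. A generic percentile-bootstrap helper of the kind typically used for this is sketched below; the resampling unit (here individual records rather than patients or eyes) and the number of resamples are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC:
    resample records with replacement and recompute the AUC each time."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# toy example with synthetic labels and scores
y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
s = np.array([0.1, 0.3, 0.7, 0.8, 0.4, 0.9, 0.2, 0.6, 0.55, 0.35])
print(bootstrap_auc_ci(y, s, n_boot=500))
```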
Affiliation(s)
- Alexander Richardson: Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; iMIND Research Group, Duke University School of Medicine, Durham, NC, USA; Department of Computer Science, Duke University, Durham, NC, USA
- Anita Kundu: Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Ricardo Henao: iMIND Research Group, Duke University School of Medicine, Durham, NC, USA; Department of Computer Science, Duke University, Durham, NC, USA
- Terry Lee: Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Burton L. Scott: iMIND Research Group, Duke University School of Medicine, Durham, NC, USA; Department of Neurology, Duke University School of Medicine, Durham, NC, USA
- Dilraj S. Grewal: Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Sharon Fekrat: Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; iMIND Research Group, Duke University School of Medicine, Durham, NC, USA; Department of Neurology, Duke University School of Medicine, Durham, NC, USA
6. Wong YL, Yu M, Chong C, Yang D, Xu D, Lee ML, Hsu W, Wong TY, Cheng C, Cheung CY. Association between deep learning measured retinal vessel calibre and incident myocardial infarction in a retrospective cohort from the UK Biobank. BMJ Open 2024;14:e079311. [PMID: 38514140; PMCID: PMC10961540; DOI: 10.1136/bmjopen-2023-079311]
Abstract
BACKGROUND: Cardiovascular disease is a leading cause of death globally. Prospective population-based studies have found that changes in retinal microvasculature are associated with the development of coronary artery disease. Recently, artificial intelligence deep learning (DL) algorithms have been developed for the fully automated assessment of retinal vessel calibres.

METHODS: In this study, we validate the association between retinal vessel calibres measured by a DL system (Singapore I Vessel Assessment) and incident myocardial infarction (MI) and assess its incremental performance in discriminating patients with and without MI when added to risk prediction models, using a large UK Biobank cohort.

RESULTS: Retinal arteriolar narrowing was significantly associated with incident MI in both the age, gender and fellow calibre-adjusted model (HR=1.67 (95% CI: 1.19 to 2.36)) and the multivariable model (HR=1.64 (95% CI: 1.16 to 2.32)) adjusted for age, gender and other cardiovascular risk factors such as blood pressure, diabetes mellitus (DM) and cholesterol status. The area under the receiver operating characteristic curve increased from 0.738 to 0.745 (p=0.018) in the age-gender-adjusted model and from 0.782 to 0.787 (p=0.010) in the multivariable model. The continuous net reclassification improvements (NRIs) were significant in the age and gender-adjusted model (NRI=21.56 (95% CI: 3.33 to 33.42)) and the multivariable model (NRI=18.35 (95% CI: 6.27 to 32.61)). In the subgroup analysis, similar associations between retinal arteriolar narrowing and incident MI were observed, particularly for men (HR=1.62 (95% CI: 1.07 to 2.46)), non-smokers (HR=1.65 (95% CI: 1.13 to 2.42)), patients without DM (HR=1.73 (95% CI: 1.19 to 2.51)) and hypertensive patients (HR=1.95 (95% CI: 1.30 to 2.93)) in the multivariable models.

CONCLUSION: Our results support DL-based retinal vessel measurements as markers of incident MI in a predominantly Caucasian population.
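One of the reported metrics, the continuous net reclassification improvement (NRI), simply asks whether adding the retinal measurement moves predicted risks in the right direction: up for people who later have an event, down for those who do not. A self-contained sketch of that calculation, using toy numbers rather than study data, is given below.

```python
import numpy as np

def continuous_nri(risk_old, risk_new, event):
    """Continuous (category-free) net reclassification improvement,
    often reported multiplied by 100 as in the abstract above."""
    risk_old, risk_new, event = map(np.asarray, (risk_old, risk_new, event))
    up = risk_new > risk_old
    down = risk_new < risk_old
    ev, ne = event == 1, event == 0
    nri_events = up[ev].mean() - down[ev].mean()        # correct upward moves for events
    nri_nonevents = down[ne].mean() - up[ne].mean()     # correct downward moves for non-events
    return 100 * (nri_events + nri_nonevents)

# toy example: adding a hypothetical retinal-calibre term to a base risk model
old = np.array([0.10, 0.20, 0.15, 0.30, 0.25, 0.05])
new = np.array([0.12, 0.18, 0.22, 0.35, 0.20, 0.04])
evt = np.array([1,    0,    1,    1,    0,    0   ])
print(continuous_nri(old, new, evt))
```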
Affiliation(s)
- Yiu Lun Wong: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Marco Yu: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Crystal Chong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Dawei Yang: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Dejiang Xu: School of Computing, National University of Singapore, Singapore
- Mong Li Lee: School of Computing, National University of Singapore, Singapore
- Wynne Hsu: School of Computing, National University of Singapore, Singapore
- Tien Y Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Chingyu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong
7. Gong AJ, Fu W, Li H, Guo N, Pan T. A Siamese ResNeXt network for predicting carotid intimal thickness of patients with T2DM from fundus images. Front Endocrinol (Lausanne) 2024;15:1364519. [PMID: 38549767; PMCID: PMC10973133; DOI: 10.3389/fendo.2024.1364519]
Abstract
Objective: To develop and validate an artificial intelligence diagnostic model based on fundus images for predicting carotid intima-media thickness (CIMT) in individuals with type 2 diabetes mellitus (T2DM).

Methods: In total, 1236 patients with T2DM who had both retinal fundus images and CIMT ultrasound records within a single hospital stay were enrolled. Data were divided into normal and thickened groups and fed to eight deep learning models, whose convolutional neural networks were all based on ResNet or ResNeXt. Their encoder and decoder modes differed, comprising a standard mode, a parallel learning mode, and a Siamese mode. In addition to the six unimodal networks, two multimodal networks based on ResNeXt under the parallel learning mode or the Siamese mode were embedded with age. The performance of the eight models was compared via the confusion matrix, precision, recall, specificity, F1 value, and ROC curve, with recall regarded as the main indicator. Grad-CAM was used to visualize the decisions made by the best-performing model, the Siamese ResNeXt network.

Results: The comparison demonstrated the following points: 1) ResNeXt showed a notable improvement over ResNet; 2) the networks that extracted features in parallel and independently exhibited performance enhancements compared with the traditional networks, with the Siamese networks in particular showing significant improvements; 3) classification performance declined when the age factor was embedded in the network. Taken together, the Siamese ResNeXt unimodal model performed best in terms of efficacy and robustness, achieving a recall of 88.0% and an AUC of 90.88% in the validation subset. Additionally, heatmaps calculated by the Grad-CAM algorithm showed concentrated, orderly mappings around the optic disc vascular area in the normal CIMT group and dispersed, irregular patterns in the thickened CIMT group.

Conclusion: We provide a Siamese ResNeXt neural network for predicting carotid intimal thickness in patients with T2DM from fundus images and confirm the correlation between fundus microvascular lesions and CIMT.
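The Siamese mode referred to above passes two images through one weight-sharing backbone before a joint classification head. A minimal PyTorch sketch of a Siamese ResNeXt of this kind follows; the paired inputs, head dimensions and choice of ResNeXt-50 are assumptions for illustration, since the paper's exact configuration is not given here.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseResNeXt(nn.Module):
    """Weight-sharing Siamese ResNeXt: two fundus images of the same patient
    pass through one shared backbone, and the concatenated embeddings are
    mapped to a normal-vs-thickened CIMT logit."""

    def __init__(self):
        super().__init__()
        backbone = models.resnext50_32x4d(weights=None)  # pretrained weights optional
        feat_dim = backbone.fc.in_features               # 2048
        backbone.fc = nn.Identity()                      # keep pooled features only
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, 1),
        )

    def forward(self, img_a, img_b):
        # both branches share exactly the same weights
        f_a = self.backbone(img_a)
        f_b = self.backbone(img_b)
        return self.head(torch.cat([f_a, f_b], dim=1))

model = SiameseResNeXt()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```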
Affiliation(s)
- AJuan Gong: Department of Endocrinology, The Second Affiliated Hospital of Anhui Medical University, Hefei, China
- Wanjin Fu: Department of Clinical Pharmacology, The Second Affiliated Hospital of Anhui Medical University, Hefei, China
- Heng Li: The Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Na Guo: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
- Tianrong Pan: Department of Endocrinology, The Second Affiliated Hospital of Anhui Medical University, Hefei, China
8. Gu C, Wang Y, Jiang Y, Xu F, Wang S, Liu R, Yuan W, Abudureyimu N, Wang Y, Lu Y, Li X, Wu T, Dong L, Chen Y, Wang B, Zhang Y, Wei WB, Qiu Q, Zheng Z, Liu D, Chen J. Application of artificial intelligence system for screening multiple fundus diseases in Chinese primary healthcare settings: a real-world, multicentre and cross-sectional study of 4795 cases. Br J Ophthalmol 2024;108:424-431. [PMID: 36878715; PMCID: PMC10894824; DOI: 10.1136/bjo-2022-322940]
Abstract
BACKGROUND/AIMS: This study evaluates the performance of the Airdoc retinal artificial intelligence system (ARAS) for detecting multiple fundus diseases in real-world scenarios in primary healthcare settings and investigates the fundus disease spectrum based on ARAS.

METHODS: This real-world, multicentre, cross-sectional study was conducted in Shanghai and Xinjiang, China. Six primary healthcare settings were included in this study. Colour fundus photographs were taken and graded by ARAS and retinal specialists. The performance of ARAS was described by its accuracy, sensitivity, specificity and positive and negative predictive values. The spectrum of fundus diseases in primary healthcare settings was also investigated.

RESULTS: A total of 4795 participants were included. The median age was 57.0 (IQR 39.0-66.0) years, and 3175 (66.2%) participants were female. The accuracy, specificity and negative predictive value of ARAS for detecting normal fundus and 14 retinal abnormalities were high, whereas the sensitivity and positive predictive value varied in detecting different abnormalities. The proportions of retinal drusen, pathological myopia and glaucomatous optic neuropathy were significantly higher in Shanghai than in Xinjiang. Moreover, the percentages of referable diabetic retinopathy, retinal vein occlusion and macular oedema in middle-aged and elderly people in Xinjiang were significantly higher than in Shanghai.

CONCLUSION: This study demonstrated the dependability of ARAS for detecting multiple retinal diseases in primary healthcare settings. Implementing the AI-assisted fundus disease screening system in primary healthcare settings might be beneficial in reducing regional disparities in medical resources. However, the ARAS algorithm must be improved to achieve better performance.

TRIAL REGISTRATION NUMBER: NCT04592068.
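The ARAS evaluation is summarised with accuracy, sensitivity, specificity and positive/negative predictive values against specialist grading. These quantities follow directly from the confusion matrix, as in the small helper below (toy labels only, not study data).

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, PPV and NPV from binary
    reference-vs-AI labels, the performance measures listed in the abstract."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall when disease is present
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# toy example: specialist grading (truth) vs AI output for one abnormality
truth = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
ai    = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
print(screening_metrics(truth, ai))
```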
Affiliation(s)
- Chufeng Gu: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yujie Wang: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yan Jiang: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Feiping Xu: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Shasha Wang: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Rui Liu: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Wen Yuan: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Nurbiyimu Abudureyimu: Department of Ophthalmology, Bachu County Traditional Chinese Medicine Hospital of Kashgar, Xinjiang, China
- Ying Wang: Department of Ophthalmology, Bachu County People's Hospital of Kashgar, Xinjiang, China
- Yulan Lu: Department of Ophthalmology, Linfen Community Health Service Center of Jing'an District, Shanghai, China
- Xiaolong Li: Department of Ophthalmology, Pengpu New Village Community Health Service Center of Jing'an District, Shanghai, China
- Tao Wu: Department of Ophthalmology, Pengpu Town Community Health Service Center of Jing'an District, Shanghai, China
- Li Dong: Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
- Yuzhong Chen: Beijing Airdoc Technology Co., Ltd, Beijing, China
- Bin Wang: Beijing Airdoc Technology Co., Ltd, Beijing, China
- Wen Bin Wei: Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
- Qinghua Qiu: Department of Ophthalmology, Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhi Zheng: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Deng Liu: Bachu County People's Hospital of Kashgar, Xinjiang, China; Shanghai No. 3 Rehabilitation Hospital, Shanghai, China
- Jili Chen: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
9. Hu W, Yii FSL, Chen R, Zhang X, Shang X, Kiburg K, Woods E, Vingrys A, Zhang L, Zhu Z, He M. A Systematic Review and Meta-Analysis of Applying Deep Learning in the Prediction of the Risk of Cardiovascular Diseases From Retinal Images. Transl Vis Sci Technol 2023;12:14. [PMID: 37440249; PMCID: PMC10353749; DOI: 10.1167/tvst.12.7.14]
Abstract
Purpose: The purpose of this study was to perform a systematic review and meta-analysis to synthesize evidence from studies using deep learning (DL) to predict cardiovascular disease (CVD) risk from retinal images.

Methods: A systematic literature search was performed in MEDLINE, Scopus, and Web of Science up to June 2022. We extracted data pertaining to predicted outcomes, model development and validation, and model performance metrics. Included studies were graded using the Quality Assessment of Diagnostic Accuracy Studies 2 tool. Model performance was pooled across eligible studies using a random-effects meta-analysis model.

Results: A total of 26 studies were included in the analysis. Forty-two CVD risk-related outcomes predicted from retinal images were identified, including 33 CVD risk factors, 4 cardiac imaging biomarkers, 2 CVD risk scores, the presence of CVD, and incident CVD. Three studies that aimed to predict the development of future CVD events reported an area under the receiver operating characteristic curve (AUROC) between 0.68 and 0.81. Models that used retinal images as input data had a pooled mean absolute error of 3.19 years (95% confidence interval [CI] = 2.95-3.43) for age prediction; a pooled AUROC of 0.96 (95% CI = 0.95-0.97) for gender classification; a pooled AUROC of 0.80 (95% CI = 0.73-0.86) for diabetes detection; and a pooled AUROC of 0.86 (95% CI = 0.81-0.92) for the detection of chronic kidney disease. We observed a high level of heterogeneity and variation in study designs.

Conclusions: Although DL models appear to have reasonably good performance when it comes to predicting CVD risk, further work is necessary to evaluate their real-world applicability and predictive accuracy.

Translational Relevance: DL-based CVD risk assessment from retinal images holds great promise for translation into clinical practice as a novel approach to CVD risk assessment, given its simple, quick, and noninvasive nature.
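Pooled estimates such as the AUROC values quoted above are commonly obtained with a DerSimonian-Laird random-effects model, which weights each study by the inverse of its within-study variance plus an estimated between-study variance. A compact sketch of that computation is shown below, using invented study values rather than the review's data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects (DerSimonian-Laird) pooling of per-study estimates,
    e.g. AUROCs with their within-study variances."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# toy example: three hypothetical study AUROCs with their variances
aucs = [0.78, 0.82, 0.86]
vars_ = [0.0004, 0.0009, 0.0006]
print(dersimonian_laird(aucs, vars_))
```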
Affiliation(s)
- Wenyi Hu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Fabian S. L. Yii: Centre for Clinical Brain Sciences, Edinburgh Medical School, University of Edinburgh, Edinburgh, UK; Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, University of Edinburgh, Edinburgh, UK
- Ruiye Chen: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Xinyu Zhang: Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xianwen Shang: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Katerina Kiburg: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Ekaterina Woods: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Algis Vingrys: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, Australia
- Lei Zhang: Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Zhuoting Zhu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Mingguang He: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
10. Al-Halafi AM. Applications of artificial intelligence-assisted retinal imaging in systemic diseases: A literature review. Saudi J Ophthalmol 2023;37:185-192. [PMID: 38074306; PMCID: PMC10701145; DOI: 10.4103/sjopt.sjopt_153_23]
Abstract
The retina is a vulnerable structure that is frequently affected by different systemic conditions. The main mechanisms of systemic retinal damage are primary insult to the neurons of the retina, alterations of the local vasculature, or both. This vulnerability makes the retina an important window that reflects the severity of preexisting systemic disorders. Current imaging techniques therefore aim to identify early retinal changes relevant to systemic anomalies, so that diagnosis can be anticipated and adequate management started. Artificial intelligence (AI) has become one of the most rapidly advancing technologies in the field of medicine, and its use continues to spread across specialties, including ophthalmology. Many studies have shown the potential of this technique to assist in the screening of retinal anomalies in the context of systemic disorders. In this review, we performed an extensive literature search to identify the most important studies supporting the effectiveness of AI and deep learning for diagnosing systemic disorders through retinal imaging, and we highlight the utility of these technologies in the field of retina-based diagnosis of systemic conditions.
Affiliation(s)
- Ali M. Al-Halafi: Department of Ophthalmology, Security Forces Hospital, Riyadh, Saudi Arabia
11.
Abstract
PURPOSE OF REVIEW: Assistive (nonautonomous) artificial intelligence (AI) models designed to support (rather than function independently of) clinicians have received increasing attention in medicine. This review aims to highlight several recent developments in these models over the past year and their ophthalmic implications.

RECENT FINDINGS: Artificial intelligence models with a diverse range of applications in ophthalmology have been reported in the literature over the past year. Many of these systems have reported high performance in detection, classification, prognostication, and/or monitoring of retinal, glaucomatous, anterior segment, and other ocular pathologies.

SUMMARY: Over the past year, developments in AI have been made that have implications affecting ophthalmic surgical training and refractive outcomes after cataract surgery, therapeutic monitoring of disease, disease classification, and prognostication. Many of these recently developed models have obtained encouraging results and have the potential to serve as powerful clinical decision-making tools pending further external validation and evaluation of their generalizability.
Affiliation(s)
- Donald C Hubbard: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Parker Cox: Spencer Fox Eccles School of Medicine, University of Utah, Salt Lake City, Utah, USA
- Travis K Redd: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
12. Mellor J, Jiang W, Fleming A, McGurnaghan SJ, Blackbourn L, Styles C, Storkey AJ, McKeigue PM, Colhoun HM. Can deep learning on retinal images augment known risk factors for cardiovascular disease prediction in diabetes? A prospective cohort study from the national screening programme in Scotland. Int J Med Inform 2023;175:105072. [PMID: 37167840; DOI: 10.1016/j.ijmedinf.2023.105072]
Abstract
AIMS: This study's objective was to evaluate whether deep learning (DL) on retinal photographs from a diabetic retinopathy screening programme improves prediction of incident cardiovascular disease (CVD).

METHODS: DL models were trained to jointly predict future CVD risk and CVD risk factors and used to output a DL score. Poisson regression models including clinical risk factors, with and without a DL score, were fitted to study cohorts with 2,072 and 38,730 incident CVD events in type 1 (T1DM) and type 2 diabetes (T2DM) respectively.

RESULTS: DL scores were independently associated with incident CVD, with adjusted standardised incidence rate ratios of 1.14 (P = 3 × 10⁻⁴; 95% CI 1.06 to 1.23) and 1.16 (P = 4 × 10⁻³³; 95% CI 1.13 to 1.18) in the T1DM and T2DM cohorts respectively. The differences in predictive performance between models with and without a DL score were statistically significant (differences in test log-likelihood of 6.7 and 51.1 natural log units), but the increments in C-statistic, from 0.820 to 0.822 and from 0.709 to 0.711 for T1DM and T2DM respectively, were small.

CONCLUSIONS: These results show that in people with diabetes, retinal photographs contain information on future CVD risk. However, for this to contribute appreciably to clinical prediction of CVD, further approaches, including exploitation of serial images, need to be evaluated.
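The modelling step described above fits Poisson regression models for incident CVD with and without a DL score and compares their fit. A minimal sketch of that comparison using statsmodels is shown below; the covariates, simulated data and person-time offset are assumptions for illustration only, not the study's analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

# toy data standing in for the cohort: clinical risk factors, a hypothetical
# retinal DL score, person-years of follow-up and incident CVD event counts
age = rng.normal(60, 10, n)
sbp = rng.normal(135, 18, n)
dl_score = rng.normal(0, 1, n)
person_years = rng.uniform(1, 10, n)
rate = np.exp(-7 + 0.04 * age + 0.01 * sbp + 0.15 * dl_score)
events = rng.poisson(rate * person_years)

def fit_poisson(columns):
    X = sm.add_constant(np.column_stack(columns))
    model = sm.GLM(events, X, family=sm.families.Poisson(),
                   offset=np.log(person_years))   # log person-time offset
    return model.fit()

base = fit_poisson([age, sbp])                # clinical factors only
full = fit_poisson([age, sbp, dl_score])      # clinical factors + DL score

print("rate ratio per SD of DL score:", np.exp(full.params[-1]))
print("gain in log-likelihood:", full.llf - base.llf)
```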
Affiliation(s)
- Joseph Mellor: The Usher Institute, University of Edinburgh, Edinburgh, UK
- Wenhua Jiang: The Usher Institute, University of Edinburgh, Edinburgh, UK
- Alan Fleming: The Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- Stuart J McGurnaghan: The Usher Institute, University of Edinburgh, Edinburgh, UK; The Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- Luke Blackbourn: The Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- Amos J Storkey: School of Informatics, University of Edinburgh, Edinburgh, UK
- Helen M Colhoun: The Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK; Department of Public Health, NHS Fife, Kirkcaldy, UK
13. Wang S, Ji Y, Bai W, Ji Y, Li J, Yao Y, Zhang Z, Jiang Q, Li K. Advances in artificial intelligence models and algorithms in the field of optometry. Front Cell Dev Biol 2023;11:1170068. [PMID: 37187617; PMCID: PMC10175695; DOI: 10.3389/fcell.2023.1170068]
Abstract
The rapid development of computer science over the past few decades has led to unprecedented progress in the field of artificial intelligence (AI). AI is now widely applied in ophthalmology, especially in image processing and data analysis, where its performance has been excellent. In recent years, AI has also been increasingly applied in optometry with remarkable results. This review summarizes the progress in applying different AI models and algorithms in optometry (for problems such as myopia, strabismus, amblyopia, keratoconus, and intraocular lenses) and discusses the limitations and challenges associated with its application in this field.
Affiliation(s)
- Suyu Wang: Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Yuke Ji: Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Wen Bai: Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Yun Ji: Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Jiajun Li: Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Yujia Yao: Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Ziran Zhang: Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Qin Jiang (corresponding author): Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Keran Li (corresponding author): Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
14. Hui HYH, Ran AR, Dai JJ, Cheung CY. Deep Reinforcement Learning-Based Retinal Imaging in Alzheimer's Disease: Potential and Perspectives. J Alzheimers Dis 2023;94:39-50. [PMID: 37212112; DOI: 10.3233/jad-230055]
Abstract
Alzheimer's disease (AD) remains a global health challenge in the 21st century due to its increasing prevalence as the major cause of dementia. State-of-the-art artificial intelligence (AI)-based tests could potentially improve population-based strategies to detect and manage AD. Current retinal imaging demonstrates immense potential as a non-invasive screening measure for AD, by studying qualitative and quantitative changes in the neuronal and vascular structures of the retina that are often associated with degenerative changes in the brain. On the other hand, the tremendous success of AI, especially deep learning, in recent years has encouraged its incorporation with retinal imaging for predicting systemic diseases. Further development in deep reinforcement learning (DRL), defined as a subfield of machine learning that combines deep learning and reinforcement learning, also prompts the question of how it can work hand in hand with retinal imaging as a viable tool for automated prediction of AD. This review aims to discuss potential applications of DRL in using retinal imaging to study AD, and their synergistic application to unlock other possibilities, such as AD detection and prediction of AD progression. Challenges and future directions, such as the use of inverse DRL in defining reward function, lack of standardization in retinal imaging, and data availability, will also be addressed to bridge gaps for its transition into clinical use.
Affiliation(s)
- Herbert Y H Hui: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- An Ran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Jia Jia Dai: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China