51
Hua R, Xiong J, Li G, Zhu Y, Ge Z, Ma Y, Fu M, Li C, Wang B, Dong L, Zhao X, Ma Z, Chen J, Gao X, He C, Wang Z, Wei W, Wang F, Gao X, Chen Y, Zeng Q, Xie W. Development and validation of a deep learning algorithm based on fundus photographs for estimating the CAIDE dementia risk score. Age Ageing 2022; 51:6936402. [PMID: 36580391] [DOI: 10.1093/ageing/afac282]
Abstract
BACKGROUND The Cardiovascular Risk Factors, Aging, and Incidence of Dementia (CAIDE) dementia risk score is a recognised tool for dementia risk stratification. However, its application is limited because it requires multidimensional information and a fasting blood draw. Consequently, an effective and non-invasive tool for screening individuals with high dementia risk in large population-based settings is urgently needed. METHODS A deep learning algorithm for estimating the CAIDE dementia risk score from fundus photographs was developed and internally validated on a medical check-up dataset that included 271,864 participants from 19 province-level administrative regions of China, and externally validated on an independent dataset of 20,690 check-up participants in Beijing. Performance in identifying individuals with high dementia risk (CAIDE dementia risk score ≥ 10 points) was evaluated by the area under the receiver operating characteristic curve (AUC) with 95% confidence intervals (CI). RESULTS The algorithm achieved an AUC of 0.944 (95% CI: 0.939-0.950) in the internal validation group and 0.926 (95% CI: 0.913-0.939) in the external validation group. In addition, the estimated CAIDE dementia risk score derived from the algorithm was significantly associated with both comprehensive cognitive function and specific cognitive domains. CONCLUSIONS This algorithm, trained on fundus photographs, identified individuals with high dementia risk well in a population setting and therefore has the potential to serve as a non-invasive and more expedient method for dementia risk stratification. It might also be adopted in dementia clinical trials as an inclusion criterion to efficiently select eligible participants.
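The reported AUCs come with 95% confidence intervals; the abstract does not say how these were obtained, but a common, simple approach is a percentile bootstrap over the evaluation set. A minimal sketch under that assumption (not the authors' code), with toy labels where 1 marks a CAIDE score ≥ 10:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUC with a percentile-bootstrap 95% CI (one common approach)."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:  # skip degenerate resamples
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Toy usage: y = 1 if estimated CAIDE score >= 10 (high dementia risk), else 0.
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
p = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.6])
print(bootstrap_auc_ci(y, p, n_boot=500))
```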
Affiliation(s)
- Rong Hua, Yidan Zhu, Yanjun Ma, Chenglong Li, Wuxiang Xie: Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing, China
- Gail Li: Departments of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA; Division of Gerontology and Geriatric Medicine, University of Washington, Seattle, WA, USA
- Zongyuan Ge, Meng Fu, Bin Wang, Xin Zhao, Chao He, Yuzhong Chen: Beijing Airdoc Technology Co., Ltd., Beijing, China
- Li Dong, Wenbin Wei: Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing, China
- Zhiqiang Ma, Zhaohui Wang: iKang Guobin Healthcare Group Co., Ltd., Beijing, China
- Jili Chen: Shibei Hospital, Jingan District, Shanghai, China
- Xinxiao Gao: Department of Ophthalmology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Fei Wang, Xiangyang Gao, Qiang Zeng: Health Management Institute, The Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing 100853, China
52
Saleem R, Yuan B, Kurugollu F, Anjum A, Liu L. Explaining deep neural networks: A survey on the global interpretation methods. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.129]
53
Cao J, You K, Zhou J, Xu M, Xu P, Wen L, Wang S, Jin K, Lou L, Wang Y, Ye J. A cascade eye diseases screening system with interpretability and expandability in ultra-wide field fundus images: A multicentre diagnostic accuracy study. EClinicalMedicine 2022; 53:101633. [PMID: 36110868] [PMCID: PMC9468501] [DOI: 10.1016/j.eclinm.2022.101633]
Abstract
BACKGROUND Clinical application of artificial intelligence is limited by a lack of interpretability and expandability in complex clinical settings. We aimed to develop an eye disease screening system with improved interpretability and expandability based on lesion-level dissection, and we tested the clinical expandability and auxiliary ability of the system. METHODS The four-hierarchical interpretable eye diseases screening system (IEDSS), based on a novel structural pattern named the lesion atlas, was developed to identify 30 eye diseases and conditions using a total of 32,026 ultra-wide field images collected from the Second Affiliated Hospital of Zhejiang University, School of Medicine (SAHZU), the First Affiliated Hospital of University of Science and Technology of China (FAHUSTC), and the Affiliated People's Hospital of Ningbo University (APHNU) in China between November 1, 2016, and February 28, 2022. The performance of IEDSS was compared with that of ophthalmologists and of classic models trained with image-level labels. We further evaluated IEDSS on two external datasets, in a real-world scenario, and on an extended dataset with new phenotypes beyond the training categories. Accuracy (ACC), F1 score, and confusion matrices were calculated to assess the performance of IEDSS. FINDINGS IEDSS reached average ACCs (aACC) of 0.9781 (95% CI 0.9739-0.9824), 0.9660 (95% CI 0.9591-0.9730), and 0.9709 (95% CI 0.9655-0.9763), and frequency-weighted average F1 scores of 0.9042 (95% CI 0.8957-0.9127), 0.8837 (95% CI 0.8714-0.8960), and 0.8874 (95% CI 0.8772-0.8972) in the SAHZU, APHNU, and FAHUSTC datasets, respectively. IEDSS reached a higher aACC (0.9781, 95% CI 0.9739-0.9824) compared with a multi-class image-level model (0.9398, 95% CI 0.9329-0.9467), a classic multi-label image-level model (0.9278, 95% CI 0.9189-0.9366), a novel multi-label image-level model (0.9241, 95% CI 0.9151-0.9331), and a lesion-level model without AdaBoost (0.9381, 95% CI 0.9299-0.9463). In the real-world scenario, the aACC of IEDSS (0.9872, 95% CI 0.9828-0.9915) was higher than that of a senior ophthalmologist (SO) (0.9413, 95% CI 0.9321-0.9504, p = 0.000) and a junior ophthalmologist (JO) (0.8846, 95% CI 0.8722-0.8971, p = 0.000). IEDSS maintained strong performance (ACC = 0.8560, 95% CI 0.8252-0.8868) compared with the JO (ACC = 0.784, 95% CI 0.7479-0.8201, p = 0.003) and the SO (ACC = 0.8500, 95% CI 0.8187-0.8813, p = 0.789) on the extended dataset. INTERPRETATION IEDSS showed excellent and stable performance in identifying common eye conditions as well as conditions beyond its training categories. The transparency and expandability of IEDSS could greatly broaden its range of clinical application and increase its practical clinical value, enhancing the efficiency and reliability of clinical practice, especially in remote areas lacking experienced specialists. FUNDING National Natural Science Foundation Regional Innovation and Development Joint Fund (U20A20386), Key Research and Development Program of Zhejiang Province (2019C03020), Clinical Medical Research Centre for Eye Diseases of Zhejiang Province (2021E50007).
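The cascade idea behind IEDSS, an upstream abnormality screen that routes images to lesion-level analysis and then to a disease label, can be written as a simple control-flow skeleton. This is purely schematic, with hypothetical callables standing in for the (more elaborate, four-hierarchy) IEDSS components:

```python
def cascade_screen(image, abnormality_detector, lesion_detector, disease_classifier,
                   threshold=0.5):
    """Schematic cascade: screen -> lesion-level dissection -> disease label."""
    p_abnormal = abnormality_detector(image)        # stage 1: any abnormality at all?
    if p_abnormal < threshold:
        return {"diagnosis": "normal", "lesions": []}
    lesions = lesion_detector(image)                # stage 2: build a lesion atlas
    diagnosis = disease_classifier(image, lesions)  # stage 3: disease from lesion pattern
    return {"diagnosis": diagnosis, "lesions": lesions}

# Toy stand-ins so the skeleton runs end to end.
result = cascade_screen(
    image=None,
    abnormality_detector=lambda img: 0.9,
    lesion_detector=lambda img: ["drusen"],
    disease_classifier=lambda img, les: "age-related macular degeneration",
)
print(result)
```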
Affiliation(s)
- Jing Cao, Jingxin Zhou, Mingyu Xu, Peifang Xu, Kai Jin, Lixia Lou, Yao Wang, Juan Ye: Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Kun You: Zhejiang Feitu Medical Imaging Co., Ltd., Hangzhou, Zhejiang, China
- Lei Wen: The First Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
- Shengzhan Wang: The Affiliated People's Hospital of Ningbo University, Ningbo, Zhejiang, China
- Corresponding author: Juan Ye, No. 1 West Lake Avenue, Hangzhou, Zhejiang Province, China, 310009
54
Fang H, Li F, Fu H, Sun X, Cao X, Lin F, Son J, Kim S, Quellec G, Matta S, Shankaranarayana SM, Chen YT, Wang CH, Shah NA, Lee CY, Hsu CC, Xie H, Lei B, Baid U, Innani S, Dang K, Shi W, Kamble R, Singhal N, Wang CW, Lo SC, Orlando JI, Bogunovic H, Zhang X, Xu Y. ADAM Challenge: Detecting Age-Related Macular Degeneration From Fundus Images. IEEE Trans Med Imaging 2022; 41:2828-2847. [PMID: 35507621] [DOI: 10.1109/tmi.2022.3172773]
Abstract
Age-related macular degeneration (AMD) is the leading cause of visual impairment among the elderly worldwide. Early detection of AMD is of great importance, as the vision loss caused by this disease is irreversible and permanent. Color fundus photography is the most cost-effective imaging modality for screening for retinal disorders. Cutting-edge deep learning-based algorithms have recently been developed to detect AMD automatically from fundus images. However, a comprehensive annotated dataset and standard evaluation benchmarks are still lacking. To address this issue, we set up the Automatic Detection challenge on Age-related Macular degeneration (ADAM), held as a satellite event of the ISBI 2020 conference. The ADAM challenge consisted of four tasks covering the main aspects of detecting and characterizing AMD from fundus images: detection of AMD, detection and segmentation of the optic disc, localization of the fovea, and detection and segmentation of lesions. As part of the ADAM challenge, we released a comprehensive dataset of 1200 fundus images with AMD diagnostic labels, pixel-wise segmentation masks for both the optic disc and AMD-related lesions (drusen, exudates, hemorrhages and scars, among others), and the coordinates of the macular fovea. A uniform evaluation framework was built to enable a fair comparison of different models on this dataset. During the ADAM challenge, 610 results were submitted for online evaluation, and 11 teams ultimately participated in the onsite challenge. This paper introduces the challenge, the dataset, and the evaluation methods, summarizes the participating methods, and analyzes their results for each task. In particular, we observed that ensembling strategies and the incorporation of clinical domain knowledge were key to improving the performance of the deep learning models.
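Two of the four ADAM tasks are segmentation problems (optic disc, lesions), for which overlap metrics such as the Dice coefficient are the usual currency. The official challenge evaluation code is not reproduced here, but a generic Dice implementation looks like this:

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Toy 4x4 masks standing in for optic-disc segmentations.
gt = np.array([[0,0,0,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]])
pr = np.array([[0,0,0,0],[0,1,1,1],[0,1,0,0],[0,0,0,0]])
print(dice_coefficient(pr, gt))  # 0.75
```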
55
Schneider L, Arsiwala-Scheppach L, Krois J, Meyer-Lueckel H, Bressem K, Niehues S, Schwendicke F. Benchmarking Deep Learning Models for Tooth Structure Segmentation. J Dent Res 2022; 101:1343-1349. [PMID: 35686357] [PMCID: PMC9516600] [DOI: 10.1177/00220345221100169]
Abstract
A wide range of deep learning (DL) architectures with varying depths are available, and developers usually choose one or a few of them for their specific task in a nonsystematic way. Benchmarking (i.e., the systematic comparison of state-of-the-art architectures on a specific task) may provide guidance in the model development process and may allow developers to make better decisions. However, comprehensive benchmarking has not yet been performed in dentistry. We aimed to benchmark a range of architecture designs for 1 specific, exemplary case: tooth structure segmentation on dental bitewing radiographs. We built 72 models for tooth structure (enamel, dentin, pulp, fillings, crowns) segmentation by combining 6 different DL network architectures (U-Net, U-Net++, Feature Pyramid Networks, LinkNet, Pyramid Scene Parsing Network, Mask Attention Network) with 12 encoders from 3 different encoder families (ResNet, VGG, DenseNet) of varying depth (e.g., VGG13, VGG16, VGG19). To each model design, 3 initialization strategies (ImageNet, CheXpert, random initialization) were applied, resulting in 216 trained models overall, which were trained for up to 200 epochs with the Adam optimizer (learning rate = 0.0001) and a batch size of 32. Our data set consisted of 1,625 human-annotated dental bitewing radiographs. We used a 5-fold cross-validation scheme and quantified model performance primarily by the F1-score. Initialization with ImageNet or CheXpert weights significantly outperformed random initialization (P < 0.05). Deeper and more complex models did not necessarily perform better than less complex alternatives. VGG-based models were more robust across model configurations, while more complex models (e.g., from the ResNet family) achieved peak performances. In conclusion, initializing models with pretrained weights may be recommended when training models for dental radiographic analysis. Less complex model architectures may be competitive alternatives if computational resources and training time are restricting factors. Models developed and found superior on nondental data sets may not show this behavior for dental domain-specific tasks.
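The benchmark grid (6 architectures × 12 encoders × 3 initializations) matches the combinations exposed by the segmentation_models_pytorch library. Whether the authors used that library is not stated here, but an equivalent grid could be assembled roughly as follows (illustrative encoder subset; CheXpert weights would have to be loaded manually, since the library ships only ImageNet-style checkpoints):

```python
import itertools
import segmentation_models_pytorch as smp

ARCHS = [smp.Unet, smp.UnetPlusPlus, smp.FPN, smp.Linknet, smp.PSPNet, smp.MAnet]
ENCODERS = ["resnet18", "resnet34", "resnet50", "resnet101",
            "vgg13", "vgg16", "vgg19",
            "densenet121", "densenet169", "densenet201"]  # illustrative subset of the 12
INITS = ["imagenet", None]  # None = random init; CheXpert weights loaded separately

def build_models(num_classes=5):  # enamel, dentin, pulp, fillings, crowns
    for arch, enc, init in itertools.product(ARCHS, ENCODERS, INITS):
        yield arch(encoder_name=enc, encoder_weights=init,
                   in_channels=1, classes=num_classes)  # assuming grayscale bitewings

model = next(build_models())  # first configuration of the grid
```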
Affiliation(s)
- L. Schneider, L. Arsiwala-Scheppach, J. Krois, F. Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin, Berlin, Germany; ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- H. Meyer-Lueckel: Department of Restorative, Preventive and Pediatric Dentistry, Zahnmedizinische Kliniken der Universität Bern, University of Bern, Bern, Switzerland
- K.K. Bressem: Charité–Universitätsmedizin Berlin, Klinik für Radiologie, Berlin, Germany; Berlin Institute of Health at Charité–Universitätsmedizin Berlin, Berlin, Germany
- S.M. Niehues: Charité–Universitätsmedizin Berlin, Klinik für Radiologie, Berlin, Germany
56
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022; 90:101034. [PMID: 34902546] [PMCID: PMC11696120] [DOI: 10.1016/j.preteyeres.2021.101034]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving performance close or even superior to that of experts, there is a critical gap between the development and the integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI in closing that gap. We identify the main aspects and challenges that need to be considered along the AI design pipeline to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects and challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder: there is a pressing need for a collaborative approach in which the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and lays out the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo: Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Eric F Thee: Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Caroline C W Klaver: Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aaron Y Lee: Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Reinier O Schlingemann: Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Adnan Tufail: Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Frank Verbraak: Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Clara I Sánchez: Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
57
Zhou Q, Guo J, Chen Z, Chen W, Deng C, Yu T, Li F, Yan X, Hu T, Wang L, Rong Y, Ding M, Wang J, Zhang X. Deep learning-based classification of the anterior chamber angle in glaucoma gonioscopy. Biomed Opt Express 2022; 13:4668-4683. [PMID: 36187252] [PMCID: PMC9484423] [DOI: 10.1364/boe.465286]
Abstract
In the proposed network (HahrNet), features were first extracted from gonioscopically obtained anterior segment photographs using a densely connected high-resolution network. The useful information was then further strengthened using a hybrid attention module to improve the classification accuracy. Between October 30, 2020, and January 30, 2021, a total of 146 participants underwent glaucoma screening, and 1,780 original images of the anterior chamber angle (ACA) were obtained with a gonioscope and slit lamp microscope. After data augmentation, 4,457 images were used for training and validation of the HahrNet, and 497 images were used to evaluate the algorithm. Experimental results demonstrate that the proposed HahrNet achieves a good performance of 96.2% accuracy, 99.0% specificity, 96.4% sensitivity, and 0.996 area under the curve (AUC) in classifying the ACA test dataset. Compared with several deep learning-based classification methods and nine human readers of different levels, the HahrNet achieves better or more competitive performance in terms of accuracy, specificity, and sensitivity. The proposed ACA classification method can provide an automatic and accurate technology for the grading of glaucoma.
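The reported accuracy, sensitivity, and specificity all derive from the test-set confusion matrix, and the AUC from the raw scores. A generic sketch with made-up data (not the HahrNet evaluation code):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical binary ACA grades (1 = narrow/closed angle, 0 = open angle).
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3])
y_pred  = (y_score >= 0.5).astype(int)  # threshold scores into hard predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc         = roc_auc_score(y_true, y_score)
print(accuracy, sensitivity, specificity, auc)
```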
Affiliation(s)
- Quan Zhou, Mingyue Ding, Xuming Zhang: Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- Jingmin Guo, Zhiqi Chen, Wei Chen, Chaohua Deng, Tian Yu, Fei Li, Xiaoqin Yan, Tian Hu, Linhao Wang, Yan Rong, Junming Wang: Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Quan Zhou and Jingmin Guo contributed equally to this work.
58
Chen HSL, Chen GA, Syu JY, Chuang LH, Su WW, Wu WC, Liu JH, Chen JR, Huang SC, Kang EYC. Early Glaucoma Detection by Using Style Transfer to Predict Retinal Nerve Fiber Layer Thickness Distribution on the Fundus Photograph. Ophthalmol Sci 2022; 2:100180. [PMID: 36245759] [PMCID: PMC9559108] [DOI: 10.1016/j.xops.2022.100180]
Abstract
Objective We aimed to develop a deep learning (DL)-based algorithm for early glaucoma detection from color fundus photographs that provides information on retinal nerve fiber layer (RNFL) defects and thickness by learning the mapping and translation relations of spectral-domain OCT (SD-OCT) thickness maps. Design Development and evaluation of an artificial intelligence detection tool. Subjects Pretraining paired data of color fundus photographs and SD-OCT images from 189 healthy participants and 371 patients with early glaucoma were used. Methods The variational autoencoder (VAE) network architecture was used for training, and the correlation between the fundus photographs and RNFL thickness distribution was determined through the deep neural network. The reference standard was defined as a vertical cup-to-disc ratio of ≥0.7, other typical changes in glaucomatous optic neuropathy, and RNFL defects. Convergence indicates that the VAE has learned a distribution that enables the production of corresponding synthetic OCT scans. Main Outcome Measures Similar to wide-field OCT scanning, the proposed model can extract the results of RNFL thickness analysis. The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were used to assess signal strength and the structural similarity of the color fundus images converted to an RNFL thickness distribution model, and the differences between the model-generated images and original images were quantified. Results We developed and validated a novel DL-based algorithm that extracts thickness information from the color space of fundus images, similarly to OCT images, and uses this information to regenerate RNFL thickness distribution images. The generated thickness map was sufficient for clinical glaucoma detection, and the generated images were similar to the ground truth (PSNR: 19.31 dB; SSIM: 0.44). The inference results were similar to the OCT-generated original images in terms of the ability to predict RNFL thickness distribution. Conclusions The proposed technique may aid clinicians in early glaucoma detection, especially when only color fundus photographs are available.
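PSNR and SSIM, used here to compare generated RNFL thickness maps with OCT-derived ground truth, are available directly in scikit-image. A minimal sketch on synthetic arrays (not the study's data or code):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))  # stands in for an OCT-derived RNFL map
generated = np.clip(ground_truth + 0.05 * rng.standard_normal((128, 128)), 0, 1)

psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
ssim = structural_similarity(ground_truth, generated, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```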
Affiliation(s)
- Henry Shen-Lih Chen, Wei-Wen Su, Wei-Chi Wu: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Guan-An Chen, Jhen-Yang Syu, Jian-Hong Liu, Jian-Ren Chen, Su-Chen Huang: Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Lan-Hsin Chuang: College of Medicine, Chang Gung University, Taoyuan, Taiwan; Department of Ophthalmology, Keelung Chang Gung Memorial Hospital, Keelung, Taiwan
- Eugene Yu-Chuan Kang: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan; Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan
59
Wang J, Zhao R, Li P, Fang Z, Li Q, Han Y, Zhou R, Zhang Y. Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses. Sensors (Basel) 2022; 22:6544. [PMID: 36081002] [PMCID: PMC9460383] [DOI: 10.3390/s22176544]
Abstract
Visual prostheses, used to assist in restoring functional vision to the visually impaired, convert captured external images into corresponding electrical stimulation patterns that are delivered by implanted microelectrodes to induce phosphenes and, eventually, visual perception. Detecting and providing useful visual information to the prosthesis wearer under limited artificial vision has been an important concern in the field of visual prostheses. Alongside the development of prosthetic device designs and stimulus encoding methods, researchers have explored the application of computer vision by simulating visual perception under prosthetic vision. Effective image processing is performed to optimize artificial visual information and improve the restoration of various important visual functions in implant recipients, allowing them to better meet their daily demands. This paper first reviews the recent clinical implantation of different types of visual prostheses and summarizes the artificial visual perception of implant recipients, with a particular focus on its irregularities, such as dropout and distorted phosphenes. It then reviews the important aspects of computer vision in the optimization of visual information processing and discusses the possibilities and shortcomings of these solutions. Finally, the development directions and key issues for improving the performance of visual prosthesis devices are summarized.
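Simulated prosthetic vision of the kind this review discusses is often rendered by sampling the scene at an electrode grid and drawing one Gaussian phosphene per active electrode, with random dropout mimicking non-functional electrodes. A toy sketch under those assumptions (all parameters are arbitrary):

```python
import numpy as np

def simulate_phosphenes(image, grid=(16, 16), dropout=0.2, sigma=2.0, seed=0):
    """Render a crude phosphene map: one Gaussian blob per active electrode."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    for cy in np.linspace(0, h - 1, grid[0]):
        for cx in np.linspace(0, w - 1, grid[1]):
            if rng.random() < dropout:   # dead electrode -> phosphene dropout
                continue
            brightness = image[int(cy), int(cx)]  # sample the scene at the electrode
            out += brightness * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                       / (2 * sigma ** 2))
    return np.clip(out, 0, 1)

img = np.random.default_rng(1).random((64, 64))  # stand-in for a camera frame
print(simulate_phosphenes(img).shape)
```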
Affiliation(s)
- Jing Wang: School of Information, Shanghai Ocean University, Shanghai 201306, China; Key Laboratory of Fishery Information, Ministry of Agriculture, Shanghai 200335, China
- Rongfeng Zhao, Peitong Li, Zhiqiang Fang, Qianqian Li, Yanling Han, Ruyan Zhou, Yun Zhang: School of Information, Shanghai Ocean University, Shanghai 201306, China
60
Sun K, He M, Xu Y, Wu Q, He Z, Li W, Liu H, Pi X. Multi-label classification of fundus images with graph convolutional network and LightGBM. Comput Biol Med 2022; 149:105909. [PMID: 35998479] [DOI: 10.1016/j.compbiomed.2022.105909]
Abstract
Early detection and treatment of retinal disorders are critical for avoiding irreversible visual impairment. Given that patients in the clinical setting may have various types of retinal illness, the development of multi-label fundus disease detection models capable of screening for multiple diseases is more in line with clinical needs. This article presents a composite model based on hybrid graph convolution for patient-level multi-label fundus disease identification. The composite model comprises a backbone module, a hybrid graph convolution module, and a classifier module. The relationships between labels were established via graph convolution, and a self-attention mechanism was then employed to design the hybrid graph convolution structure. The backbone module extracts features using EfficientNet-B4, whereas the classifier module produces multi-label outputs using LightGBM. Additionally, this work investigated the input pattern of binocular images and the influence of label correlation on the model's identification performance. The proposed model, MCGL-Net, outperformed all other state-of-the-art methods on the publicly available ODIR dataset, with an F1 of 91.60% on the test set. Ablation experiments showed that the hybrid graph convolutional structure and the composite model design improve performance with any backbone CNN: adopting hybrid graph convolution increased F1 by 2.39% in trials using EfficientNet-B4 as the backbone, and the composite model achieved an F1 5.42% higher than the single EfficientNet-B4 model.
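The composite design (a CNN backbone producing features that a LightGBM classifier consumes) can be sketched with the timm and lightgbm packages. This is a simplified stand-in, not the MCGL-Net code: the hybrid graph convolution module is omitted and the multi-label head is reduced to one-vs-rest LightGBM models on random data:

```python
import numpy as np
import torch
import timm
import lightgbm as lgb

# Backbone: EfficientNet-B4 as a feature extractor (num_classes=0 -> pooled features).
backbone = timm.create_model("efficientnet_b4", pretrained=False, num_classes=0)
backbone.eval()

with torch.no_grad():
    images = torch.randn(32, 3, 380, 380)      # stand-in fundus batch
    feats = backbone(images).numpy()           # (32, 1792) feature vectors

# Classifier: one LightGBM model per disease label (one-vs-rest stand-in
# for the paper's LightGBM multi-label head).
labels = np.random.randint(0, 2, size=(32, 8))  # 8 hypothetical disease labels
labels[0, :] = 0                                # ensure both classes are present
labels[1, :] = 1
heads = []
for k in range(labels.shape[1]):
    clf = lgb.LGBMClassifier(n_estimators=50)
    clf.fit(feats, labels[:, k])
    heads.append(clf)

probs = np.stack([h.predict_proba(feats)[:, 1] for h in heads], axis=1)
print(probs.shape)  # (32, 8) per-disease probabilities
```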
Affiliation(s)
- Kai Sun, Mengjia He, Yao Xu, Qinying Wu: Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Zichun He: Chongqing Red Cross Hospital (People's Hospital of Jiangbei District), Chongqing, China
- Wang Li: School of Pharmacy and Bioengineering, Chongqing University of Technology, Chongqing, China
- Hongying Liu, Xitian Pi: Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Chongqing Engineering Technology Research Center of Medical Electronic, Chongqing 400030, China
61
Sun K, He M, He Z, Liu H, Pi X. EfficientNet embedded with spatial attention for recognition of multi-label fundus disease from color fundus photographs. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103768]
62
Deep learning-based prediction of outcomes following noncomplicated epiretinal membrane surgery. Retina 2022; 42:1465-1471. [PMID: 35877965] [DOI: 10.1097/iae.0000000000003480]
Abstract
PURPOSE We used deep learning to predict the final central foveal thickness (CFT), changes in CFT, final best corrected visual acuity (BCVA), and BCVA changes following noncomplicated idiopathic epiretinal membrane surgery. METHODS Data of patients who underwent noncomplicated epiretinal membrane surgery at Severance Hospital from January 1, 2010, to December 31, 2018, were reviewed. Patient age, sex, hypertension and diabetes status, and preoperative optical coherence tomography scans were noted. For image analysis and model development, a pre-trained VGG16 was adopted. The mean absolute error and coefficient of determination (R²) were used to evaluate model performance. The study involved 688 eyes of 657 patients. RESULTS For final CFT, the mean absolute error was lowest in the model that considered only clinical and demographic characteristics; the highest accuracy was achieved by the model that considered all clinical and surgical information. For CFT changes, models utilizing clinical and surgical information showed the best performance. However, our best model failed to predict the final BCVA and BCVA changes. CONCLUSION A deep learning model predicted the final CFT and CFT changes in patients 1 year after epiretinal membrane surgery. CFT prediction showed the best results when demographic factors, comorbid diseases, and surgical techniques were considered.
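Adapting a pre-trained VGG16 to regress a continuous target such as final CFT amounts to swapping its 1000-way classification head for a single linear output. A torchvision sketch under that assumption (the study's exact head, preprocessing, and handling of clinical variables are not specified in the abstract):

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone with a 1-unit regression head for central foveal thickness (µm).
# weights=None keeps the sketch offline; the paper used a pre-trained network,
# e.g. weights=models.VGG16_Weights.IMAGENET1K_V1.
model = models.vgg16(weights=None)
model.classifier[6] = nn.Linear(in_features=4096, out_features=1)

criterion = nn.L1Loss()  # optimizes mean absolute error, the reported metric
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step on stand-in images and CFT targets.
x = torch.randn(4, 3, 224, 224)
y = torch.tensor([[310.0], [295.0], [350.0], [280.0]])
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```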
63
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:life12070973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing literature is surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, as well as for segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems the green channel is most commonly used. However, no conclusion can be drawn from the previous works regarding the relative importance of the different channels. Therefore, systematic experiments are conducted to analyse this: a well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
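Separating the three channels of a fundus photograph before feeding a single-channel U-Net input is a one-line slicing operation. A small illustrative sketch (not the authors' pipeline):

```python
import numpy as np

rgb = np.random.randint(0, 256, size=(512, 512, 3), dtype=np.uint8)  # stand-in fundus photo

red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Each single channel can be fed to a U-Net as an (H, W, 1) input; the green
# channel is the traditional choice in non-neural pipelines because retinal
# vessels show the highest contrast there.
green_input = green[..., np.newaxis].astype(np.float32) / 255.0
print(green_input.shape)  # (512, 512, 1)
```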
Affiliation(s)
- Sangeeta Biswas (corresponding author), Md. Iqbal Aziz Khan, Md. Tanvir Hossain: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas: CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai: Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin: Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
64
Yoo TK, Ryu IH, Kim JK, Lee IS, Kim HK. A deep learning approach for detection of shallow anterior chamber depth based on the hidden features of fundus photographs. Comput Methods Programs Biomed 2022; 219:106735. [PMID: 35305492] [DOI: 10.1016/j.cmpb.2022.106735]
Abstract
BACKGROUND AND OBJECTIVES Patients with angle-closure glaucoma (ACG) are asymptomatic until they experience a painful attack. Shallow anterior chamber depth (ACD) is considered a significant risk factor for ACG. We propose a deep learning approach to detect shallow ACD using fundus photographs and to identify the hidden features of shallow ACD. METHODS This retrospective study assigned healthy subjects to the training (n = 1188 eyes) and test (n = 594) datasets (prospective validation design). We used a deep learning approach to estimate ACD and built a classification model to identify eyes with a shallow ACD. The proposed method, comprising subtraction of the input and output images of a CycleGAN and a thresholding algorithm, was adopted to visualize the characteristic features of fundus photographs with a shallow ACD. RESULTS The deep learning model integrating fundus photographs and clinical variables achieved areas under the receiver operating characteristic curve of 0.978 (95% confidence interval [CI], 0.963-0.988) for an ACD ≤ 2.60 mm and 0.895 (95% CI, 0.868-0.919) for an ACD ≤ 2.80 mm, and outperformed the regression model using only clinical variables. However, the difference between shallow and deep ACD classes on fundus photographs was difficult to detect with the naked eye, and we were unable to identify the features of shallow ACD using Grad-CAM. The CycleGAN-based feature images showed that the areas around the macula and optic disc contributed significantly to the classification of fundus photographs with a shallow ACD. CONCLUSIONS We demonstrated the feasibility of a novel deep learning model to detect a shallow ACD as a screening tool for ACG using fundus photographs. The CycleGAN-based feature map revealed hidden characteristic features of shallow ACD that were previously undetectable by conventional techniques and ophthalmologists. This framework will facilitate the early detection of shallow ACD, helping to ensure that the risks associated with ACG are not overlooked.
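Once the CycleGAN translation is in hand, the visualization step described here reduces to an absolute difference map followed by thresholding. A schematic sketch with stand-in images (the percentile threshold is an assumption; the abstract does not give the exact thresholding algorithm):

```python
import numpy as np

def feature_map(original, translated, percentile=95):
    """Highlight pixels the CycleGAN changed most: |output - input|, thresholded."""
    diff = np.abs(translated.astype(float) - original.astype(float))
    diff = diff.mean(axis=-1) if diff.ndim == 3 else diff  # collapse RGB to one map
    thresh = np.percentile(diff, percentile)               # keep the strongest 5%
    return (diff >= thresh).astype(np.uint8)               # binary feature mask

rng = np.random.default_rng(0)
original = rng.random((256, 256, 3))    # stand-in fundus photo
translated = original.copy()
translated[100:140, 100:140] += 0.5     # pretend the GAN altered the macular area
mask = feature_map(original, translated)
print(mask.sum())
```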
Affiliation(s)
- Tae Keun Yoo: B&VIIT Eye Center, Seoul, South Korea; Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu, Jin Kuk Kim: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Hong Kyu Kim: Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
65
Dong L, He W, Zhang R, Ge Z, Wang YX, Zhou J, Xu J, Shao L, Wang Q, Yan Y, Xie Y, Fang L, Wang H, Wang Y, Zhu X, Wang J, Zhang C, Wang H, Wang Y, Chen R, Wan Q, Yang J, Zhou W, Li H, Yao X, Yang Z, Xiong J, Wang X, Huang Y, Chen Y, Wang Z, Rong C, Gao J, Zhang H, Wu S, Jonas JB, Wei WB. Artificial Intelligence for Screening of Multiple Retinal and Optic Nerve Diseases. JAMA Netw Open 2022; 5:e229960. [PMID: 35503220] [PMCID: PMC9066285] [DOI: 10.1001/jamanetworkopen.2022.9960]
Abstract
IMPORTANCE The lack of experienced ophthalmologists limits the early diagnosis of retinal diseases. Artificial intelligence can provide an efficient, real-time means of screening for retinal diseases. OBJECTIVE To develop and prospectively validate a deep learning (DL) algorithm that, based on ocular fundus images, recognizes numerous retinal diseases simultaneously in clinical practice. DESIGN, SETTING, AND PARTICIPANTS This multicenter diagnostic study at 65 public medical screening centers and hospitals in 19 Chinese provinces included individuals attending annual routine medical examinations and participants of population-based and community-based studies. EXPOSURES Based on 120 002 ocular fundus photographs, the Retinal Artificial Intelligence Diagnosis System (RAIDS) was developed to identify 10 retinal diseases. RAIDS was validated on a prospectively collected data set, and its performance was compared with that of ophthalmologists in the data sets of the population-based Beijing Eye Study and the community-based Kailuan Eye Study. MAIN OUTCOMES AND MEASURES The performance of each classifier included sensitivity, specificity, accuracy, F1 score, and Cohen κ score. RESULTS In the prospective validation data set of 208 758 images collected from 110 784 individuals (median [range] age, 42 [8-87] years; 115 443 [55.3%] female), RAIDS achieved a sensitivity of 89.8% (95% CI, 89.5%-90.1%) for detecting any of the 10 retinal diseases. RAIDS differentiated the 10 retinal diseases with accuracies ranging from 95.3% to 99.9%, without marked differences between medical screening centers and geographical regions in China. Compared with retinal specialists, RAIDS achieved a higher sensitivity for detection of any retinal abnormality (RAIDS, 91.7% [95% CI, 90.6%-92.8%]; certified ophthalmologists, 83.7% [95% CI, 82.1%-85.1%]; junior retinal specialists, 86.4% [95% CI, 84.9%-87.7%]; and senior retinal specialists, 88.5% [95% CI, 87.1%-89.8%]). RAIDS reached a superior or similar diagnostic sensitivity compared with senior retinal specialists in the detection of 7 of 10 retinal diseases (ie, referral diabetic retinopathy, referral possible glaucoma, macular hole, epiretinal macular membrane, hypertensive retinopathy, myelinated fibers, and retinitis pigmentosa), and achieved performance comparable with certified ophthalmologists in 2 diseases (ie, age-related macular degeneration and retinal vein occlusion). Compared with ophthalmologists, RAIDS needed 96% to 97% less time for image assessment. CONCLUSIONS AND RELEVANCE In this diagnostic study, the DL system accurately distinguished 10 retinal diseases in real time. This technology may help compensate for the lack of experienced ophthalmologists in underdeveloped areas.
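The per-classifier metrics listed under MAIN OUTCOMES AND MEASURES are all standard and can be computed from a confusion matrix plus scikit-learn helpers. An illustrative computation for a single hypothetical disease classifier (not RAIDS code or data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # hypothetical disease labels
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])  # hypothetical system predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "sensitivity": tp / (tp + fn),
    "specificity": tn / (tn + fp),
    "accuracy": (tp + tn) / len(y_true),
    "f1": f1_score(y_true, y_pred),
    "kappa": cohen_kappa_score(y_true, y_pred),  # chance-corrected agreement
}
print(metrics)
```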
Affiliation(s)
- Li Dong, Ruiheng Zhang, Jinqiong Zhou, Lei Shao, Qian Wang, Yanni Yan, Ying Xie, Lijian Fang, Haiwei Wang, Yenan Wang, Xiaobo Zhu, Jinyuan Wang, Chuan Zhang, Heng Wang, Yining Wang, Rongtian Chen, Jingyan Yang, Wenda Zhou, Heyan Li, Wen Bin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ying Xie: also Department of Ophthalmology, Shanxi Provincial People's Hospital, Taiyuan, China
- Lijian Fang: also Department of Ophthalmology, Beijing Liangxiang Hospital, Capital Medical University, Beijing, China
- Haiwei Wang: also Department of Ophthalmology, Fuxing Hospital, Capital Medical University, Beijing, China
- Yenan Wang: also Department of Ophthalmology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xiaobo Zhu: also Department of Ophthalmology, Dongfang Hospital, Beijing University of Chinese Medicine, Beijing, China
- Wanji He, Xuan Yao, Zhiwen Yang, Xin Wang, Yelin Huang, Yuzhong Chen: Beijing Airdoc Technology Co, Ltd, Beijing, China
- Zongyuan Ge: eResearch Centre, Monash University, Melbourne, Victoria, Australia; ECSE, Faculty of Engineering, Monash University, Melbourne, Victoria, Australia
- Ya Xing Wang, Jie Xu: Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Qianqian Wan: Department of Ophthalmology, the Second Affiliated Hospital of Anhui Medical University, Hefei, China
- Zhaohui Wang, Ce Rong, Jianxiong Gao: iKang Guobin Healthcare Group Co, Ltd, Beijing, China
- Shouling Wu: Department of Cardiology, Kailuan General Hospital, Tangshan, Hebei, China
- Jost B Jonas: Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Institute of Molecular and Clinical Ophthalmology Basel, Switzerland
66
Yun JS, Kim J, Jung SH, Cha SA, Ko SH, Ahn YB, Won HH, Sohn KA, Kim D. A deep learning model for screening type 2 diabetes from retinal photographs. Nutr Metab Cardiovasc Dis 2022; 32:1218-1226. [PMID: 35197214 PMCID: PMC9018521 DOI: 10.1016/j.numecd.2022.01.010] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 12/13/2021] [Accepted: 01/08/2022] [Indexed: 11/16/2022]
Abstract
BACKGROUND AND AIMS We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. METHODS AND RESULTS The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in terms of predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes: (1) an image-only deep learning algorithm, (2) TRFs, and (3) the combination of the algorithm and TRFs. Assessing net reclassification improvement (NRI) allowed quantification of the improvement afforded by adding the algorithm to the TRF model. When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained with the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that for the deep learning model using only fundus images was 0.731 (0.707-0.756). Upon addition of TRFs to the deep learning algorithm, discriminative performance improved to 0.844 (0.826-0.861). The addition of the algorithm to the TRF model improved risk stratification with an overall NRI of 50.8%. CONCLUSION Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
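The modeling recipe in this abstract (fit a logistic model on TRFs, add the image-derived score as an extra covariate, then compare AUCs and compute an NRI) can be sketched in a few lines. The sketch below uses synthetic data and invented effect sizes and is not the authors' code; a continuous, category-free NRI is shown because the abstract does not state the risk categories used.

```python
# Sketch only: synthetic data and invented effect sizes, not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
y = rng.binomial(1, 0.1, n)                        # type 2 diabetes status
trf = rng.normal(y[:, None] * 0.6, 1.0, (n, 3))    # stand-ins for age, BMI, etc.
img_score = rng.normal(y * 0.8, 1.0, n)            # deep learning output per image

trf_model = LogisticRegression().fit(trf, y)
X_comb = np.column_stack([trf, img_score])
comb_model = LogisticRegression().fit(X_comb, y)

p_trf = trf_model.predict_proba(trf)[:, 1]
p_comb = comb_model.predict_proba(X_comb)[:, 1]
print("AUC, TRFs only:         ", roc_auc_score(y, p_trf))
print("AUC, TRFs + image score:", roc_auc_score(y, p_comb))

# Continuous NRI: net proportion of events whose risk moves up,
# plus net proportion of non-events whose risk moves down.
up = p_comb > p_trf
nri = (up[y == 1].mean() - (~up)[y == 1].mean()) \
    + ((~up)[y == 0].mean() - up[y == 0].mean())
print("Continuous NRI:", nri)
```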
Affiliation(s)
- Jae-Seung Yun
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Jaesik Kim
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
| | - Sang-Hyuk Jung
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA; Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
| | - Seon-Ah Cha
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Seung-Hyun Ko
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Yu-Bae Ahn
- Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Hong-Hee Won
- Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
| | - Kyung-Ah Sohn
- Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Department of Artificial Intelligence, Ajou University, Suwon, Republic of Korea.
| | - Dokyoon Kim
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA.
| |
67
Wang TY, Chen YH, Chen JT, Liu JT, Wu PY, Chang SY, Lee YW, Su KC, Chen CL. Diabetic Macular Edema Detection Using End-to-End Deep Fusion Model and Anatomical Landmark Visualization on an Edge Computing Device. Front Med (Lausanne) 2022; 9:851644. [PMID: 35445051 PMCID: PMC9014123 DOI: 10.3389/fmed.2022.851644] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 03/14/2022] [Indexed: 11/23/2022] Open
Abstract
Purpose Diabetic macular edema (DME) is a common cause of vision impairment and blindness in patients with diabetes. However, vision loss can be prevented by regular eye examinations during primary care. This study aimed to design an artificial intelligence (AI) system to facilitate ophthalmology referrals by physicians. Methods We developed an end-to-end deep fusion model for DME classification and hard exudate (HE) detection. Based on the architecture of the fusion model, we also applied a dual model comprising an independent classifier and object detector to perform these two tasks separately. We used 35,001 annotated fundus images from three hospitals in Taiwan between 2007 and 2018 to create a private dataset. The private dataset, Messidor-1, and Messidor-2 were used to assess the performance of the fusion model for DME classification and HE detection. A second object detector was trained to identify anatomical landmarks (optic disc and macula). We integrated the fusion model and the anatomical landmark detector, and evaluated their performance on an edge device with limited computing resources. Results For DME classification on our private testing dataset, Messidor-1, and Messidor-2, the area under the receiver operating characteristic curve (AUC) for the fusion model was 98.1, 95.2, and 95.8%, the sensitivities were 96.4, 88.7, and 87.4%, the specificities were 90.1, 90.2, and 90.2%, and the accuracies were 90.8, 90.0, and 89.9%, respectively. In addition, the AUC was not significantly different between the fusion and dual models for the three datasets (p = 0.743, 0.942, and 0.114, respectively). For HE detection, the fusion model achieved a sensitivity of 79.5%, a specificity of 87.7%, and an accuracy of 86.3% on our private testing dataset. The sensitivity of the fusion model was higher than that of the dual model (p = 0.048). For optic disc and macula detection, the second object detector achieved accuracies of 98.4% (optic disc) and 99.3% (macula). The fusion model and the anatomical landmark detector can be deployed on a portable edge device. Conclusion This portable AI system exhibited excellent performance for the classification of DME and the visualization of HE and anatomical locations. It facilitates interpretability and can serve as a clinical reference for physicians. Clinically, this system could be applied to diabetic eye screening to improve the interpretation of fundus imaging in patients with DME.
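The closing claim, that the models can run on a portable edge device, usually involves freezing the trained network into a deployable artifact. A minimal sketch of one common route follows (TorchScript export; the backbone and input size here are placeholders, not the paper's fusion model).

```python
# Sketch of one common edge-deployment route (TorchScript); the backbone and
# input size are placeholders, not the paper's fusion model.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(num_classes=2)  # DME vs. no DME
model.eval()

example = torch.randn(1, 3, 224, 224)        # one preprocessed fundus image
scripted = torch.jit.trace(model, example)   # freeze the graph for deployment
scripted.save("dme_classifier_edge.pt")

# On the device, the module reloads without the original Python class:
loaded = torch.jit.load("dme_classifier_edge.pt")
with torch.no_grad():
    probs = torch.softmax(loaded(example), dim=1)
print(probs)
```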
Affiliation(s)
- Ting-Yuan Wang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Yi-Hao Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Jiann-Torng Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Jung-Tzu Liu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Po-Yi Wu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Sung-Yen Chang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Ya-Wen Lee
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Kuo-Chen Su
- Department of Optometry, Chung Shan Medical University, Taichung, Taiwan
| | - Ching-Long Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| |
68
Yang D, Li M, Li W, Wang Y, Niu L, Shen Y, Zhang X, Fu B, Zhou X. Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients. Front Med (Lausanne) 2022; 9:834281. [PMID: 35433763 PMCID: PMC9007166 DOI: 10.3389/fmed.2022.834281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 03/04/2022] [Indexed: 11/21/2022] Open
Abstract
Summary Ultrawide field fundus images could be applied in deep learning models to predict the refractive error of myopic patients. The predicted error was related to older age and greater spherical power. Purpose To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods UWF fundus images were collected from the left eyes of 987 myopia patients of the Eye and ENT Hospital, Fudan University between November 2015 and January 2019. The fundus images were all captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images to predict refractive error. A further 133 UWF fundus images collected after January 2021 served as an external validation data set. The predicted refractive error was compared with the "true value" measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. The weighted heat map was generated by averaging the predicted weight of each pixel. Results The ResNet-50, Inception-v3, and Inception-ResNet-v2 models achieved R2 values of 0.9562, 0.9555, and 0.9563 and MAEs of 1.72 (95% CI: 1.62-1.82), 1.75 (95% CI: 1.65-1.86), and 1.76 (95% CI: 1.66-1.86), respectively. In the three models, 29.95%, 31.47%, and 29.44% of the test set were within a predictive error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within a predictive error of 2.00 D. The predicted MAE was related to older age (P < 0.01) and greater spherical power (P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions It was feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.
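As a rough illustration of the regression setup described here, the following sketch adapts ResNet-50 to output a single refractive-error value and scores it with MAE and R2. The architecture hook and training details are assumptions, not the authors' released code.

```python
# Sketch of a CNN regression setup; architecture and training details are
# assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torchvision
from sklearn.metrics import mean_absolute_error, r2_score

model = torchvision.models.resnet50(num_classes=1)   # single regression output
criterion = nn.L1Loss()                              # optimizes MAE directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)    # stand-in batch of resized UWF images
targets = torch.randn(8, 1) * 4 - 5     # stand-in refractive errors (dioptres)

pred = model(images)                    # one illustrative training step
loss = criterion(pred, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()

with torch.no_grad():
    y_hat = model(images).squeeze(1).numpy()
y_true = targets.squeeze(1).numpy()
print("MAE:", mean_absolute_error(y_true, y_hat))
print("R2:", r2_score(y_true, y_hat))
```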
Affiliation(s)
- Danjuan Yang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Meiyan Li
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Weizhen Li
- School of Data Science, Fudan University, Shanghai, China
| | - Yunzhe Wang
- Shanghai Medical College, Fudan University, Shanghai, China
| | - Lingling Niu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Yang Shen
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Xiaoyu Zhang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Bo Fu
- School of Data Science, Fudan University, Shanghai, China
| | - Xingtao Zhou
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- *Correspondence: Xingtao Zhou
| |
69
Li Y, Zhu M, Sun G, Chen J, Zhu X, Yang J. Weakly supervised training for eye fundus lesion segmentation in patients with diabetic retinopathy. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:5293-5311. [PMID: 35430865 DOI: 10.3934/mbe.2022248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
OBJECTIVE Diabetic retinopathy is the leading cause of vision loss in working-age adults. Early screening and diagnosis can help to facilitate subsequent treatment and prevent vision loss. Deep learning has been applied in various fields of medical identification. However, current deep learning-based lesion segmentation techniques rely on a large amount of pixel-level labeled ground truth data, which limits their performance and application. In this work, we present a weakly supervised deep learning framework for eye fundus lesion segmentation in patients with diabetic retinopathy. METHODS First, an efficient segmentation algorithm based on grayscale and morphological features is proposed for rapid coarse segmentation of lesions. Then, a deep learning model named Residual-Attention Unet (RAUNet) is proposed for eye fundus lesion segmentation. Finally, fundus images with labeled lesions and unlabeled images with coarse segmentation results are jointly used to train RAUNet, broadening the diversity of lesion samples and increasing the robustness of the segmentation model. RESULTS A dataset containing 582 fundus images with labels verified by doctors, covering hemorrhage (HE), microaneurysm (MA), hard exudate (EX), and soft exudate (SE), and 903 images without labels was used to evaluate the model. In the ablation test, the proposed RAUNet achieved the highest intersection over union (IOU) on the labeled dataset, and the proposed attention and residual modules both improved the IOU over the UNet benchmark. Using both the doctor-labeled images and the proposed coarse segmentation method, the weakly supervised framework based on the RAUNet architecture improved the mean segmentation accuracy on the lesions by over 7%. SIGNIFICANCE This study demonstrates that combining unlabeled medical images with coarse segmentation results can effectively improve the robustness of the lesion segmentation model, and it proposes a practical framework for improving the performance of medical image segmentation given limited labeled data samples.
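The joint-training idea (doctor-labeled masks plus coarse machine-generated masks) can be expressed as a weighted two-term loss. A minimal sketch follows; the U-Net implementation (the segmentation_models_pytorch package), the five-class label map (background plus HE/MA/EX/SE), and the 0.3 weight on the coarse term are all assumptions for illustration, not details from the paper.

```python
# Sketch of the weighted two-term loss; the U-Net package, the 5-class label
# map (background + HE/MA/EX/SE) and the 0.3 weight are assumptions.
import torch
import torch.nn.functional as F
import segmentation_models_pytorch as smp  # assumed third-party dependency

model = smp.Unet(encoder_name="resnet34", encoder_weights=None, classes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
COARSE_WEIGHT = 0.3  # assumed: down-weight the noisy coarse masks

def training_step(x_labeled, m_labeled, x_coarse, m_coarse):
    loss_sup = F.cross_entropy(model(x_labeled), m_labeled)   # doctor-labeled term
    loss_weak = F.cross_entropy(model(x_coarse), m_coarse)    # coarse-label term
    loss = loss_sup + COARSE_WEIGHT * loss_weak
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.randn(2, 3, 256, 256)
m_fine = torch.randint(0, 5, (2, 256, 256))
m_rough = torch.randint(0, 5, (2, 256, 256))
print(training_step(x, m_fine, x, m_rough))
```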
Affiliation(s)
- Yu Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Meilong Zhu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Guangmin Sun
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Jiayang Chen
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- School of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Xiaorong Zhu
- Beijing Tongren Hospital, Beijing 100730, China
- Beijing Institute of Diabetes Research, Beijing 100730, China
| | - Jinkui Yang
- Beijing Tongren Hospital, Beijing 100730, China
- Beijing Institute of Diabetes Research, Beijing 100730, China
| |
70
Woo JH, Kim EC, Kim SM. The Current Status of Breakthrough Devices Designation in the United States and Innovative Medical Devices Designation in Korea for Digital Health Software. Expert Rev Med Devices 2022; 19:213-228. [PMID: 35255755 DOI: 10.1080/17434440.2022.2051479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
INTRODUCTION Artificial intelligence (AI) is increasingly utilized in the medical device industry, as it can address unmet demands at clinical sites and provide more patient treatment options. This study analyzes the FDA's Breakthrough Devices Program and the MFDS's Innovative Medical Device Program, which support regulatory science for innovative medical devices. The aim is to enable prediction of current development trends in Software as a Medical Device (SaMD) and Digital Therapeutics (DTx), which combine AI with technologies expected to enter clinical use soon. AREAS COVERED A systematic search was conducted on the broad topics of SaMD and DTx within the FDA and MFDS programs. PubMed and the agencies' official websites were reviewed in parallel to investigate the regulators' databases and official press releases and to provide detailed descriptions for researchers. EXPERT OPINION Efforts by the relevant stakeholders are needed to extend AI to diagnostic, preventive, and treatment technologies for diseases that are difficult to diagnose early or that remain clinical challenges. It is important to prepare regulatory policies suited to the rapid pace of technological development and to create an environment in which regulatory science can be put into practice by developers.
Affiliation(s)
- Jae Hyun Woo
- Research Institute for Commercialization of Biomedical Convergence Technology, Seoul, Republic of Korea.,Medical Device Industry Program in Graduate School, Dongguk University, Seoul, Republic of Korea.,National Institute of Medical Device Safety Information, Seoul, Republic of Korea.,Department of Medical Biotechnology, Dongguk University-Seoul, Seoul, Korea
| | - Eun Cheol Kim
- Research Institute for Commercialization of Biomedical Convergence Technology, Seoul, Republic of Korea.,Medical Device Industry Program in Graduate School, Dongguk University, Seoul, Republic of Korea.,National Institute of Medical Device Safety Information, Seoul, Republic of Korea.,Department of Medical Biotechnology, Dongguk University-Seoul, Seoul, Korea
| | - Sung Min Kim
- Research Institute for Commercialization of Biomedical Convergence Technology, Seoul, Republic of Korea.,Medical Device Industry Program in Graduate School, Dongguk University, Seoul, Republic of Korea.,National Institute of Medical Device Safety Information, Seoul, Republic of Korea.,Department of Medical Biotechnology, Dongguk University-Seoul, Seoul, Korea
| |
71
Matta S, Lamard M, Conze PH, Le Guilcher A, Ricquebourg V, Benyoussef AA, Massin P, Rottier JB, Cochener B, Quellec G. Automatic Screening for Ocular Anomalies Using Fundus Photographs. Optom Vis Sci 2022; 99:281-291. [PMID: 34897234 DOI: 10.1097/opx.0000000000001845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
SIGNIFICANCE Screening for ocular anomalies using fundus photography is key to preventing vision impairment and blindness. With the growing and aging population, automated algorithms that can triage fundus photographs and provide instant referral decisions are relevant to scaling up screening and addressing the shortage of ophthalmic expertise. PURPOSE This study aimed to develop a deep learning algorithm that detects any ocular anomaly in fundus photographs and to evaluate this algorithm for "normal versus anomalous" eye examination classification in the diabetic and general populations. METHODS The deep learning algorithm was developed and evaluated in two populations: the diabetic and general populations. Our patient cohorts consist of 37,129 diabetic patients from the OPHDIAT diabetic retinopathy screening network in Paris, France, and 7356 general patients from the OphtaMaine private screening network in Le Mans, France. Each data set was divided into a development subset and a test subset of more than 4000 examinations each. For ophthalmologist/algorithm comparison, a subset of 2014 examinations from the OphtaMaine test subset was labeled by a second ophthalmologist. First, the algorithm was trained on the OPHDIAT development subset. Then, it was fine-tuned on the OphtaMaine development subset. RESULTS On the OPHDIAT test subset, the area under the receiver operating characteristic curve for normal versus anomalous classification was 0.9592. On the OphtaMaine test subset, the area under the receiver operating characteristic curve was 0.8347 before fine-tuning and 0.9108 after fine-tuning. On the ophthalmologist/algorithm comparison subset, the second ophthalmologist achieved a specificity of 0.8648 and a sensitivity of 0.6682. At the same specificity, the fine-tuned algorithm achieved a sensitivity of 0.8248. CONCLUSIONS The proposed algorithm compares favorably with human performance for normal versus anomalous eye examination classification using fundus photography. Artificial intelligence, which previously targeted a few retinal pathologies, can be used to screen for ocular anomalies comprehensively.
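The train-then-fine-tune procedure described here is a standard transfer-learning pattern: train fully on the large source dataset, then continue training all layers on the smaller target dataset at a reduced learning rate. A schematic sketch follows (architecture and learning rates are assumptions).

```python
# Schematic of the two-stage training; architecture and learning rates are
# assumptions, not the paper's configuration.
import torch
import torchvision

model = torchvision.models.resnet34(num_classes=2)  # normal vs. anomalous

# Stage 1: train on the large source dataset (OPHDIAT-like).
stage1_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# ... full training loop over the first development subset ...

# Stage 2: fine-tune all layers on the smaller target dataset
# (OphtaMaine-like) at a reduced learning rate, so the pretrained
# features shift gently toward the new population.
stage2_opt = torch.optim.Adam(model.parameters(), lr=1e-5)
for images, labels in []:  # placeholder for the target-domain data loader
    loss = torch.nn.functional.cross_entropy(model(images), labels)
    stage2_opt.zero_grad()
    loss.backward()
    stage2_opt.step()
```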
Affiliation(s)
| | | | | | | | | | | | - Pascale Massin
- Ophtalmology Department, Lariboisière Hospital, APHP, Paris, France
| | | | | | | |
72
Abitbol E, Miere A, Excoffier JB, Mehanna CJ, Amoroso F, Kerr S, Ortala M, Souied EH. Deep learning-based classification of retinal vascular diseases using ultra-widefield colour fundus photographs. BMJ Open Ophthalmol 2022; 7:e000924. [PMID: 35141420 PMCID: PMC8819815 DOI: 10.1136/bmjophth-2021-000924] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 01/18/2022] [Indexed: 01/01/2023] Open
Abstract
Objective To assess the ability of a deep learning model to distinguish between diabetic retinopathy (DR), sickle cell retinopathy (SCR), retinal vein occlusions (RVOs) and healthy eyes using ultra-widefield colour fundus photography (UWF-CFP). Methods and Analysis In this retrospective study, UWF-CFP images of patients with retinal vascular disease (DR, RVO, and SCR) and healthy controls were included. The images were used to train a multilayer deep convolutional neural network to differentiate on UWF-CFP between the different vascular diseases and healthy controls. A total of 224 UWF-CFP images were included, of which 169 images were of retinal vascular diseases and 55 were of healthy controls. A cross-validation technique was used to ensure that every image from the dataset was tested once. Established augmentation techniques were applied to enhance performance, along with an Adam optimiser for training. Integrated gradients were used for visualisation. Results The best performance of the model was obtained using 10 epochs, with an overall accuracy of 88.4%. For DR, the area under the receiver operating characteristic (ROC) curve (AUC) was 90.5% and the accuracy was 85.2%. For RVO, the AUC was 91.2% and the accuracy 88.4%. For SCR, the AUC was 96.7% and the accuracy 93.8%. For healthy controls, the AUC was 88.5% and the accuracy reached 86.2%. Conclusion Deep learning algorithms can classify several retinal vascular diseases on UWF-CFP with good accuracy. This technology may be a useful tool for telemedicine and areas with a shortage of ophthalmic care.
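Integrated gradients, the visualisation method named above, attributes a prediction back to input pixels by integrating gradients along a path from a baseline image. A small sketch using the Captum library (backbone, class indexing, and image size are placeholders):

```python
# Sketch using the Captum library; backbone, class index and image size
# are placeholders.
import torch
import torchvision
from captum.attr import IntegratedGradients

model = torchvision.models.resnet18(num_classes=4).eval()  # DR/RVO/SCR/healthy
ig = IntegratedGradients(model)

image = torch.randn(1, 3, 224, 224)            # one preprocessed UWF-CFP image
attributions = ig.attribute(image, target=0, n_steps=50)  # target 0 = DR (assumed)
heatmap = attributions.squeeze(0).abs().sum(dim=0)        # collapse RGB channels
print(heatmap.shape)  # (224, 224) saliency map to overlay on the photograph
```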
Affiliation(s)
- Elie Abitbol
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | - Alexandra Miere
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | | | - Carl-Joe Mehanna
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | - Francesca Amoroso
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | | | | | - Eric H Souied
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| |
73
End-to-end diabetic retinopathy grading based on fundus fluorescein angiography images using deep learning. Graefes Arch Clin Exp Ophthalmol 2022; 260:1663-1673. [DOI: 10.1007/s00417-021-05503-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 10/11/2021] [Accepted: 11/14/2021] [Indexed: 12/14/2022] Open
74
Zhang RH, Liu YM, Dong L, Li HY, Li YF, Zhou WD, Wu HT, Wang YX, Wei WB. Prevalence, Years Lived With Disability, and Time Trends for 16 Causes of Blindness and Vision Impairment: Findings Highlight Retinopathy of Prematurity. Front Pediatr 2022; 10:735335. [PMID: 35359888 PMCID: PMC8962664 DOI: 10.3389/fped.2022.735335] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Accepted: 01/25/2022] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Cause-specific prevalence data on vision loss and blindness are fundamental for making public health policies and essential for prioritizing scientific advances and industry research. METHODS Cause-specific vision loss data from the Global Health Data Exchange were used. The burden of vision loss was measured by prevalence and years lived with disability (YLDs). FINDINGS In 2019, uncorrected refractive error and cataract were the most common causes of vision loss and blindness globally. Women have higher rates of cataract, age-related macular degeneration (AMD), and diabetic retinopathy (DR) than men. In the past 30 years, the prevalence of moderate/severe vision loss and blindness due to neonatal disorders has increased by 13.73% and 33.53%, respectively. Retinopathy of prematurity (ROP) is the major cause of neonatal disorder-related vision loss. In 2019, ROP caused 101.6 thousand [95% uncertainty interval (UI) 77.5-128.2] cases of vision impairment, including 49.1 thousand (95% UI 28.1-75.1) moderate vision loss, 27.5 thousand (95% UI 19.3-36.6) severe vision loss, and 25.0 thousand (95% UI 14.6-35.8) blindness. The prevalence of new-onset ROP in Africa and East Asia was significantly higher than in other regions. Variation in preterm birth prevalence explains 49.8% of the geographic variation in the ROP-related vision loss burden among 204 countries and territories. After adjusting for preterm prevalence, government health spending as a share of total health spending (%), rather than total health spending per person, was associated with a reduced burden of ROP-related vision loss in 2019 (-0.19 YLDs per 10% increment). By 2050, the prevalence of moderate vision loss, severe vision loss, and blindness due to ROP is expected to reach 43.6 (95% UI 35.1-52.0), 23.2 (95% UI 19.4-27.1), and 31.9 (95% UI 29.7-34.1) per 100,000 population, respectively. CONCLUSION The global burden of vision loss and blindness highlights the prevalence of ROP, a major and avoidable cause of childhood vision loss. Advanced screening techniques and treatments have been shown to be effective in preventing ROP-related vision loss and are urgently needed in regions with high ROP-related blindness rates, including Africa and East Asia.
Affiliation(s)
- Rui-Heng Zhang
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yue-Ming Liu
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - He-Yan Li
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yi-Fan Li
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wen-Da Zhou
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Hao-Tian Wu
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Ya-Xing Wang
- Beijing Institute of Ophthalmology and Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wen-Bin Wei
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
75
Nancy W, Celine Kavida A. Optimized Ensemble Machine Learning-Based Diabetic Retinopathy Grading Using Multiple Region of Interest Analysis and Bayesian Approach. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2022. [DOI: 10.1166/jmihi.2022.3923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Diabetic Retinopathy (DR) is a critical abnormality of the retina mainly caused by diabetes. Early diagnosis of DR is essential to avoid painless blindness. Conventional DR diagnosis is manual and requires skilled ophthalmologists, and their analyses are subject to inconsistency and record-maintenance issues. Hence, there is a need for other DR diagnosis methods. In this paper, we propose an AdaBoost-based ensemble classification approach to classify DR grades. The major objective of the proposed approach is to enhance DR classification performance by using optimized features and ensemble machine learning techniques. The proposed method classifies different grades of DR using Meyer wavelet and retinal vessel-based features extracted from multiple regions of interest of the retina. To improve predictive accuracy, a Bayesian algorithm was used to optimize the hyperparameters of the proposed ensemble classifier. The proposed DR grading model was constructed and evaluated using the MESSIDOR fundus image dataset. In the evaluation experiment, the classification outcome of the proposed approach was evaluated by confusion matrix and receiver operating characteristic (ROC) based metrics. The evaluation experiments show that the proposed approach attained 99.2% precision, 98.2% recall, 99% accuracy, and 0.99 AUC. The experimental findings also indicate that the proposed approach's classification outcome is significantly better than that of state-of-the-art DR classification methods.
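The core recipe (an AdaBoost ensemble whose hyperparameters are tuned by Bayesian optimization) can be sketched with scikit-learn and scikit-optimize. Synthetic features stand in for the Meyer wavelet and vessel features, and the search space is an assumption.

```python
# Sketch with scikit-learn + scikit-optimize; synthetic features stand in for
# the wavelet/vessel features and the search space is an assumption.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from skopt import BayesSearchCV  # scikit-optimize

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))     # stand-in feature matrix (600 eyes x 20 features)
y = rng.integers(0, 4, size=600)   # stand-in labels for four DR grades

search = BayesSearchCV(
    AdaBoostClassifier(),
    {"n_estimators": (50, 500),                      # integer range
     "learning_rate": (1e-3, 1.0, "log-uniform")},   # continuous, log scale
    n_iter=25,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```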
Affiliation(s)
- W. Nancy
- Department of Electronics and Communication Engineering, Jeppiaar Institute of Technology, Chennai 631604, India
| | - A. Celine Kavida
- Department of Physics, Vel Tech Multi Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Chennai 600062, India
| |
76
AIM in Endocrinology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
77
Ma Y, Xiong J, Zhu Y, Ge Z, Hua R, Fu M, Li C, Wang B, Dong L, Zhao X, Chen J, Rong C, He C, Chen Y, Wang Z, Wei W, Xie W, Wu Y. Deep learning algorithm using fundus photographs for 10-year risk assessment of ischemic cardiovascular diseases in China. Sci Bull (Beijing) 2022; 67:17-20. [PMID: 36545953 DOI: 10.1016/j.scib.2021.08.016] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 08/13/2021] [Accepted: 08/23/2021] [Indexed: 01/06/2023]
Affiliation(s)
- Yanjun Ma
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Jianhao Xiong
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Yidan Zhu
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Zongyuan Ge
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Rong Hua
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Meng Fu
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Chenglong Li
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Bin Wang
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing 100005, China
| | - Xin Zhao
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Jili Chen
- Shibei Hospital, Shanghai 200435, China
| | - Ce Rong
- iKang Guobin Healthcare Group Co., Ltd., Beijing 100022, China
| | - Chao He
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Zhaohui Wang
- iKang Guobin Healthcare Group Co., Ltd., Beijing 100022, China
| | - Wenbin Wei
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing 100005, China
| | - Wuxiang Xie
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China.
| | - Yangfeng Wu
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China.
| |
78
Shah PM, Ullah F, Shah D, Gani A, Maple C, Wang Y, Abrar M, Islam SU. Deep GRU-CNN Model for COVID-19 Detection From Chest X-Rays Data. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2022; 10:35094-35105. [PMID: 35582498 PMCID: PMC9088790 DOI: 10.1109/access.2021.3077592] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2021] [Accepted: 04/20/2021] [Indexed: 05/03/2023]
Abstract
In the current era, data is growing exponentially due to advancements in smart devices. Data scientists apply a variety of learning-based techniques to identify underlying patterns in the medical data to address various health-related issues. In this context, automated disease detection has now become a central concern in medical science. Such approaches can reduce the mortality rate through accurate and timely diagnosis. COVID-19 is a modern virus that has spread all over the world and is affecting millions of people. Many countries are facing a shortage of testing kits, vaccines, and other resources due to significant and rapid growth in cases. In order to accelerate the testing process, scientists around the world have sought to create novel methods for the detection of the virus. In this paper, we propose a hybrid deep learning model based on a convolutional neural network (CNN) and gated recurrent unit (GRU) to detect the viral disease from chest X-rays (CXRs). In the proposed model, a CNN is used to extract features, and a GRU is used as a classifier. The model has been trained on 424 CXR images with 3 classes (COVID-19, Pneumonia, and Normal). The proposed model achieves encouraging results of 0.96, 0.96, and 0.95 in terms of precision, recall, and f1-score, respectively. These findings indicate how deep learning can significantly contribute to the early detection of COVID-19 in patients through the analysis of X-ray scans. Such indications can pave the way to mitigate the impact of the disease. We believe that this model can be an effective tool for medical practitioners for early diagnosis.
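A CNN-feature-extractor-plus-GRU-classifier hybrid of the kind described can be sketched as below; layer sizes and the way the CNN feature grid is unrolled into a sequence are illustrative choices, not the paper's exact configuration.

```python
# Sketch of a CNN + GRU hybrid; layer sizes and the unrolling of the CNN
# feature grid into a sequence are illustrative choices.
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, n_classes=3):  # COVID-19 / pneumonia / normal
        super().__init__()
        self.features = nn.Sequential(  # CNN stage: extract local patterns
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):               # x: (B, 1, H, W) grayscale chest X-ray
        f = self.features(x)            # (B, 64, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 2, 3, 1).reshape(b, h * w, c)  # grid -> sequence
        _, hidden = self.gru(seq)       # final hidden state summarizes the scan
        return self.head(hidden.squeeze(0))

logits = CNNGRU()(torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 3])
```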
Affiliation(s)
- Pir Masoom Shah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
- School of Computer Science, Wuhan University, Wuhan 430072, China
| | - Faizan Ullah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
| | - Dilawar Shah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
| | - Abdullah Gani
- Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Faculty of Computing and Informatics, University Malaysia Sabah, Labuan 88400, Malaysia
| | - Carsten Maple
- Secure Cyber Systems Research Group, WMG, University of Warwick, Coventry CV4 7AL, U.K.
- Alan Turing Institute, London NW1 2DB, U.K.
| | - Yulin Wang
- School of Computer Science, Wuhan University, Wuhan 430072, China
| | - Mohammad Abrar
- Department of Computer Science, Mohi-ud-Din Islamic University, Nerian Sharif 12080, Pakistan
| | - Saif Ul Islam
- Department of Computer Science, Institute of Space Technology, Islamabad 44000, Pakistan
| |
79
Choi KJ, Choi JE, Roh HC, Eun JS, Kim JM, Shin YK, Kang MC, Chung JK, Lee C, Lee D, Kang SW, Cho BH, Kim SJ. Deep learning models for screening of high myopia using optical coherence tomography. Sci Rep 2021; 11:21663. [PMID: 34737335 PMCID: PMC8568935 DOI: 10.1038/s41598-021-00622-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 10/13/2021] [Indexed: 12/02/2022] Open
Abstract
This study aimed to validate and evaluate deep learning (DL) models for screening of high myopia using spectral-domain optical coherence tomography (OCT). This retrospective cross-sectional study included 690 eyes in 492 patients with OCT images and axial length measurement. Eyes were divided into three groups based on axial length: a “normal group,” a “high myopia group,” and an “other retinal disease” group. The researchers trained and validated three DL models to classify the three groups based on horizontal and vertical OCT images of the 600 eyes. For evaluation, OCT images of 90 eyes were used. Diagnostic agreements of human doctors and DL models were analyzed. The area under the receiver operating characteristic curve of the three DL models was evaluated. Absolute agreement of retina specialists was 99.11% (range: 97.78–100%). Absolute agreement of the DL models with multiple-column model was 100.0% (ResNet 50), 90.0% (Inception V3), and 72.22% (VGG 16). Areas under the receiver operating characteristic curves of the DL models with multiple-column model were 0.99 (ResNet 50), 0.97 (Inception V3), and 0.86 (VGG 16). The DL model based on ResNet 50 showed comparable diagnostic performance with retinal specialists. The DL model using OCT images demonstrated reliable diagnostic performance to identify high myopia.
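The "multiple-column model" evaluated here appears to take the horizontal and vertical OCT scans as separate inputs. One plausible reading (backbone choice and late fusion are assumptions) is a two-column network:

```python
# One plausible reading of a "multiple-column" model: one backbone per OCT
# view, fused before the classifier. Backbone and fusion are assumptions.
import torch
import torch.nn as nn
import torchvision

class TwoColumnOCT(nn.Module):
    def __init__(self, n_classes=3):  # normal / high myopia / other disease
        super().__init__()
        self.col_h = torchvision.models.resnet50(num_classes=256)  # horizontal scan
        self.col_v = torchvision.models.resnet50(num_classes=256)  # vertical scan
        self.classifier = nn.Linear(512, n_classes)

    def forward(self, x_h, x_v):
        z = torch.cat([self.col_h(x_h), self.col_v(x_v)], dim=1)  # late fusion
        return self.classifier(z)

model = TwoColumnOCT()
out = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(out.shape)  # torch.Size([2, 3])
```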
Affiliation(s)
- Kyung Jun Choi
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Jung Eun Choi
- Medical AI Research Center, Samsung Medical Center, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Hyeon Cheol Roh
- Department of Ophthalmology, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon, Republic of Korea
| | - Jun Soo Eun
- Department of Ophthalmology, Gil Medical Center, Gachon University, Incheon, Republic of Korea
| | | | - Yong Kyun Shin
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Min Chae Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Joon Kyo Chung
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Chaeyeon Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Dongyoung Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Se Woong Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Baek Hwan Cho
- Medical AI Research Center, Samsung Medical Center, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea.,Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, 06351, Republic of Korea.
| | - Sang Jin Kim
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea.
| |
80
Jahangir S, Khan HA. Artificial intelligence in ophthalmology and visual sciences: Current implications and future directions. Artif Intell Med Imaging 2021; 2:95-103. [DOI: 10.35711/aimi.v2.i5.95] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 06/30/2021] [Accepted: 10/27/2021] [Indexed: 02/06/2023] Open
Abstract
Since its inception in 1959, artificial intelligence (AI) has evolved at an unprecedented rate and has revolutionized the world of medicine. Ophthalmology, being an image-driven field of medicine, is well suited to the implementation of AI. Machine learning (ML) and deep learning (DL) models are being utilized to screen for vision-threatening ocular conditions. These models have proven accurate and reliable for diagnosing anterior and posterior segment diseases, screening large populations, and even predicting the natural course of various ocular morbidities. With the increase in population and the global burden of managing irreversible blindness, AI offers a unique solution when implemented in clinical practice. In this review, we discuss what AI, ML, and DL are, their uses, future directions for AI, and its limitations in ophthalmology.
Affiliation(s)
- Smaha Jahangir
- School of Optometry, The University of Faisalabad, Faisalabad, Punjab 38000, Pakistan
| | - Hashim Ali Khan
- Department of Ophthalmology, SEHHAT Foundation, Gilgit 15100, Gilgit-Baltistan, Pakistan
| |
81
Sharmila C, Shanthi N. An Effective Approach Based on Deep Residual Google Net Convolutional Neural Network Classifier for the Detection of Glaucoma. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Glaucoma is a disease caused by fluid pressure build-up in the inner eye. Early detection of glaucoma is critical, as 111.8 million people worldwide are expected to suffer from glaucoma by 2040. Machine learning methods are highly promising for glaucoma diagnosis, and this paper provides one such method. Initially, human retinal fundus images are preprocessed by means of histogram equalization in order to enhance them. Segmentation is performed by a semantic segmentation method, and features are extracted using a density- and correlation-based feature extraction approach. Principal component analysis (PCA) is used to select the optimal features. Finally, using the Deep Residual Google Net CNN classification method, the retinal image is classified as normal or abnormal. The Deep Residual Google Net CNN classifier is designed to recognize visual patterns from pixel images with minimal preprocessing. The ORIGA and STARE datasets are used in this work. The findings are analyzed and contrasted with alternative current techniques to illustrate the efficacy of the new method. A test accuracy of 99%, specificity of 98.9%, and sensitivity of 100% were achieved. The quantitative results are analyzed for metrics such as sensitivity, specificity, accuracy, positive predictive rate, and false predictive rate, and show excellent outcomes when compared with traditional methods.
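Two of the pipeline stages named in this abstract, histogram equalization and PCA-based feature selection, are standard operations. A brief sketch on synthetic stand-ins:

```python
# Sketch of two named pipeline stages on synthetic stand-ins.
import numpy as np
from skimage import exposure
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
image = rng.random((256, 256))              # stand-in grayscale fundus image
enhanced = exposure.equalize_hist(image)    # spread intensities over the full range

features = rng.normal(size=(100, 50))       # 100 eyes x 50 extracted features
pca = PCA(n_components=0.95)                # keep components explaining 95% variance
reduced = pca.fit_transform(features)
print(enhanced.shape, reduced.shape)
```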
Affiliation(s)
- C. Sharmila
- Information Technology, Excel Engineering College, Komarapalayam, Namakkal 637303, India
| | - N. Shanthi
- Computer Science Engineering, Kongu Engineering College, Perundurai, Erode 638060, India
| |
82
Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041 DOI: 10.1111/ceo.14000] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 09/02/2021] [Accepted: 09/08/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning versus ophthalmologists in glaucoma diagnosis on fundus examinations. METHOD PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched, up to 10 December 2020, for studies reporting a comparison of the glaucoma diagnosis performance of deep learning and ophthalmologists on fundus examinations on the same datasets. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence interval [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference using several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning had performance similar to that of ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
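The pooling step in such a meta-analysis is typically a random-effects combination of per-study estimates. A worked sketch of DerSimonian-Laird pooling on invented AUCs and standard errors (not the study's data):

```python
# Worked sketch of DerSimonian-Laird random-effects pooling on invented
# per-study AUCs and standard errors (not the study's data).
import numpy as np

auc = np.array([0.95, 0.88, 0.92, 0.97, 0.90, 0.93])   # per-study AUCs
se = np.array([0.02, 0.04, 0.03, 0.015, 0.05, 0.025])  # per-study standard errors

w = 1 / se**2                                   # fixed-effect weights
mu_fe = np.sum(w * auc) / np.sum(w)             # fixed-effect pooled mean
q = np.sum(w * (auc - mu_fe) ** 2)              # Cochran's Q heterogeneity statistic
df = len(auc) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                       # random-effects weights
mu_re = np.sum(w_re * auc) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = mu_re - 1.96 * se_re, mu_re + 1.96 * se_re
print(f"pooled AUC = {mu_re:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```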
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
| | - Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France.,Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France.,Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
| | - Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia.,Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
| | - Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
| | - Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
| | - Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France
| |
83
Abstract
PURPOSE OF REVIEW Systemic retinal biomarkers are biomarkers identified in the retina and related to evaluation and management of systemic disease. This review summarizes the background, categories and key findings from this body of research as well as potential applications to clinical care. RECENT FINDINGS Potential systemic retinal biomarkers for cardiovascular disease, kidney disease and neurodegenerative disease were identified using regression analysis as well as more sophisticated image processing techniques. Deep learning techniques were used in a number of studies predicting diseases including anaemia and chronic kidney disease. A virtual coronary artery calcium score performed well against other competing traditional models of event prediction. SUMMARY Systemic retinal biomarker research has progressed rapidly using regression studies with clearly identified biomarkers such as retinal microvascular patterns, as well as using deep learning models. Future systemic retinal biomarker research may be able to boost performance using larger data sets, the addition of meta-data and higher resolution image inputs.
84
Lin D, Xiong J, Liu C, Zhao L, Li Z, Yu S, Wu X, Ge Z, Hu X, Wang B, Fu M, Zhao X, Wang X, Zhu Y, Chen C, Li T, Li Y, Wei W, Zhao M, Li J, Xu F, Ding L, Tan G, Xiang Y, Hu Y, Zhang P, Han Y, Li JPO, Wei L, Zhu P, Liu Y, Chen W, Ting DSW, Wong TY, Chen Y, Lin H. Application of Comprehensive Artificial intelligence Retinal Expert (CARE) system: a national real-world evidence study. LANCET DIGITAL HEALTH 2021; 3:e486-e495. [PMID: 34325853 DOI: 10.1016/s2589-7500(21)00086-8] [Citation(s) in RCA: 65] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 04/21/2021] [Accepted: 05/07/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND Medical artificial intelligence (AI) has entered the clinical implementation phase, although real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted. METHODS In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed. FINDINGS The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0·955 (SD 0·046). AUC values in the external test set were 0·965 (0·035) in tertiary hospitals, 0·983 (0·031) in community hospitals, and 0·953 (0·042) in physical examination centres. The performance of CARE was similar to that of ophthalmologists. Large variations in sensitivity were observed among the ophthalmologists in different regions and with varying experience. The system retained strong identification performance when tested using the non-Chinese dataset (AUC 0·960, 95% CI 0·957-0·964 in referable diabetic retinopathy). INTERPRETATION Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, and so could allow the system to be implemented and adopted for clinical care. FUNDING This study was funded by the National Key R&D Programme of China, the Science and Technology Planning Projects of Guangdong Province, the National Natural Science Foundation of China, the Natural Science Foundation of Guangdong Province, and the Fundamental Research Funds for the Central Universities. TRANSLATION For the Chinese translation of the abstract see Supplementary Materials section.
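As a concrete illustration of the reported evaluation, here is a minimal sketch of per-abnormality ROC AUC for a 14-class multi-label screen of the kind CARE performs; the labels and scores are synthetic stand-ins, not the study's outputs.

```python
# Hedged sketch: per-class AUC for a multi-label fundus-abnormality screen.
# `y_true` and `y_score` are illustrative stand-ins, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_images, n_classes = 1000, 14            # 14 abnormalities, as in CARE
y_true = rng.integers(0, 2, size=(n_images, n_classes))
y_score = np.clip(y_true * 0.7 + rng.normal(0.3, 0.2, (n_images, n_classes)), 0, 1)

per_class_auc = [roc_auc_score(y_true[:, k], y_score[:, k]) for k in range(n_classes)]
print("mean AUC %.3f (SD %.3f)" % (np.mean(per_class_auc), np.std(per_class_auc)))
```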
Collapse
Affiliation(s)
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Jianhao Xiong
- Beijing Eaglevision Technology Development, Beijing, China
| | - Congxin Liu
- Beijing Eaglevision Technology Development, Beijing, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zongyuan Ge
- Department of Electrical and Computer Systems Engineering, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
| | - Xinyue Hu
- Beijing Eaglevision Technology Development, Beijing, China
| | - Bin Wang
- Beijing Eaglevision Technology Development, Beijing, China
| | - Meng Fu
- Beijing Eaglevision Technology Development, Beijing, China
| | - Xin Zhao
- Beijing Eaglevision Technology Development, Beijing, China
| | - Xin Wang
- Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Chuan Chen
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Tao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yonghao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Wenbin Wei
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Mingwei Zhao
- Department of Ophthalmology, Ophthalmology and Optometry Centre, Peking University People's Hospital, Beijing, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan, Shandong, China
| | - Fan Xu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
| | - Lin Ding
- Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
| | - Gang Tan
- Department of Ophthalmology, University of South China, Hengyang, Hunan, China
| | - Yi Xiang
- Department of Ophthalmology, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Yongcheng Hu
- Bayannur Paralympic Eye Hospital, Bayannur, Inner Mongolia, China
| | - Ping Zhang
- Bayannur Paralympic Eye Hospital, Bayannur, Inner Mongolia, China
| | - Yu Han
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
| | | | - Lai Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Pengzhi Zhu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, Guangdong, China
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Weirong Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Daniel S W Ting
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Yuzhong Chen
- Beijing Eaglevision Technology Development, Beijing, China.
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China; Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
Collapse
|
85
|
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682 PMCID: PMC8437147 DOI: 10.3389/fmed.2021.710329] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Accepted: 07/23/2021] [Indexed: 12/21/2022] Open
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We report a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology into ophthalmology.
Collapse
Affiliation(s)
- Raffaele Nuzzi
- Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy
| | | | | | | |
Collapse
|
86
|
Cen LP, Ji J, Lin JW, Ju ST, Lin HJ, Li TP, Wang Y, Yang JF, Liu YF, Tan S, Tan L, Li D, Wang Y, Zheng D, Xiong Y, Wu H, Jiang J, Wu Z, Huang D, Shi T, Chen B, Yang J, Zhang X, Luo L, Huang C, Zhang G, Huang Y, Ng TK, Chen H, Chen W, Pang CP, Zhang M. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nat Commun 2021; 12:4828. [PMID: 34376678 PMCID: PMC8355164 DOI: 10.1038/s41467-021-25138-w] [Citation(s) in RCA: 97] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Accepted: 07/22/2021] [Indexed: 02/05/2023] Open
Abstract
Retinal fundus diseases can lead to irreversible visual impairment without timely diagnoses and appropriate treatments. Single disease-based deep learning algorithms have been developed for the detection of diabetic retinopathy, age-related macular degeneration, and glaucoma. Here, we developed a deep learning platform (DLP) capable of detecting multiple common referable fundus diseases and conditions (39 classes) by using 249,620 fundus images marked with 275,543 labels from heterogeneous sources. Our DLP achieved a frequency-weighted average F1 score of 0.923, sensitivity of 0.978, specificity of 0.996 and area under the receiver operating characteristic curve (AUC) of 0.9984 for multi-label classification in the primary test dataset and reached the average level of retina specialists. External multihospital testing, testing on public datasets, and a tele-reading application also showed high performance in detecting multiple retinal diseases and conditions. These results indicate that our DLP can be applied for retinal fundus disease triage, especially in remote areas around the world.
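To make the headline metric concrete, below is a minimal sketch of how a frequency-weighted average F1 score can be computed for a multi-label classifier with scikit-learn; the indicator matrices are synthetic stand-ins, not the study's data, and the 90% agreement rate is an arbitrary illustration.

```python
# Hedged sketch: frequency-weighted F1 for multi-label fundus classification.
# Synthetic indicator matrices stand in for the 39-class labels.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=(500, 39))
y_pred = np.where(rng.random((500, 39)) < 0.9, y_true, 1 - y_true)  # ~90% agreement

# average="weighted" weights each class's F1 by its label frequency.
print("frequency-weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```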
Collapse
Affiliation(s)
- Ling-Ping Cen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jie Ji
- Network & Information Centre, Shantou University, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- XuanShi Med Tech (Shanghai) Company Limited, Shanghai, China
| | - Jian-Wei Lin
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Si-Tong Ju
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Hong-Jie Lin
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Tai-Ping Li
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yun Wang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jian-Feng Yang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yu-Fen Liu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Shaoying Tan
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Li Tan
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Dongjie Li
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yifan Wang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Dezhi Zheng
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yongqun Xiong
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Hanfu Wu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jingjing Jiang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Zhenggen Wu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Dingguo Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Tingkun Shi
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Binyao Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jianling Yang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Xiaoling Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Li Luo
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Chukai Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Guihua Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yuqiang Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Tsz Kin Ng
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
| | - Haoyu Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Weiqi Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Chi Pui Pang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
| | - Mingzhi Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China.
| |
Collapse
|
87
|
Avilés-Rodríguez GJ, Nieto-Hipólito JI, Cosío-León MDLÁ, Romo-Cárdenas GS, Sánchez-López JDD, Radilla-Chávez P, Vázquez-Briseño M. Topological Data Analysis for Eye Fundus Image Quality Assessment. Diagnostics (Basel) 2021; 11:1322. [PMID: 34441257 PMCID: PMC8394537 DOI: 10.3390/diagnostics11081322] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 11/29/2022] Open
Abstract
The objective of this work is to perform image quality assessment (IQA) of eye fundus images in the context of digital fundoscopy with topological data analysis (TDA) and machine learning methods. Eye health remains inaccessible to a large proportion of the global population. Digital tools that automate the eye exam could be used to address this issue. IQA is a fundamental step in digital fundoscopy for clinical applications; it is one of the first steps in the preprocessing stages of computer-aided diagnosis (CAD) systems using eye fundus images. Images from the EyePACS dataset were used, and quality labels from previous works in the literature were selected. Cubical complexes were used to represent the images; persistent homology was then computed on the grayscale version of each complex and represented with persistence diagrams. Then, 30 vectorized topological descriptors were calculated from each image and used as input to a classification algorithm. Six different algorithms were tested for this study (SVM, decision tree, k-NN, random forest, logistic regression (LoGit), MLP). LoGit was selected and used for the classification of all images, given its low computational cost. Performance results on the validation subset showed a global accuracy of 0.932, precision of 0.912 for label "quality" and 0.952 for label "no quality", recall of 0.932 for label "quality" and 0.912 for label "no quality", AUC of 0.980, F1 score of 0.932, and a Matthews correlation coefficient of 0.864. This work offers evidence for the use of topological methods in the quality assessment of eye fundus images, where a relatively small feature vector (30 characteristics in this case) can capture enough information for an algorithm to yield classification results useful in the clinical setting of a digital fundoscopy pipeline for CAD.
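As an illustration of the pipeline's shape (image, cubical complex, persistence diagram, small topological feature vector, logistic regression), here is a hedged sketch assuming the gudhi library; the three lifetime summaries per homology dimension are illustrative choices, not the paper's 30 descriptors.

```python
# Hedged sketch of a TDA-based IQA pipeline: grayscale image -> cubical
# complex -> persistence diagram -> small feature vector -> logistic regression.
# Assumes the `gudhi` library; features and data are illustrative only.
import numpy as np
import gudhi
from sklearn.linear_model import LogisticRegression

def topological_features(gray_img):
    cc = gudhi.CubicalComplex(top_dimensional_cells=gray_img)
    pairs = cc.persistence()                   # [(dim, (birth, death)), ...]
    feats = []
    for dim in (0, 1):                         # connected components, loops
        lifetimes = [d - b for (q, (b, d)) in pairs
                     if q == dim and np.isfinite(d)]
        lifetimes = lifetimes or [0.0]
        feats += [np.sum(lifetimes), np.max(lifetimes), len(lifetimes)]
    return np.array(feats)

# Illustrative training on random images with random quality labels.
rng = np.random.default_rng(2)
X = np.array([topological_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```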
Collapse
Affiliation(s)
- Gener José Avilés-Rodríguez
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Juan Iván Nieto-Hipólito
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - María de los Ángeles Cosío-León
- Dirección de Investigación, Innovación y Posgrado, Universidad Politécnica de Pachuca, Carretera Ciudad Sahagún-Pachuca Km. 20, Ex-Hacienda de Santa Bárbara, Hidalgo 43830, Mexico;
| | - Gerardo Salvador Romo-Cárdenas
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Juan de Dios Sánchez-López
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Patricia Radilla-Chávez
- Escuela de Ciencias de la Salud, Universidad Autónoma de Baja California, Carretera Transpeninsular S/N, Valle Dorado, Ensenada 22890, Mexico;
| | - Mabel Vázquez-Briseño
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| |
Collapse
|
88
|
Nakahara K, Asaoka R, Tanito M, Shibata N, Mitsuhashi K, Fujino Y, Matsuura M, Inoue T, Azuma K, Obata R, Murata H. Deep learning-assisted (automatic) diagnosis of glaucoma using a smartphone. Br J Ophthalmol 2021; 106:587-592. [PMID: 34261663 DOI: 10.1136/bjophthalmol-2020-318107] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Accepted: 01/07/2021] [Indexed: 11/04/2022]
Abstract
BACKGROUND/AIMS To validate a deep learning algorithm to diagnose glaucoma from fundus photography obtained with a smartphone. METHODS A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects. In the testing dataset, fundus photographs were acquired using an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset. The trained neural network was evaluated on its ability to classify eyes as glaucomatous or normal in the testing dataset, using images from both an ordinary fundus camera and a smartphone. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC). RESULTS The AROC with a fundus camera was 98.9% and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < -12 dB, N=26), the AROC with a fundus camera was 99.3% and 90.0% with a smartphone. There were significant differences between these AROC values using different cameras. CONCLUSION The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had a considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
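One way to quantify a camera-to-camera AROC difference of this kind is a paired bootstrap over eyes; the sketch below uses synthetic scores with the study's sample sizes and is not the authors' statistical method.

```python
# Hedged sketch: paired bootstrap CI for the AROC difference of one classifier
# applied to fundus-camera vs smartphone photographs of the same eyes.
# Labels and scores are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 162                                       # 73 glaucoma + 89 normal eyes
y = rng.integers(0, 2, n)
s_cam = np.clip(y * 0.8 + rng.normal(0.1, 0.15, n), 0, 1)
s_phone = np.clip(y * 0.5 + rng.normal(0.25, 0.25, n), 0, 1)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)               # resample eyes with replacement
    if len(np.unique(y[idx])) < 2:
        continue                              # AUC needs both classes present
    diffs.append(roc_auc_score(y[idx], s_cam[idx]) -
                 roc_auc_score(y[idx], s_phone[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AROC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```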
Collapse
Affiliation(s)
| | - Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan .,Seirei Christopher University, Shizuoka, Hamamatsu, Japan.,Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan.,The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Japan.,Department of Ophthalmology, University of Tokyo, Tokyo, Japan
| | - Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
| | | | | | - Yuri Fujino
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan.,Department of Ophthalmology, University of Tokyo, Tokyo, Japan.,Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
| | - Masato Matsuura
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
| | - Tatsuya Inoue
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan.,Department of Ophthalmology and Microtechnology, Yokohama City University School of Medicine, Kanagawa, Japan
| | - Keiko Azuma
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
| | - Ryo Obata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
| | - Hiroshi Murata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
| |
Collapse
|
89
|
Wu JH, Liu TYA, Hsu WT, Ho JHC, Lee CC. Performance and Limitation of Machine Learning Algorithms for Diabetic Retinopathy Screening: Meta-analysis. J Med Internet Res 2021; 23:e23863. [PMID: 34407500 PMCID: PMC8406115 DOI: 10.2196/23863] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 11/19/2020] [Accepted: 04/30/2021] [Indexed: 12/23/2022] Open
Abstract
Background Diabetic retinopathy (DR), whose standard diagnosis is performed by human experts, has high prevalence and requires a more efficient screening method. Although machine learning (ML)-based automated DR diagnosis has gained attention due to the recent approval of IDx-DR, performance of this tool has not been examined systematically, and the best ML technique for use in a real-world setting has not been discussed. Objective The aim of this study was to systematically examine the overall diagnostic accuracy of ML in diagnosing DR of different categories based on color fundus photographs and to determine the state-of-the-art ML approach. Methods Published studies in PubMed and EMBASE were searched from inception to June 2020. Studies were screened for relevant outcomes, publication types, and data sufficiency, and a total of 60 out of 2128 (2.82%) studies were retrieved after study selection. Extraction of data was performed by 2 authors according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), and the quality assessment was performed according to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). Diagnostic accuracy estimates were pooled using a bivariate random effects model. The main outcomes included diagnostic accuracy, sensitivity, and specificity of ML in diagnosing DR based on color fundus photographs, as well as the performances of different major types of ML algorithms. Results The primary meta-analysis included 60 color fundus photograph studies (445,175 interpretations). Overall, ML demonstrated high accuracy in diagnosing DR of various categories, with a pooled area under the receiver operating characteristic curve (AUROC) ranging from 0.97 (95% CI 0.96-0.99) to 0.99 (95% CI 0.98-1.00). The performance of ML in detecting more-than-mild DR was robust (sensitivity 0.95; AUROC 0.97), and by subgroup analyses, we observed that robust performance of ML was not limited to benchmark data sets (sensitivity 0.92; AUROC 0.96) but could be generalized to images collected in clinical practice (sensitivity 0.97; AUROC 0.97). Neural network was the most widely used method, and the subgroup analysis revealed a pooled AUROC of 0.98 (95% CI 0.96-0.99) for studies that used neural networks to diagnose more-than-mild DR. Conclusions This meta-analysis demonstrated high diagnostic accuracy of ML algorithms in detecting DR on color fundus photographs, suggesting that state-of-the-art, ML-based DR screening algorithms are likely ready for clinical applications. However, a significant portion of the earlier published studies had methodology flaws, such as the lack of external validation and presence of spectrum bias. The results of these studies should be interpreted with caution.
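The pooling step can be illustrated with a simpler univariate stand-in: a DerSimonian-Laird random-effects pool of logit-transformed sensitivities. The review itself used a bivariate random effects model, which pools sensitivity and specificity jointly; the study counts below are invented for illustration.

```python
# Hedged sketch: univariate DerSimonian-Laird random-effects pooling of
# logit-transformed sensitivities (a simplification of the bivariate model).
import numpy as np

tp = np.array([90, 180, 45])      # true positives per study (illustrative)
fn = np.array([10, 15, 5])        # false negatives per study (illustrative)

sens = tp / (tp + fn)
theta = np.log(sens / (1 - sens))            # logit sensitivity per study
var = 1 / tp + 1 / fn                        # approximate within-study variance

w = 1 / var                                  # fixed-effect weights
theta_fe = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fe) ** 2)      # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (Q - (len(theta) - 1)) /
           (np.sum(w) - np.sum(w ** 2) / np.sum(w)))   # DL between-study variance

w_re = 1 / (var + tau2)                      # random-effects weights
theta_re = np.sum(w_re * theta) / np.sum(w_re)
print("pooled sensitivity:", 1 / (1 + np.exp(-theta_re)))
```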
Collapse
Affiliation(s)
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
| | - T Y Alvin Liu
- Retina Division, Wilmer Eye Institute, The Johns Hopkins Medicine, Baltimore, MD, United States
| | - Wan-Ting Hsu
- Harvard TH Chan School of Public Health, Boston, MA, United States
| | | | - Chien-Chang Lee
- Health Data Science Research Group, National Taiwan University Hospital, Taipei, Taiwan.,The Centre for Intelligent Healthcare, National Taiwan University Hospital, Taipei, Taiwan.,Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
| |
Collapse
|
90
|
Yellapragada B, Hornauer S, Snyder K, Yu S, Yiu G. Self-Supervised Feature Learning and Phenotyping for Assessing Age-Related Macular Degeneration Using Retinal Fundus Images. Ophthalmol Retina 2021; 6:116-129. [PMID: 34217854 DOI: 10.1016/j.oret.2021.06.010] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 06/24/2021] [Accepted: 06/25/2021] [Indexed: 12/18/2022]
Abstract
OBJECTIVE Diseases such as age-related macular degeneration (AMD) are classified based on human rubrics that are prone to bias. Supervised neural networks trained using human-generated labels require labor-intensive annotations and are restricted to specific trained tasks. Here, we trained a self-supervised deep learning network using unlabeled fundus images, enabling data-driven feature classification of AMD severity and discovery of ocular phenotypes. DESIGN Development of a self-supervised training pipeline to evaluate fundus photographs from the Age-Related Eye Disease Study (AREDS). PARTICIPANTS One hundred thousand eight hundred forty-eight human-graded fundus images from 4757 AREDS participants between 55 and 80 years of age. METHODS We trained a deep neural network with self-supervised Non-Parametric Instance Discrimination (NPID) using AREDS fundus images without labels then evaluated its performance in grading AMD severity using 2-step, 4-step, and 9-step classification schemes using a supervised classifier. We compared balanced and unbalanced accuracies of NPID against supervised-trained networks and ophthalmologists, explored network behavior using hierarchical learning of image subsets and spherical k-means clustering of feature vectors, then searched for ocular features that can be identified without labels. MAIN OUTCOME MEASURES Accuracy and kappa statistics. RESULTS NPID demonstrated versatility across different AMD classification schemes without re-training and achieved balanced accuracies comparable with those of supervised-trained networks or human ophthalmologists in classifying advanced AMD (82% vs. 81-92% or 89%), referable AMD (87% vs. 90-92% or 96%), or on the 4-step AMD severity scale (65% vs. 63-75% or 67%), despite never directly using these labels during self-supervised feature learning. Drusen area drove network predictions on the 4-step scale, while depigmentation and geographic atrophy (GA) areas correlated with advanced AMD classes. Self-supervised learning revealed grader-mislabeled images and susceptibility of some classes within more granular AMD scales to misclassification by both ophthalmologists and neural networks. Importantly, self-supervised learning enabled data-driven discovery of AMD features such as GA and other ocular phenotypes of the choroid (e.g., tessellated or blonde fundi), vitreous (e.g., asteroid hyalosis), and lens (e.g., nuclear cataracts) that were not predefined by human labels. CONCLUSIONS Self-supervised learning enables AMD severity grading comparable with that of ophthalmologists and supervised networks, reveals biases of human-defined AMD classification systems, and allows unbiased, data-driven discovery of AMD and non-AMD ocular phenotypes.
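The phenotype-discovery step rests on clustering learned feature vectors on the unit sphere. A minimal sketch, assuming synthetic embeddings in place of the network's: L2-normalise the vectors and run k-means, which on unit-norm inputs approximates the spherical k-means used in the study.

```python
# Hedged sketch of the clustering step: normalise embeddings to the unit
# sphere, then k-means. Features here are random stand-ins for NPID embeddings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(4)
features = rng.normal(size=(1000, 128))      # stand-in for learned features
unit_features = normalize(features)          # project onto the unit sphere

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(unit_features)
print("cluster sizes:", np.bincount(km.labels_))
```

On unit-norm vectors, minimising Euclidean distance is equivalent to maximising cosine similarity up to centroid normalisation, which is why this plain k-means serves as a reasonable approximation here.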
Collapse
Affiliation(s)
- Baladitya Yellapragada
- Department of Vision Science, University of California, Berkeley, Berkeley, California; International Computer Science Institute, Berkeley, California; Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
| | - Sascha Hornauer
- International Computer Science Institute, Berkeley, California
| | - Kiersten Snyder
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
| | - Stella Yu
- Department of Vision Science, University of California, Berkeley, Berkeley, California; International Computer Science Institute, Berkeley, California
| | - Glenn Yiu
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California.
| |
Collapse
|
91
|
Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs. Eye (Lond) 2021; 36:1433-1441. [PMID: 34211137 DOI: 10.1038/s41433-021-01552-8] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 03/24/2021] [Accepted: 04/13/2021] [Indexed: 02/07/2023] Open
Abstract
OBJECTIVES To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images. METHODS A total of 8739 retinal fundus images were collected from a retrospective cohort of 3285 patients. For detecting DR and DMO, a multiple improved Inception-v4 ensembling approach was developed. We measured the algorithm's performance and made a comparison with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and the number of input images used in training on the model's performance. Further, the time budget of training/inference versus model performance was analyzed. RESULTS On our primary test dataset, the model achieved an AUC of 0.992 (95% CI, 0.989-0.995), corresponding to 0.925 (95% CI, 0.916-0.936) sensitivity and 0.961 (95% CI, 0.950-0.972) specificity for referable DR, while the sensitivity and specificity for ophthalmologists ranged from 0.845 to 0.936, and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992-0.996) with a 0.930 (95% CI, 0.919-0.941) sensitivity and 0.971 (95% CI, 0.965-0.978) specificity, whereas ophthalmologists obtained sensitivities ranging between 0.852 and 0.946, and specificities ranging between 0.926 and 0.985. CONCLUSION This study showed that the deep ensemble model exhibited excellent performance in detecting DR and DMO, and had good robustness and generalization, which could potentially help support and expand DR/DMO screening programs.
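The core of a deep ensemble of this kind is averaging the member networks' predicted probabilities. A hedged PyTorch sketch follows; the tiny backbone is a placeholder, not the paper's improved Inception-v4 members.

```python
# Hedged sketch: ensembling by averaging predicted probabilities of several
# independently trained networks. The backbone is a toy stand-in.
import torch
import torch.nn as nn

def make_member():
    # Tiny placeholder backbone; the study used improved Inception-v4 networks.
    return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

members = [make_member() for _ in range(3)]
x = torch.randn(4, 3, 224, 224)               # a batch of fundus images

with torch.no_grad():
    # Stack each member's softmax output, then average across the ensemble.
    probs = torch.stack([m(x).softmax(dim=1) for m in members]).mean(dim=0)
print("ensemble referable-DR probability:", probs[:, 1])
```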
Collapse
|
92
|
Soans RS, Grillini A, Saxena R, Renken RJ, Gandhi TK, Cornelissen FW. Eye-Movement-Based Assessment of the Perceptual Consequences of Glaucomatous and Neuro-Ophthalmological Visual Field Defects. Transl Vis Sci Technol 2021; 10:1. [PMID: 34003886 PMCID: PMC7873497 DOI: 10.1167/tvst.10.2.1] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose Assessing the presence of visual field defects (VFD) through procedures such as perimetry is an essential aspect of the management and diagnosis of ocular disorders. However, even the latest perimetric methods have shortcomings: high cognitive demand, the need for prolonged stable fixation, and reliance on feedback through a button response. Consequently, an approach using eye movements (EM) as a natural response has been proposed as an alternate way to evaluate the presence of VFD. This approach has given good results for computer-simulated VFD. However, its use in patients is not well documented yet. Here we use this new approach to quantify the spatiotemporal properties (STP) of EM in patients with glaucomatous or neuro-ophthalmological VFD and in controls. Methods In total, 15 glaucoma patients, 37 patients with a neuro-ophthalmological disorder, and 21 controls performed a visual tracking task while their EM were being recorded. Subsequently, the STP of EM were quantified using a cross-correlogram analysis. Decision trees were used to identify the relevant STP and classify the populations. Results We achieved a classification accuracy of 94.5% (TPR/sensitivity = 96%, TNR/specificity = 90%) between patients and controls. Individually, the algorithm achieved an accuracy of 86.3% (TPR for neuro-ophthalmology [97%], glaucoma [60%], and controls [86%]). The STP of EM were highly similar across two different control cohorts. Conclusions In an ocular tracking task, patients with VFD due to different underlying pathology make EM with distinctive STP. These properties are interpretable based on different clinical characteristics of patients and can be used for patient classification. Translational Relevance Our EM-based screening tool may complement existing perimetric techniques in clinical practice.
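A cross-correlogram of this kind correlates the target trace with the gaze trace across a range of lags; the delay and height of the peak are examples of the spatiotemporal properties that could feed a classifier. The sketch below uses a synthetic gaze signal (a delayed, noisy copy of the target), not the study's recordings.

```python
# Hedged sketch of a cross-correlogram: correlation between target and gaze
# traces at a range of lags, reading off the peak's delay and height.
import numpy as np

fs = 250                                      # Hz, illustrative sampling rate
t = np.arange(0, 10, 1 / fs)
target = np.sin(2 * np.pi * 0.4 * t)          # smoothly moving target
gaze = (np.roll(target, int(0.15 * fs))       # gaze lags the target by ~150 ms
        + 0.2 * np.random.default_rng(5).normal(size=t.size))

lags = np.arange(-fs, fs + 1)                 # lags from -1 s to +1 s
xcorr = np.array([np.corrcoef(target, np.roll(gaze, -k))[0, 1] for k in lags])
peak = lags[np.argmax(xcorr)]
print(f"peak correlation {xcorr.max():.2f} at lag {peak / fs * 1000:.0f} ms")
```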
Collapse
Affiliation(s)
- Rijul Saurabh Soans
- Department of Electrical Engineering, Indian Institute of Technology - Delhi, New Delhi, India.,Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, The Netherlands
| | - Alessandro Grillini
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, The Netherlands
| | - Rohit Saxena
- Department of Ophthalmology, Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
| | - Remco J Renken
- Cognitive Neuroscience Center, Department of Biomedical Sciences of Cells and Systems, University Medical Center Groningen, University of Groningen, The Netherlands
| | - Tapan Kumar Gandhi
- Department of Electrical Engineering, Indian Institute of Technology - Delhi, New Delhi, India
| | - Frans W Cornelissen
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, The Netherlands
| |
Collapse
|
93
|
Shi Z, Wang T, Huang Z, Xie F, Song G. A method for the automatic detection of myopia in Optos fundus images based on deep learning. Int J Numer Method Biomed Eng 2021; 37:e3460. [PMID: 33773080 DOI: 10.1002/cnm.3460] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 03/08/2021] [Accepted: 03/20/2021] [Indexed: 06/12/2023]
Abstract
Myopia detection is important for preventing irreversible visual impairment and diagnosing myopic retinopathy. To improve the detection efficiency and accuracy, a Myopia Detection Network (MDNet) that combines the advantages of dense connection and Residual Squeeze-and-Excitation attention is proposed in this paper to automatically detect myopia in Optos fundus images. First, an automatic optic disc recognition method is applied to extract the Regions of Interest and remove the noise disturbances; then, data augmentation techniques are implemented to enlarge the data set and prevent overfitting; moreover, an MDNet composed of Attention Dense blocks is constructed to detect myopia in Optos fundus images. The results show that the Mean Absolute Error of the Spherical Equivalent estimated by this network reaches 1.1150 D (diopters), which verifies the feasibility and applicability of this method for the automatic detection of myopia in Optos fundus images.
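For readers unfamiliar with the attention mechanism named here, below is a hedged PyTorch sketch of a residual squeeze-and-excitation block; the layer sizes and placement are illustrative, not MDNet's actual Attention Dense block.

```python
# Hedged sketch of a residual squeeze-and-excitation (SE) block: a conv branch
# whose channels are reweighted by a learned gate, added back to the input.
import torch
import torch.nn as nn

class ResidualSEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels))
        self.se = nn.Sequential(              # squeeze -> excite -> per-channel gate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        y = self.conv(x)
        w = self.se(y).view(y.size(0), -1, 1, 1)   # channel weights in (0, 1)
        return torch.relu(x + y * w)               # residual connection

print(ResidualSEBlock(32)(torch.randn(2, 32, 64, 64)).shape)
```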
Collapse
Affiliation(s)
- Zhengjin Shi
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
| | - Tianyu Wang
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
| | - Zheng Huang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Feng Xie
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
| | - Guoli Song
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
| |
Collapse
|
94
|
Wang Y, Yu M, Hu B, Jin X, Li Y, Zhang X, Zhang Y, Gong D, Wu C, Zhang B, Yang J, Li B, Yuan M, Mo B, Wei Q, Zhao J, Ding D, Yang J, Li X, Yu W, Chen Y. Deep learning-based detection and stage grading for optimising diagnosis of diabetic retinopathy. Diabetes Metab Res Rev 2021; 37:e3445. [PMID: 33713564 DOI: 10.1002/dmrr.3445] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 02/19/2021] [Accepted: 02/23/2021] [Indexed: 11/07/2022]
Abstract
AIMS To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning-based lesion detection and stage grading. MATERIALS AND METHODS A set of 12,252 eligible fundus images of diabetic patients was manually annotated by 45 licensed ophthalmologists and randomly split into training, validation, and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images was established as an external test set. For automated referable DR identification, four deep learning models were programmed based on whether two factors were included: DR-related lesions and DR stages. Sensitivity, specificity and the area under the receiver operating characteristic curve (AUC) were reported for referable DR identification, while precision and recall were reported for lesion detection. RESULTS Adding lesion information to the five-stage grading model improved the AUC (0.943 vs. 0.938), sensitivity (90.6% vs. 90.5%) and specificity (80.7% vs. 78.5%) of the model for identifying referable DR in the internal test set. Adding stage information to the lesion-based model increased the AUC (0.943 vs. 0.936) and sensitivity (90.6% vs. 76.7%) of the model for identifying referable DR in the internal test set. Similar trends were also seen in the external test set. DR lesion types detected with high precision were preretinal haemorrhage, hard exudate, vitreous haemorrhage, neovascularisation, cotton wool spots and fibrous proliferation. CONCLUSIONS The automated model described herein employed DR lesion and stage information to identify referable DR and displayed better diagnostic value than models built without this information.
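The fusion idea (lesion evidence plus stage probabilities feeding one referable-DR decision) can be sketched as a simple late-fusion classifier; everything below, from the six lesion types to the label rule, is an invented illustration rather than the authors' architecture.

```python
# Hedged sketch: late fusion of per-image lesion features (e.g., counts of
# detected lesion types) with five-stage grading probabilities for a final
# referable-DR classifier. All data and the label rule are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 800
lesion_feats = rng.poisson(2.0, size=(n, 6))      # 6 lesion types per image
stage_probs = rng.dirichlet(np.ones(5), size=n)   # 5-stage grading output
# Invented ground truth: referable when moderate+ stages and lesions dominate.
y = (stage_probs[:, 2:].sum(axis=1) + 0.05 * lesion_feats.sum(axis=1) > 1.2).astype(int)

X = np.hstack([lesion_feats, stage_probs])        # concatenate both sources
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("referable-DR training accuracy:", clf.score(X, y))
```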
Collapse
Affiliation(s)
- Yuelin Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Miao Yu
- Department of Endocrinology, Key Laboratory of Endocrinology, National Health Commission, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
| | - Bojie Hu
- Department of Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Xuemin Jin
- Department of Ophthalmology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Yibin Li
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xiao Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Yongpeng Zhang
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Di Gong
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
| | - Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Bilei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Mingzhen Yuan
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Bin Mo
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Qijie Wei
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
| | - Jianchun Zhao
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
| | - Dayong Ding
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
| | - Jingyun Yang
- Department of Neurological Sciences, Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA
| | - Xirong Li
- Key Lab of Data Engineering and Knowledge Engineering, Renmin University of China, Beijing, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
| |
Collapse
|
95
|
Li B, Chen H, Zhang B, Yuan M, Jin X, Lei B, Xu J, Gu W, Wong DCS, He X, Wang H, Ding D, Li X, Chen Y, Yu W. Development and evaluation of a deep learning model for the detection of multiple fundus diseases based on colour fundus photography. Br J Ophthalmol 2021; 106:1079-1086. [PMID: 33785508 DOI: 10.1136/bjophthalmol-2020-316290] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 01/24/2021] [Accepted: 02/16/2021] [Indexed: 12/24/2022]
Abstract
AIM To explore and evaluate an appropriate deep learning system (DLS) for the detection of 12 major fundus diseases using colour fundus photography. METHODS Diagnostic performance of a DLS was tested on the detection of normal fundus and 12 major fundus diseases including referable diabetic retinopathy, pathologic myopic retinal degeneration, retinal vein occlusion, retinitis pigmentosa, retinal detachment, wet and dry age-related macular degeneration, epiretinal membrane, macular hole, possible glaucomatous optic neuropathy, papilledema and optic nerve atrophy. The DLS was developed with 56 738 images and tested with 8176 images from one internal test set and two external test sets. The comparison with human doctors was also conducted. RESULTS The area under the receiver operating characteristic curves of the DLS on the internal test set and the two external test sets were 0.950 (95% CI 0.942 to 0.957) to 0.996 (95% CI 0.994 to 0.998), 0.931 (95% CI 0.923 to 0.939) to 1.000 (95% CI 0.999 to 1.000) and 0.934 (95% CI 0.929 to 0.938) to 1.000 (95% CI 0.999 to 1.000), with sensitivities of 80.4% (95% CI 79.1% to 81.6%) to 97.3% (95% CI 96.7% to 97.8%), 64.6% (95% CI 63.0% to 66.1%) to 100% (95% CI 100% to 100%) and 68.0% (95% CI 67.1% to 68.9%) to 100% (95% CI 100% to 100%), respectively, and specificities of 89.7% (95% CI 88.8% to 90.7%) to 98.1% (95% CI 97.7% to 98.6%), 78.7% (95% CI 77.4% to 80.0%) to 99.6% (95% CI 99.4% to 99.8%) and 88.1% (95% CI 87.4% to 88.7%) to 98.7% (95% CI 98.5% to 99.0%), respectively. When compared with human doctors, the DLS obtained a higher diagnostic sensitivity but lower specificity. CONCLUSION The proposed DLS is effective in diagnosing normal fundus and 12 major fundus diseases, and thus has much potential for fundus disease screening in the real world.
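Sensitivity and specificity at a fixed operating point, the per-disease summary reported here, come straight from a confusion matrix; a minimal sketch with synthetic predictions and an arbitrary 0.5 threshold:

```python
# Hedged sketch: sensitivity and specificity at one operating threshold.
# Labels and probabilities are synthetic, not the study's outputs.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(8)
y_true = rng.integers(0, 2, 1000)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, 1000), 0, 1)

y_pred = (y_prob >= 0.5).astype(int)          # arbitrary operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity {tp / (tp + fn):.3f}, specificity {tn / (tn + fp):.3f}")
```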
Collapse
Affiliation(s)
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China.,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China.,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Bilei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China.,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Mingzhen Yuan
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xuemin Jin
- Department of Ophthalmology, Zhengzhou University First Affiliated Hospital, Zhengzhou, Henan, China
| | - Bo Lei
- Clinical Research Center, Henan Eye Institute, Henan Eye Hospital, Clinical Research Center, Henan Provincial People's Hospital, Zhengzhou, Henan, China
| | - Jie Xu
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wei Gu
- Department of Ophthalmology, Beijing Aier Intech Eye Hospital, Beijing, China
| | | | - Xixi He
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
| | - Hao Wang
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
| | - Dayong Ding
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
| | - Xirong Li
- Key Lab of DEKE, Renmin University of China, Beijing, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China .,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China .,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| |
Collapse
|
96
|
Ishii K, Asaoka R, Omoto T, Mitaki S, Fujino Y, Murata H, Onoda K, Nagai A, Yamaguchi S, Obana A, Tanito M. Predicting intraocular pressure using systemic variables or fundus photography with deep learning in a health examination cohort. Sci Rep 2021; 11:3687. [PMID: 33574359 PMCID: PMC7878799 DOI: 10.1038/s41598-020-80839-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 12/21/2020] [Indexed: 12/17/2022] Open
Abstract
The purpose of the current study was to predict intraocular pressure (IOP) using color fundus photography with a deep learning (DL) model, or systemic variables with a multivariate linear regression model (MLM), along with least absolute shrinkage and selection operator regression (LASSO), support vector machine (SVM), and random forest (RF) models. The training dataset included 3883 examinations from 3883 eyes of 1945 subjects, and the testing dataset included 289 examinations from 289 eyes of 146 subjects. With the training dataset, MLM was constructed to predict IOP using 35 systemic variables and 25 blood measurements. A DL model was developed to predict IOP from color fundus photographs. The prediction accuracy of each model was evaluated through the absolute error and the marginal R-squared (mR2), using the testing dataset. The mean absolute error with MLM was 2.29 mmHg, which was significantly smaller than that with DL (2.70 mmHg). The mR2 with MLM was 0.15, whereas that with DL was 0.0066. The mean absolute error (between 2.24 and 2.30 mmHg) and mR2 (between 0.11 and 0.15) with LASSO, SVM and RF were similar to or poorer than those with MLM. A DL model to predict IOP using color fundus photography proved far less accurate than MLM using systemic variables.
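The model comparison itself is straightforward to reproduce in outline with scikit-learn; the sketch below scores four regressors by mean absolute error on synthetic predictors standing in for the study's 35 systemic variables and 25 blood measurements.

```python
# Hedged sketch: comparing a multivariate linear model with LASSO, SVM, and
# random-forest regressors by MAE, mirroring the study's comparison in outline.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 60))                   # 35 systemic + 25 blood vars
iop = 15 + X[:, :5].sum(axis=1) + rng.normal(0, 2, 2000)   # mmHg, illustrative
X_tr, X_te, y_tr, y_te = train_test_split(X, iop, random_state=0)

models = {"MLM": LinearRegression(), "LASSO": Lasso(alpha=0.1),
          "SVM": SVR(), "RF": RandomForestRegressor(random_state=0)}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE = {mean_absolute_error(y_te, pred):.2f} mmHg")
```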
Collapse
Affiliation(s)
- Kaori Ishii
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
| | - Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan.
- Seirei Christopher University, Hamamatsu, Shizuoka, Japan.
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan.
| | - Takashi Omoto
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
| | - Shingo Mitaki
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Yuri Fujino
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Hiroshi Murata
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
| | - Keiichi Onoda
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
- Faculty of Psychology, Outemon Gakuin University, Osaka, Japan
| | - Atsushi Nagai
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Shuhei Yamaguchi
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Akira Obana
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Hamamatsu BioPhotonics Innovation Chair, Institute for Medical Photonics Research, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, Hamamatsu, Shizuoka, Japan
| | - Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
| |
Collapse
|
97
|
Yu Y, Chen X, Zhu X, Zhang P, Hou Y, Zhang R, Wu C. Performance of Deep Transfer Learning for Detecting Abnormal Fundus Images. J Curr Ophthalmol 2021; 32:368-374. [PMID: 33553839 PMCID: PMC7861106 DOI: 10.4103/joco.joco_123_20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 07/22/2020] [Accepted: 07/27/2020] [Indexed: 11/04/2022] Open
Abstract
Purpose To develop and validate a deep transfer learning (DTL) algorithm for detecting abnormalities in fundus images from non-mydriatic fundus photography examinations. Methods A total of 1295 fundus images were collected to develop and validate a DTL algorithm for detecting abnormal fundus images. After removing 366 poor-quality images, the DTL model was developed using 929 (370 normal and 559 abnormal) fundus images. Data preprocessing was performed to normalize the images. The inception-ResNet-v2 architecture was applied to achieve transfer learning. We tested our model using a subset of the publicly available Messidor dataset (366 images) and evaluated the testing performance of the DTL model for detecting abnormal fundus images. Results In the internal validation dataset (n = 273 images), the area under the curve (AUC), sensitivity, accuracy, and specificity of DTL for correctly classified fundus images were 0.997, 97.41%, 97.07%, and 96.82%, respectively. For the test dataset (n = 273 images), the AUC, sensitivity, accuracy, and specificity of the DTL for correctly classifying fundus images were 0.926, 88.17%, 87.18%, and 86.67%, respectively. Conclusion DTL showed high sensitivity and specificity for detecting abnormal fundus images. Further research is necessary to improve this method and evaluate the applicability of DTL in community health-care centers.
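A typical transfer-learning setup of this shape, sketched in Keras under stated assumptions: an ImageNet-pretrained Inception-ResNet-v2 backbone, frozen at first, with a new binary normal/abnormal head. The input size and head layers are illustrative choices, not the paper's configuration.

```python
# Hedged sketch of a transfer-learning setup with Inception-ResNet-v2.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                        # freeze pretrained features first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # abnormal vs normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```

A common follow-up is to unfreeze the top of the backbone and fine-tune at a lower learning rate once the new head has converged.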
Collapse
Affiliation(s)
- Yan Yu
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - Xiao Chen
- Optoelectronic Technology Research Center, Anhui Normal University, Wuhu, China
| | - XiangBing Zhu
- Optoelectronic Technology Research Center, Anhui Normal University, Wuhu, China
| | - PengFei Zhang
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - YinFen Hou
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - RongRong Zhang
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - ChangFan Wu
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| |
Collapse
|
98
|
Gunasekeran DV, Tham YC, Ting DSW, Tan GSW, Wong TY. Digital health during COVID-19: lessons from operationalising new models of care in ophthalmology. Lancet Digit Health 2021; 3:e124-e134. [PMID: 33509383 DOI: 10.1016/s2589-7500(20)30287-9] [Citation(s) in RCA: 74] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 11/11/2020] [Accepted: 11/18/2020] [Indexed: 12/13/2022]
Abstract
The COVID-19 pandemic has resulted in massive disruptions within health care, both directly as a result of the infectious disease outbreak, and indirectly because of public health measures to mitigate against transmission. This disruption has caused rapid dynamic fluctuations in demand, capacity, and even contextual aspects of health care. Therefore, the traditional face-to-face patient-physician care model has had to be re-examined in many countries, with digital technology and new models of care being rapidly deployed to meet the various challenges of the pandemic. This Viewpoint highlights new models in ophthalmology that have adapted to incorporate digital health solutions such as telehealth, artificial intelligence decision support for triaging and clinical care, and home monitoring. These models can be operationalised for different clinical applications based on the technology, clinical need, demand from patients, and manpower availability, ranging from out-of-hospital models including the hub-and-spoke pre-hospital model, to front-line models such as the inflow funnel model and monitoring models such as the so-called lighthouse model for provider-led monitoring. Lessons learnt from operationalising these models for ophthalmology in the context of COVID-19 are discussed, along with their relevance for other specialty domains.
Affiliation(s)
- Dinesh V Gunasekeran: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Daniel S W Ting: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Gavin S W Tan: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Tien Y Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Duke-NUS Medical School, Singapore
99
Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268 DOI: 10.1016/j.jfo.2020.08.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 08/24/2020] [Indexed: 12/13/2022]
Abstract
Diabetic retinopathy (DR) is a disease whose prevalence is rising with the rapid spread of diabetes worldwide, and it can blind diabetic individuals. Early detection of DR is therefore essential for preserving vision and providing timely treatment. DR can be detected manually by an ophthalmologist, who examines retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhages, exudates, and/or microaneurysms; this is a time-consuming, costly, and challenging task. An automated system can perform this function using artificial intelligence, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhages, and microaneurysms, and points out future directions for overcoming current challenges in the field of DR research.
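Surveys in this area typically contrast deep networks with classical image-processing pipelines for lesion detection. Purely as a concrete illustration of the latter, and not code from the cited survey, the sketch below shows one common classical step: candidate exudate detection on the green channel with OpenCV, with threshold and kernel sizes chosen arbitrarily.

```python
import cv2
import numpy as np


def exudate_candidates(fundus_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of bright-lesion (exudate) candidates.

    Thresholds and kernel sizes here are arbitrary illustrations; a real
    pipeline would also mask the optic disc, which is similarly bright.
    """
    green = fundus_bgr[:, :, 1]                 # lesions contrast best in green
    background = cv2.medianBlur(green, 51)      # coarse illumination estimate
    enhanced = cv2.subtract(green, background)  # remove uneven background
    _, mask = cv2.threshold(enhanced, 15, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle
```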
Affiliation(s)
- A Bilal: Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- G Sun: Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- S Mazhar: Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
100
Mun Y, Kim J, Noh KJ, Lee S, Kim S, Yi S, Park KH, Yoo S, Chang DJ, Park SJ. An innovative strategy for standardized, structured, and interoperable results in ophthalmic examinations. BMC Med Inform Decis Mak 2021; 21:9. [PMID: 33407448 PMCID: PMC7789748 DOI: 10.1186/s12911-020-01370-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Accepted: 12/09/2020] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND Although ophthalmic devices have made remarkable progress and are widely used, most lack standardization of both image review and results reporting systems, making interoperability unachievable. We developed and validated new software for extracting, transforming, and storing information from report images produced by ophthalmic examination devices to generate standardized, structured, and interoperable information to assist ophthalmologists in eye clinics. RESULTS We selected report images derived from optical coherence tomography (OCT). The new software consists of three parts: (1) the Area Explorer, which determines whether the designated area in the configuration file contains numeric values or tomographic images; (2) the Value Reader, which converts images to text according to ophthalmic measurements; and (3) the Finding Classifier, which classifies pathologic findings from tomographic images included in the report. After assessment of Value Reader accuracy by human experts, all report images were converted and stored in a database. We applied the Value Reader, which achieved 99.67% accuracy, to a total of 433,175 OCT report images acquired in a single tertiary hospital from 07/04/2006 to 08/31/2019. The Finding Classifier provided pathologic findings (e.g., macular edema and subretinal fluid) and disease activity. Patient longitudinal data could be easily reviewed to document changes in measurements over time. The final results were loaded into a common data model (CDM), and the cropped tomographic images were loaded into the Picture Archiving and Communication System (PACS). CONCLUSIONS The newly developed software extracts valuable information from OCT report images and may be extended to other types of report image files produced by medical devices. Furthermore, powerful databases such as the CDM may be implemented or augmented by adding the information captured through our program.
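The abstract describes the software's three components but not their implementation. The schematic sketch below only mirrors that described structure under stated assumptions: the region coordinates come from a hypothetical configuration, OCR is delegated to pytesseract as a stand-in for the authors' Value Reader, and the Finding Classifier is left as a stub for a trained model.

```python
from dataclasses import dataclass

import pytesseract
from PIL import Image


@dataclass
class Region:
    """One designated area from the (hypothetical) configuration file."""
    name: str
    box: tuple   # (left, upper, right, lower) pixel coordinates
    kind: str    # "numeric" or "tomogram"


# Illustrative configuration; real coordinates depend on the device's layout.
REGIONS = [
    Region("central_subfield_thickness_um", (40, 600, 220, 640), "numeric"),
    Region("macular_bscan", (250, 80, 900, 500), "tomogram"),
]


def value_reader(report: Image.Image, region: Region) -> float:
    """Convert a numeric report region to a measurement value via OCR."""
    crop = report.crop(region.box)
    text = pytesseract.image_to_string(
        crop, config="--psm 7 -c tessedit_char_whitelist=0123456789."
    )
    return float(text.strip())


def finding_classifier(crop: Image.Image) -> str:
    """Stub for the trained model that labels pathologic findings
    (e.g., macular edema, subretinal fluid) in a cropped B-scan."""
    raise NotImplementedError("plug in a trained classifier here")


def process_report(path: str) -> dict:
    """Route each configured region to the appropriate reader,
    mimicking the Area Explorer's dispatch role."""
    report = Image.open(path)
    results = {}
    for region in REGIONS:
        if region.kind == "numeric":
            results[region.name] = value_reader(report, region)
        else:
            results[region.name] = finding_classifier(report.crop(region.box))
    return results
```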
Affiliation(s)
- Yongseok Mun: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
- Jooyoung Kim: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
- Kyoung Jin Noh: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
- Soochahn Lee: School of Electrical Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul, Republic of Korea
- Seok Kim: Healthcare ICT Research Center, Office of eHealth Research and Businesses, Seoul National University Bundang Hospital, 172, Dolma-ro, Bundang-gu, Seongnam-si, Gyunggi-do, 13605, Republic of Korea
- Soyoung Yi: Healthcare ICT Research Center, Office of eHealth Research and Businesses, Seoul National University Bundang Hospital, 172, Dolma-ro, Bundang-gu, Seongnam-si, Gyunggi-do, 13605, Republic of Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
- Sooyoung Yoo: Healthcare ICT Research Center, Office of eHealth Research and Businesses, Seoul National University Bundang Hospital, 172, Dolma-ro, Bundang-gu, Seongnam-si, Gyunggi-do, 13605, Republic of Korea
- Dong Jin Chang: Department of Ophthalmology, College of Medicine, The Catholic University of Korea, Yeouido St. Mary's Hospital, 10, 63-ro, Yeongdeungpo-gu, Seoul, 07345, Republic of Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea