1. Thangamani M, Koti MS, Nagashree BA, Geetha V, Shreyas KP, Mathivanan SK, Dalu GT. Lung cancer diagnosis based on weighted convolutional neural network using gene data expression. Sci Rep 2024; 14:3656. [PMID: 38351141] [PMCID: PMC10864291] [DOI: 10.1038/s41598-024-54124-7]
Abstract
Lung cancer is thought to be a genetic disease with a variety of unknown origins. According to the GLOBOCAN 2020 report, 19.3 million new cancer cases were identified in 2020 and nearly 10.0 million people died of cancer. GLOBOCAN projects that cancer cases will rise to 28.4 million by 2040, a rate exceeding the combined rates of the most prevalent malignancies, such as breast, colorectal, and prostate cancers. For attribute selection in previous work, the information gain model was applied; multilayer perceptron, random subspace, and sequential minimal optimization (SMO) were then used for lung cancer prediction. However, the total number of parameters in a multilayer perceptron can become extremely large, which is inefficient because of the duplication in such high dimensions, and SMO can become ineffective because of its calculation method and its reliance on a single threshold value for prediction. To avoid these difficulties, this research presents a novel technique combining Z-score normalization, Lévy flight cuckoo search optimization, and a weighted convolutional neural network for predicting lung cancer. The findings show that the proposed technique is effective in precision, recall, and accuracy on the Kent Ridge Bio-Medical Dataset Repository.
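As background for the preprocessing step this abstract names, here is a minimal, illustrative sketch of Z-score normalization applied to a gene-expression matrix (the function and variable names are assumptions for illustration, not the paper's code):

```python
import numpy as np

def z_score_normalize(X, eps=1e-8):
    """Scale each feature (column) to zero mean and unit variance."""
    mean = X.mean(axis=0)            # per-gene mean across samples
    std = X.std(axis=0)              # per-gene standard deviation
    return (X - mean) / (std + eps)  # eps guards against constant columns

# Toy expression matrix: rows are samples, columns are genes on very
# different scales, which Z-scoring puts on a comparable footing.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])
Z = z_score_normalize(X)
```

After this step every gene contributes on the same scale, which is what makes the downstream feature selection and CNN training behave well.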
Affiliation(s)
- Thangamani M
- Department of Computer Science and Engineering, Hindusthan Institute of Technology, Valley Campus, Pollachi Highway, Othakkalmandapam (Post), Coimbatore, Tamil Nadu, 641032, India
- Manjula Sanjay Koti
- Department of Master of Computer Applications, Dayananda Sagar Academy of Technology and Management, Bangalore, Karnataka, 560082, India
- Nagashree B A
- Department of Computer Science, School of Computing, Amrita Vishwa Vidyapeetham, Mysuru, 570026, India
- Geetha V
- Department of Computer Science, School of Applied Sciences, REVA University, Bangalore, 560064, India
- Shreyas K P
- Department of Computer Science and Applications, School of Computer Science and Applications, REVA University, Bangalore, 560064, India
- Gemmachis Teshite Dalu
- Department of Software Engineering, College of Computing and Informatics, Haramaya University, POB 138, Dire Dawa, Ethiopia
2. Shoukat A, Akbar S, Hassan SA, Iqbal S, Mehmood A, Ilyas QM. Automatic Diagnosis of Glaucoma from Retinal Images Using Deep Learning Approach. Diagnostics (Basel) 2023; 13:1738. [PMID: 37238222] [DOI: 10.3390/diagnostics13101738]
Abstract
Glaucoma is characterized by increased intraocular pressure and damage to the optic nerve, which may result in irreversible blindness. The drastic effects of this disease can be avoided if it is detected at an early stage. However, the condition is frequently detected at an advanced stage in the elderly population. Therefore, early-stage detection may save patients from irreversible vision loss. The manual assessment of glaucoma by ophthalmologists includes various skill-oriented, costly, and time-consuming methods. Several techniques are in experimental stages to detect early-stage glaucoma, but a definite diagnostic technique remains elusive. We present an automatic method based on deep learning that can detect early-stage glaucoma with very high accuracy. The detection technique involves the identification of patterns from the retinal images that are often overlooked by clinicians. The proposed approach uses the gray channels of fundus images and applies the data augmentation technique to create a large dataset of versatile fundus images to train the convolutional neural network model. Using the ResNet-50 architecture, the proposed approach achieved excellent results for detecting glaucoma on the G1020, RIM-ONE, ORIGA, and DRISHTI-GS datasets. We obtained a detection accuracy of 98.48%, a sensitivity of 99.30%, a specificity of 96.52%, an AUC of 97%, and an F1-score of 98% by using the proposed model on the G1020 dataset. The proposed model may help clinicians to diagnose early-stage glaucoma with very high accuracy for timely interventions.
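The abstract above mentions using the gray channels of fundus images and data augmentation to enlarge the training set; the following sketch (illustrative only, not the authors' pipeline) shows both ideas with a luminance conversion and simple flips:

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted conversion of an H x W x 3 image to grayscale."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def augment(img):
    """Enlarge a training set with horizontally and vertically flipped copies."""
    return [img, np.fliplr(img), np.flipud(img)]

# Toy stand-in for a fundus photograph.
img = np.random.rand(4, 4, 3)
gray = to_gray(img)
variants = augment(gray)
```

Real pipelines typically add rotations, crops, and brightness jitter as well; flips are shown here only to make the augmentation idea concrete.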
Affiliation(s)
- Ayesha Shoukat
- Department of Computer Science, Riphah International University, Faisalabad Campus, Faisalabad 44000, Pakistan
- Shahzad Akbar
- Department of Computer Science, Riphah International University, Faisalabad Campus, Faisalabad 44000, Pakistan
- Syed Ale Hassan
- Department of Computer Science, Riphah International University, Faisalabad Campus, Faisalabad 44000, Pakistan
- Sajid Iqbal
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Abid Mehmood
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Qazi Mudassar Ilyas
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
3. Precision Medicine in Glaucoma: Artificial Intelligence, Biomarkers, Genetics and Redox State. Int J Mol Sci 2023; 24:2814. [PMID: 36769127] [PMCID: PMC9917798] [DOI: 10.3390/ijms24032814]
Abstract
Glaucoma is a multifactorial neurodegenerative illness requiring early diagnosis and strict monitoring of the disease progression. Current exams for diagnosis and prognosis are based on clinical examination, intraocular pressure (IOP) measurements, visual field tests, and optical coherence tomography (OCT). In this scenario, there is a critical unmet demand for glaucoma-related biomarkers to enhance clinical testing for early diagnosis and tracking of the disease's development. The introduction of validated biomarkers would allow for prompt intervention in the clinic to help with prognosis prediction and treatment response monitoring. This review aims to report the latest acquisitions on biomarkers in glaucoma, from imaging analysis to genetics and metabolic markers.
4. Sengar N, Joshi RC, Dutta MK, Burget R. EyeDeep-Net: a multi-class diagnosis of retinal diseases using deep neural network. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08249-x]
5. Hung KH, Kao YC, Tang YH, Chen YT, Wang CH, Wang YC, Lee OKS. Application of a deep learning system in glaucoma screening and further classification with colour fundus photographs: a case control study. BMC Ophthalmol 2022; 22:483. [PMID: 36510171] [PMCID: PMC9743575] [DOI: 10.1186/s12886-022-02730-2]
Abstract
BACKGROUND To verify the efficacy of automatic screening and classification of glaucoma with a deep learning system (DLS). METHODS A cross-sectional, retrospective study in a tertiary referral hospital. Patients with a healthy optic disc, high-tension glaucoma, or normal-tension glaucoma were enrolled; complicated non-glaucomatous optic neuropathy was excluded. Colour and red-free fundus images were collected for development of the DLS and comparison of their efficacy. A convolutional neural network with the pre-trained EfficientNet-b0 model was selected for machine learning. Glaucoma screening (binary) and ternary classification, with or without additional demographics (age, gender, high myopia), were evaluated, followed by creation of confusion matrices and heatmaps. Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score were the main outcome measures. RESULTS Two hundred and twenty-two cases (421 eyes) were enrolled, with 1851 images in total (1207 normal and 644 glaucomatous discs). The train and test sets comprised 1539 and 312 images, respectively. Without demographics, the AUC, accuracy, precision, sensitivity, F1 score, and specificity of the deep learning system in eye-based glaucoma screening were 0.98, 0.91, 0.86, 0.86, 0.86, and 0.94 in the test set. The same outcome measures in eye-based ternary classification without demographic data were 0.94, 0.87, 0.87, 0.87, 0.87, and 0.94, respectively. Adding demographics had no significant impact on efficacy, but establishing a linkage between eyes and images helped achieve better performance. Confusion matrices and heatmaps suggested that retinal lesions and the quality of photographs could affect classification. Colour fundus images played a major role in glaucoma classification compared with red-free fundus images.
CONCLUSIONS Promising results with high AUC and specificity were shown in distinguishing normal optic nerves from glaucomatous fundus images and in further classification.
Affiliation(s)
- Kuo-Hsuan Hung
- Department of Ophthalmology, Chang-Gung Memorial Hospital, Linkou, No.5, Fu-Hsing St., Kuei Shan Hsiang, Tao Yuan Hsien, Taiwan; Chang-Gung University College of Medicine, No.259 Wen-Hwa 1st Road, Kuei Shan Hsiang, Tao Yuan Hsien, Taiwan; Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan
- Yu-Ching Kao
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Yu-Hsuan Tang
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan
- Yi-Ting Chen
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Chuen-Heng Wang
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Yu-Chen Wang
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Oscar Kuang-Sheng Lee
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Stem Cell Research Centre, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Orthopedics, China Medical University Hospital, Taichung, Taiwan
6. Primary Open-Angle Glaucoma Diagnosis From Optic Disc Photographs Using a Siamese Network. Ophthalmol Sci 2022; 2:100209. [PMID: 36531584] [PMCID: PMC9754976] [DOI: 10.1016/j.xops.2022.100209]
Abstract
Purpose Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. Although deep learning methods have been proposed to diagnose POAG, these methods all used a single image as input. In contrast, glaucoma specialists typically compare the follow-up image with the baseline image to diagnose incident glaucoma. To simulate this process, we proposed a Siamese neural network, POAGNet, to detect POAG from optic disc photographs. Design The POAGNet, an algorithm for glaucoma diagnosis, is developed using optic disc photographs. Participants The POAGNet was trained and evaluated on 2 data sets: (1) 37,339 optic disc photographs from 1636 Ocular Hypertension Treatment Study (OHTS) participants and (2) 3684 optic disc photographs from the Sequential fundus Images for Glaucoma (SIG) data set. Gold standard labels were obtained using reading center grades. Methods We proposed a Siamese network model, POAGNet, to simulate the clinical process of identifying POAG from optic disc photographs. The POAGNet consists of 2 side outputs for deep supervision and uses convolution to measure the similarity between 2 networks. Main Outcome Measures The main outcome measures are the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. Results In POAG diagnosis, extensive experiments show that POAGNet performed better than the best state-of-the-art model on the OHTS test set (area under the curve [AUC] 0.9587 versus 0.8750). It also outperformed the baseline models on the SIG test set (AUC 0.7518 versus 0.6434). To assess the transferability of POAGNet, we also validated the impact of cross-data set variability on our model. The model trained on OHTS achieved an AUC of 0.7490 on SIG, comparable to the previous model trained on the same data set.
When using the combination of SIG and OHTS for training, our model achieved a superior AUC to the single-data model (AUC 0.8165 versus 0.7518). These results demonstrate the relative generalizability of POAGNet. Conclusions By simulating the clinical grading process, POAGNet demonstrated high accuracy in POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. The POAGNet is publicly available at https://github.com/bionlplab/poagnet.
7. Cao J, You K, Zhou J, Xu M, Xu P, Wen L, Wang S, Jin K, Lou L, Wang Y, Ye J. A cascade eye diseases screening system with interpretability and expandability in ultra-wide field fundus images: A multicentre diagnostic accuracy study. EClinicalMedicine 2022; 53:101633. [PMID: 36110868] [PMCID: PMC9468501] [DOI: 10.1016/j.eclinm.2022.101633]
Abstract
BACKGROUND Clinical application of artificial intelligence is limited due to the lack of interpretability and expandability in complex clinical settings. We aimed to develop an eye diseases screening system with improved interpretability and expandability based on a lesion-level dissection, and tested the clinical expandability and auxiliary ability of the system. METHODS The four-hierarchical interpretable eye diseases screening system (IEDSS), based on a novel structural pattern named lesion atlas, was developed to identify 30 eye diseases and conditions using a total of 32,026 ultra-wide field images collected from the Second Affiliated Hospital of Zhejiang University, School of Medicine (SAHZU), the First Affiliated Hospital of University of Science and Technology of China (FAHUSTC), and the Affiliated People's Hospital of Ningbo University (APHNU) in China between November 1, 2016 and February 28, 2022. The performance of IEDSS was compared with that of ophthalmologists and classic models trained with image-level labels. We further evaluated IEDSS on two external datasets, and tested it in a real-world scenario and on an extended dataset with new phenotypes beyond the training categories. Accuracy (ACC), F1 score, and confusion matrices were calculated to assess the performance of IEDSS. FINDINGS IEDSS reached average ACCs (aACC) of 0·9781 (95%CI 0·9739-0·9824), 0·9660 (95%CI 0·9591-0·9730), and 0·9709 (95%CI 0·9655-0·9763), and frequency-weighted average F1 scores of 0·9042 (95%CI 0·8957-0·9127), 0·8837 (95%CI 0·8714-0·8960), and 0·8874 (95%CI 0·8772-0·8972) in the SAHZU, APHNU, and FAHUSTC datasets, respectively. IEDSS reached a higher aACC (0·9781, 95%CI 0·9739-0·9824) compared with a multi-class image-level model (0·9398, 95%CI 0·9329-0·9467), a classic multi-label image-level model (0·9278, 95%CI 0·9189-0·9366), a novel multi-label image-level model (0·9241, 95%CI 0·9151-0·9331), and a lesion-level model without Adaboost (0·9381, 95%CI 0·9299-0·9463).
In the real-world scenario, the aACC of IEDSS (0·9872, 95%CI 0·9828-0·9915) was higher than that of the senior ophthalmologist (SO) (0·9413, 95%CI 0·9321-0·9504, p = 0·000) and the junior ophthalmologist (JO) (0·8846, 95%CI 0·8722-0·8971, p = 0·000). IEDSS maintained strong performance (ACC = 0·8560, 95%CI 0·8252-0·8868) compared with the JO (ACC = 0·7840, 95%CI 0·7479-0·8201, p = 0·003) and the SO (ACC = 0·8500, 95%CI 0·8187-0·8813, p = 0·789) on the extended dataset. INTERPRETATION IEDSS showed excellent and stable performance in identifying common eye conditions and conditions beyond the training categories. The transparency and expandability of IEDSS could greatly increase its range of clinical application and practical clinical value, enhancing the efficiency and reliability of clinical practice, especially in remote areas that lack experienced specialists. FUNDING National Natural Science Foundation Regional Innovation and Development Joint Fund (U20A20386), Key Research and Development Program of Zhejiang Province (2019C03020), Clinical Medical Research Centre for Eye Diseases of Zhejiang Province (2021E50007).
Affiliation(s)
- Jing Cao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Kun You
- Zhejiang Feitu Medical Imaging Co., Ltd, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Mingyu Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Peifang Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Lei Wen
- The First Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
- Shengzhan Wang
- The Affiliated People's Hospital of Ningbo University, Ningbo, Zhejiang, China
- Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Lixia Lou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Yao Wang
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Corresponding author at: No. 1 West Lake Avenue, Hangzhou, Zhejiang Province, China, 310009.
8. Morya AK, Janti SS, Sisodiya P, Tejaswini A, Prasad R, Mali KR, Gurnani B. Everything real about unreal artificial intelligence in diabetic retinopathy and in ocular pathologies. World J Diabetes 2022; 13:822-834. [PMID: 36311999] [PMCID: PMC9606792] [DOI: 10.4239/wjd.v13.i10.822]
Abstract
Artificial intelligence (AI) is a multidisciplinary field that aims to build platforms enabling machines to act, perceive, and reason intelligently, automating activities that presently require human intelligence. From the cornea to the retina, AI is expected to help ophthalmologists diagnose and treat ocular diseases. In ophthalmology, computerized analytics are viewed as efficient and more objective ways to interpret series of images and reach a conclusion. AI can be used to diagnose and grade diabetic retinopathy, glaucoma, age-related macular degeneration, cataracts, retinopathy of prematurity, and keratoconus, and to assist intraocular lens (IOL) power calculation. This review discusses various aspects of artificial intelligence in ophthalmology.
Affiliation(s)
- Arvind Kumar Morya
- Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Siddharam S Janti
- Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Priya Sisodiya
- Department of Ophthalmology, Sadguru Netra Chikitsalaya, Chitrakoot 485001, Madhya Pradesh, India
- Antervedi Tejaswini
- Department of Ophthalmology, All India Institute of Medical Sciences Bibinagar, Hyderabad 508126, Telangana, India
- Rajendra Prasad
- Department of Ophthalmology, R P Eye Institute, New Delhi 110001, New Delhi, India
- Kalpana R Mali
- Department of Pharmacology, All India Institute of Medical Sciences, Bibinagar, Hyderabad 508126, Telangana, India
- Bharat Gurnani
- Department of Ophthalmology, Aravind Eye Hospital and Post Graduate Institute of Ophthalmology, Pondicherry 605007, Pondicherry, India
9. Sophia SSSJ, Diwakaran S. Hybrid muddy electric fish and grasshopper optimization algorithm (MEF-GOA) based CNN for detection and severity differentiation of glaucoma in retinal fundus image. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-221262]
Abstract
Glaucoma is a cause of irreversible blindness that affects people over the age of 40 years. Many approaches have been proposed to detect glaucoma in images by dealing with their complex data. Redundancy is a major problem in medical images that can lead to increased false positive and false negative rates. This paper proposes a three-structure CNN optimized with a hybrid optimization approach for glaucoma detection and severity differentiation. The CNN structure is designed with three sub-groups that perform attention prediction, segmentation, and classification. The mathematical loss function is derived for the CNN structure with three hyper-parameters, which are optimized with the hybrid approach. The hybrid optimization approach consists of muddy electric fish optimization and the grasshopper optimization algorithm for the exploration and exploitation processes. The proposed method is implemented in MATLAB and validated on the LAG and RIM-ONE databases. The proposed method achieved accuracy greater than 95%, and other metrics such as the F2 score and AUC reached 98%.
Affiliation(s)
- S. Diwakaran
- Kalasalingam Academy of Research and Education, India
10. Lin M, Hou B, Liu L, Gordon M, Kass M, Wang F, Van Tassel SH, Peng Y. Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning. Sci Rep 2022; 12:14080. [PMID: 35982106] [PMCID: PMC9388536] [DOI: 10.1038/s41598-022-17753-4]
Abstract
Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm to automatically facilitate the downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG using variable fundus photographs from different populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: learning the discriminative features and fusing the features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved the highest AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for increased image data diversity and the heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available at https://github.com/bionlplab/GlaucomaNet.
Affiliation(s)
- Mingquan Lin
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Bojian Hou
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Lei Liu
- Institute for Public Health, Washington University School of Medicine, St. Louis, MO, USA
- Mae Gordon
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Michael Kass
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
11. Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel in the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
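To make the channel question above concrete, here is a minimal sketch of splitting a color fundus photograph into its three channels (array names are assumptions for illustration, not the paper's code):

```python
import numpy as np

def split_channels(rgb):
    """Return the red, green, and blue channels of an H x W x 3 image."""
    return rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Toy image that is pure green: the green channel, often preferred in
# non-neural-network pipelines for its vessel contrast, carries all the signal.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 1] = 255
r, g, b = split_channels(img)
```

A single-channel pipeline would then feed only `g` (or `r`, or `b`) to the segmentation model, which is exactly the comparison the surveyed experiments perform.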
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
12. Classification of Glaucoma Based on Elephant-Herding Optimization Algorithm and Deep Belief Network. Electronics 2022; 11:1763. [DOI: 10.3390/electronics11111763]
Abstract
This study proposes a novel glaucoma identification system from fundus images using a deep belief network (DBN) optimized by the elephant-herding optimization (EHO) algorithm. Initially, the input image undergoes the preprocessing steps of noise removal and enhancement, followed by optic disc (OD) and optic cup (OC) segmentation and extraction of structural, intensity, and textural features. The most discriminative features are then selected using the ReliefF algorithm and passed to the DBN for classification as glaucomatous or normal. To enhance the classification rate of the DBN, its parameters are fine-tuned by the EHO algorithm. The model was tested on public and private datasets with 7280 images, attaining a maximum classification rate of 99.4%, 100% specificity, and 99.89% sensitivity. Ten-fold cross-validation reduced misclassification and attained 98.5% accuracy. Investigations proved the efficacy of the proposed method in avoiding bias and dataset variability and in reducing false positives compared with similar works on glaucoma classification. The proposed system can be tested on diverse datasets, aiding improved glaucoma diagnosis.
13. Wang W, Zhou W, Ji J, Yang J, Guo W, Gong Z, Yi Y, Wang J. Deep sparse autoencoder integrated with three-stage framework for glaucoma diagnosis. Int J Intell Syst 2022. [DOI: 10.1002/int.22911]
Affiliation(s)
- Wenle Wang
- School of Software, Jiangxi Normal University, Nanchang, China
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jianhang Ji
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jikun Yang
- Shenyang Aier Excellence Eye Hospital Co. Ltd., Shenyang, China
- Wei Guo
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Zhaoxuan Gong
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Yugen Yi
- School of Software, Jiangxi Normal University, Nanchang, China
- Jianzhong Wang
- College of Information Science and Technology, Northeast Normal University, Changchun, China
14. Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113] [DOI: 10.1016/j.ajo.2021.12.008]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data comprised 197,174 fundus images and 16,039 OCT images. Overall, ML showed excellent performance for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; AUC = 0.96 [95% CI, 0.93-0.97]). For fundus, ML performed similarly on all data and on external data, whereas the external test result for OCT was less robust (AUC = 0.87). Comparing classifier categories, support vector machines showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), but results for neural networks and other classifiers were still good (ranges, 0.88-0.93, 0.90-0.93, and 0.95-0.97, respectively). Analyzed by dataset type, ML demonstrated consistent performance on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% CI, 0.93-0.97]). CONCLUSIONS The performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
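The bivariate random-effects model used in this meta-analysis is usually fit with dedicated statistical software. As a rough illustration of the pooling idea only, a simpler univariate DerSimonian-Laird pool of per-study proportions (e.g. sensitivities) on the logit scale might look like the following; the function name and inputs are assumptions, not taken from the paper.

```python
import numpy as np

def pool_logit_dl(p, n):
    """DerSimonian-Laird random-effects pooling of per-study proportions
    (e.g. sensitivities) on the logit scale; returns the pooled proportion."""
    p = np.asarray(p, float)
    n = np.asarray(n, float)
    k = p * n                                      # approximate event counts
    theta = np.log((k + 0.5) / (n - k + 0.5))      # continuity-safe logits
    var = 1.0 / (k + 0.5) + 1.0 / (n - k + 0.5)    # approximate logit variance
    w = 1.0 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fixed) ** 2)     # Cochran's Q
    df = len(p) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                      # random-effects weights
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    return 1.0 / (1.0 + np.exp(-pooled))           # back-transform
```

A proper bivariate model pools sensitivity and specificity jointly, accounting for their correlation across studies; this univariate sketch only shows where between-study heterogeneity (tau-squared) enters the weights.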
15
Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic Accuracy of Artificial Intelligence in Glaucoma Screening and Clinical Practice. J Glaucoma 2022; 31:285-299. [PMID: 35302538 DOI: 10.1097/ijg.0000000000002015] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 02/26/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Artificial intelligence (AI) has shown promise as a diagnostic tool for glaucoma detection through imaging modalities, yet these tools have not been deployed in clinical practice. This meta-analysis determined overall AI performance for glaucoma diagnosis and identified potential factors affecting implementation. METHODS We searched databases (Embase, Medline, Web of Science, and Scopus) for studies that developed or investigated the use of AI for glaucoma detection using fundus and optical coherence tomography (OCT) images. A bivariate random-effects model was used to determine summary estimates for diagnostic outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy (PRISMA-DTA) extension was followed, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used for bias and applicability assessment. RESULTS Seventy-nine articles met the inclusion criteria, with a subset of 66 containing adequate data for quantitative analysis. The pooled area under the receiver operating characteristic curve across all studies for glaucoma detection was 96.3%, with a sensitivity of 92.0% (95% confidence interval: 89.0-94.0) and specificity of 94.0% (95% confidence interval: 92.0-95.0). The pooled area under the receiver operating characteristic curve on fundus and OCT images was 96.2% and 96.0%, respectively. Mixed datasets and external data validation produced unsatisfactory diagnostic outcomes. CONCLUSION Although AI has the potential to revolutionize glaucoma care, this meta-analysis highlights a number of issues that must be addressed before such algorithms can be implemented in clinical care. With substantial heterogeneity across studies, many factors were found to affect diagnostic performance. We recommend implementing a standard diagnostic protocol for grading, external data validation, and analysis across different ethnic groups.
Affiliation(s)
- Abadh K Chaurasia
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Connor J Greatbatch
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Alex W Hewitt
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
16
Latif J, Tu S, Xiao C, Ur Rehman S, Imran A, Latif Y. ODGNet: a deep learning model for automated optic disc localization and glaucoma classification using fundus images. SN APPLIED SCIENCES 2022. [DOI: 10.1007/s42452-022-04984-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
Glaucoma is one of the prevalent causes of blindness in the modern world: a salient chronic eye disease that leads to irreversible vision loss. The damage caused by glaucoma can be limited if it is identified at an early stage. In this paper, a novel two-phase Optic Disc localization and Glaucoma Diagnosis Network (ODGNet) is proposed. In the first phase, a visual saliency map incorporated with a shallow CNN is used for effective OD localization from fundus images. In the second phase, transfer learning-based pre-trained models are used for glaucoma diagnosis. Transfer learning-based models such as AlexNet, ResNet, and VGGNet incorporated with saliency maps are evaluated on five public retinal datasets (ORIGA, HRF, DRIONS-DB, DR-HAGIS, and RIM-ONE) to differentiate between normal and glaucomatous images. The experimental results demonstrate that the proposed ODGNet evaluated on ORIGA is the most predictive model, achieving 95.75%, 94.90%, 94.75%, and 97.85% accuracy, specificity, sensitivity, and area under the curve, respectively. These results indicate that the proposed OD localization method based on the saliency map and shallow CNN is robust, accurate, and computationally efficient.
17
Singh LK, Khanna M, Pooja. A novel multimodality based dual fusion integrated approach for efficient and early prediction of glaucoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103468] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
18
Al Rahhal MM, Bazi Y, Jomaa RM, AlShibli A, Alajlan N, Mekhalfi ML, Melgani F. COVID-19 Detection in CT/X-ray Imagery Using Vision Transformers. J Pers Med 2022; 12:jpm12020310. [PMID: 35207797 PMCID: PMC8876295 DOI: 10.3390/jpm12020310] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 02/14/2022] [Accepted: 02/15/2022] [Indexed: 12/02/2022] Open
Abstract
The steady spread of the 2019 coronavirus disease has brought about human and economic losses, imposing a new lifestyle across the world. In this regard, medical imaging tests such as computed tomography (CT) and X-ray have demonstrated sound screening potential. Deep learning methodologies have demonstrated superior image-analysis capabilities with respect to prior handcrafted counterparts. In this paper, we propose a novel deep learning framework for coronavirus detection using CT and X-ray images. In particular, a Vision Transformer architecture is adopted as the backbone of the proposed network, in which a Siamese encoder is utilized. The encoder is composed of two branches: one for processing the original image and another for processing an augmented view of it. The input images are divided into patches and fed through the encoder. The proposed framework is evaluated on public CT and X-ray datasets and confirms its superiority over state-of-the-art methods in terms of accuracy, precision, recall, specificity, and F1 score. Furthermore, the proposed system exhibits good robustness when only a small portion of the training data is used.
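The patch-splitting step that feeds a Vision Transformer encoder can be sketched as follows. This is a generic illustration (assumed patch size 16 and a toy random projection), not the paper's Siamese architecture.

```python
import numpy as np

def patchify(img, patch=16):
    """Split an H x W x C image into non-overlapping flattened patches,
    as done before the linear embedding in a Vision Transformer."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    img = img.reshape(h // patch, patch, w // patch, patch, c)
    # reorder to (patch_row, patch_col, within-patch pixels), then flatten
    return img.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

rng = np.random.default_rng(0)
x = rng.random((224, 224, 3))
tokens = patchify(x)                       # 14*14 = 196 tokens of 16*16*3 = 768 values
emb = tokens @ rng.random((768, 128))      # toy linear projection to model width
```

In the paper's setup, each branch of the Siamese encoder would receive such a token sequence (one from the original image, one from an augmented view) before the transformer layers.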
Affiliation(s)
- Mohamad Mahmoud Al Rahhal
- Applied Computer Science Department, College of Applied Computer Science, King Saud University, Riyadh 11543, Saudi Arabia
- Yakoub Bazi
- Computer Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Correspondence: ; Tel.: +966-101469629
- Rami M. Jomaa
- Computer Science Department, College of Computer and Cyber Sciences, University of Prince Mugrin, Medina 42241, Saudi Arabia
- Ahmad AlShibli
- Computer Science Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Naif Alajlan
- Computer Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Mohamed Lamine Mekhalfi
- Computer Science Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Farid Melgani
- Department of Information Engineering and Computer Science, University of Trento, 38123 Trento, Italy
19
Glaucoma Detection from Retinal Images Using Statistical and Textural Wavelet Features. J Digit Imaging 2021; 33:151-158. [PMID: 30756264 DOI: 10.1007/s10278-019-00189-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
Abstract
Glaucoma is a silent, progressive eye disease that is among the leading causes of irreversible blindness. Early detection and proper treatment of glaucoma can limit the severe vision impairments associated with advanced stages of the disease. Periodic automatic screening can help in the early detection of glaucoma while reducing the workload on expert ophthalmologists. In this work, a wavelet-based glaucoma detection algorithm is proposed for real-time screening systems. A combination of wavelet-based statistical and textural features computed from the detected optic disc region is used to determine whether a retinal image is healthy or glaucomatous. Two public datasets of different resolutions were considered in the performance analysis of the proposed algorithm. An accuracy of 96.7% and an area under the receiver operating characteristic curve (AUC) of 94.7% were achieved for the high-resolution dataset. Analysis of the wavelet-based statistical and textural features using three different methods showed their relevance for glaucoma detection. Furthermore, the proposed algorithm is suitable for real-time applications, as it requires less than 3 s to process high-resolution retinal images.
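A minimal sketch of wavelet-based statistical features, assuming a single-level 2-D Haar transform and simple per-sub-band statistics; the paper's exact wavelet family and feature set are not specified here, so this is an illustration of the general recipe only.

```python
import numpy as np

def haar_level1(img):
    """One level of the 2-D Haar wavelet transform, returning the
    approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(img):
    """Statistical descriptors (mean |coeff|, energy, std) per detail sub-band,
    yielding a 9-dimensional feature vector for a classifier."""
    feats = []
    for band in haar_level1(img)[1:]:          # LH, HL, HH only
        feats += [np.abs(band).mean(), (band ** 2).mean(), band.std()]
    return np.array(feats)
```

In the paper's pipeline, such features would be computed on the cropped optic disc region and fed to the classifier that labels the image healthy or glaucomatous.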
20
Noninvasive Machine Learning Screening Model for Dacryocystitis Based on Ocular Surface Indicators. J Craniofac Surg 2021; 33:e23-e28. [PMID: 34267140 DOI: 10.1097/scs.0000000000007863] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Dacryocystitis is an orbital disease that can easily be misdiagnosed. The most common diagnostic tools for dacryocystitis are computed tomography, lacrimal duct angiography, and lacrimal tract irrigation; these are, however, invasive methods that are not conducive to extensive screening. OBJECTIVE To explore the significance of ocular surface indicators and demographic data in the screening of dacryocystitis. MATERIALS AND METHODS Data were prospectively collected from 56 patients with dacryocystitis (56 eyes) and 56 healthy individuals. Collected indicators included demographic information (gender, age), ocular surface data (tear meniscus height and objective scatter index (OSI)), and clinical diagnosis. Model features were screened by machine learning to establish a dacryocystitis screening model. RESULTS Eight parameters (tear meniscus height, OSI maximum Lyapunov exponent, basic OSI, median of OSI, mean of OSI, slope coefficient of the OSI linear regression, coefficient of variation of OSI, and interquartile range of OSI) were used as model parameters to establish a dacryocystitis screening model with an overall detection accuracy of 85.71%. CONCLUSIONS This new screening model based on ocular surface indicators provides a new option for noninvasive screening of dacryocystitis.
21
Han X, Steven K, Qassim A, Marshall HN, Bean C, Tremeer M, An J, Siggs OM, Gharahkhani P, Craig JE, Hewitt AW, Trzaskowski M, MacGregor S. Automated AI labeling of optic nerve head enables insights into cross-ancestry glaucoma risk and genetic discovery in >280,000 images from UKB and CLSA. Am J Hum Genet 2021; 108:1204-1216. [PMID: 34077762 DOI: 10.1016/j.ajhg.2021.05.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Accepted: 05/10/2021] [Indexed: 02/06/2023] Open
Abstract
Cupping of the optic nerve head, a highly heritable trait, is a hallmark of glaucomatous optic neuropathy. Two key parameters are vertical cup-to-disc ratio (VCDR) and vertical disc diameter (VDD). However, manual assessment often suffers from poor accuracy and is time intensive. Here, we show convolutional neural network models can accurately estimate VCDR and VDD for 282,100 images from both UK Biobank and an independent study (Canadian Longitudinal Study on Aging), enabling cross-ancestry epidemiological studies and new genetic discovery for these optic nerve head parameters. Using the AI approach, we perform a systematic comparison of the distribution of VCDR and VDD and compare these with intraocular pressure and glaucoma diagnoses across various genetically determined ancestries, which provides an explanation for the high rates of normal tension glaucoma in East Asia. We then used the large number of AI gradings to conduct a more powerful genome-wide association study (GWAS) of optic nerve head parameters. Using the AI-based gradings increased estimates of heritability by ∼50% for VCDR and VDD. Our GWAS identified more than 200 loci associated with both VCDR and VDD (double the number of loci from previous studies) and uncovered dozens of biological pathways; many of the loci we discovered also confer risk for glaucoma.
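Once cup and disc regions are delineated, the VCDR parameter estimated by the networks above reduces to a ratio of vertical extents. A minimal sketch, assuming binary segmentation masks rather than the paper's direct CNN gradings:

```python
import numpy as np

def vertical_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio (VCDR): vertical extent of the cup
    divided by vertical extent of the disc, from binary masks."""
    def v_extent(mask):
        rows = np.where(mask.any(axis=1))[0]   # rows containing the structure
        return 0 if rows.size == 0 else rows[-1] - rows[0] + 1
    return v_extent(cup_mask) / v_extent(disc_mask)

disc = np.zeros((100, 100), bool); disc[20:80, 20:80] = True  # 60 px tall
cup = np.zeros((100, 100), bool); cup[35:65, 35:65] = True    # 30 px tall
# vertical_ratio(cup, disc) == 0.5
```

The vertical disc diameter (VDD) in the study is simply the disc's vertical extent converted to physical units; the same `v_extent` helper would supply it given a pixel-to-millimeter scale.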
22
Zheng B, Jiang Q, Lu B, He K, Wu MN, Hao XL, Zhou HX, Zhu SJ, Yang WH. Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images. Transl Vis Sci Technol 2021; 10:20. [PMID: 34132760 PMCID: PMC8212443 DOI: 10.1167/tvst.10.7.20] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Purpose The number of ophthalmologists in China is small relative to the number of patients. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. Methods A total of 2000 fundus images were collected; 3 different five-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The models' diagnostic results were compared with clinical diagnoses. The main evaluation indicators included sensitivity, specificity, F1-score, area under the curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain the model's area of focus in the fundus image. Results The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, diagnostic consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. Conclusions This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases that can be used to obtain the diagnostic category of a fundus image and the model's area of focus. Translational Relevance This study will help primary doctors provide effective services to ophthalmologic patients.
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Qin Jiang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Bing Lu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Kai He
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Xiu-Lan Hao
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Hong-Xia Zhou
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China; College of Computer and Information, Hehai University, Nanjing, Jiangsu, China
- Shao-Jun Zhu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
23
Liu H, Li L, Wormstone IM, Qiao C, Zhang C, Liu P, Li S, Wang H, Mou D, Pang R, Yang D, Zangwill LM, Moghimi S, Hou H, Bowd C, Jiang L, Chen Y, Hu M, Xu Y, Kang H, Ji X, Chang R, Tham C, Cheung C, Ting DSW, Wong TY, Wang Z, Weinreb RN, Xu M, Wang N. Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs. JAMA Ophthalmol 2021; 137:1353-1360. [PMID: 31513266 DOI: 10.1001/jamaophthalmol.2019.3501] [Citation(s) in RCA: 145] [Impact Index Per Article: 48.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Importance A deep learning system (DLS) that could automatically detect glaucomatous optic neuropathy (GON) with high sensitivity and specificity could expedite screening for GON. Objective To establish a DLS for detection of GON using retinal fundus images and a glaucoma diagnosis convolutional neural network (GD-CNN) with the ability to generalize across populations. Design, Setting, and Participants In this cross-sectional study, a DLS for automated classification of GON was developed using retinal fundus images obtained from the Chinese Glaucoma Study Alliance (CGSA), the Handan Eye Study, and online databases. A total of 241 032 images were selected as the training data set. The images were entered into the databases on June 9, 2009, obtained on July 11, 2018, and analyses were performed on December 15, 2018. The generalization of the DLS was tested in several validation data sets, which allowed assessment of the DLS in a clinical setting without exclusions, testing against variable image quality based on fundus photographs obtained from websites, evaluation in a population-based study that reflects a natural distribution of patients with glaucoma within the cohort, and an additive data set with a diverse ethnic distribution. An online learning system was established to transfer the trained and validated DLS to generalize the results with fundus images from new sources. To better understand the DLS decision-making process, a prediction visualization test was performed that identified the regions of the fundus images utilized by the DLS for diagnosis. Exposures Use of a deep learning system. Main Outcomes and Measures Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of the DLS with reference to professional graders. Results From a total of 274 413 fundus images initially obtained from the CGSA, 269 601 passed initial image-quality review and were graded for GON. A total of 241 032 images (definite GON, 29 865 [12.4%]; probable GON, 11 046 [4.6%]; unlikely GON, 200 121 [83%]) from 68 013 patients were selected using random sampling to train the GD-CNN model. Validation and evaluation of the GD-CNN model were assessed using the remaining 28 569 images from the CGSA. The AUC of the GD-CNN model in primary local validation data sets was 0.996 (95% CI, 0.995-0.998), with sensitivity of 96.2% and specificity of 97.7%. The most common reason for both false-negative and false-positive grading by the GD-CNN (51 of 119 [46.3%] and 191 of 588 [32.3%]) and by manual grading (50 of 113 [44.2%] and 183 of 538 [34.0%]) was pathologic or high myopia. Conclusions and Relevance Application of the GD-CNN to fundus images from different settings and of varying image quality demonstrated high sensitivity, specificity, and generalizability for detecting GON. These findings suggest that an automated DLS could enhance current screening programs in a cost-effective and time-efficient manner.
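The sensitivity, specificity, and AUC figures reported throughout these studies can be reproduced from predictions with a few lines. This is a generic sketch, not the study's evaluation code; AUC is computed here via the Mann-Whitney statistic, which equals the area under the empirical ROC curve.

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary labels and predictions."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, y_true):
    """AUC as the probability that a random positive scores higher than a
    random negative (ties count half) over all positive-negative pairs."""
    scores = np.asarray(scores, float)
    y = np.asarray(y_true, bool)
    pos, neg = scores[y], scores[~y]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

For large test sets like the 28 569-image validation split described here, a rank-based formulation would be preferred over the quadratic pairwise comparison, but the result is identical.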
Affiliation(s)
- Hanruo Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Liu Li
- School of Electronic and Information Engineering, Beihang University, Beijing, China
- I Michael Wormstone
- School of Biological Sciences, University of East Anglia, Norwich, United Kingdom
- Chunyan Qiao
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Chun Zhang
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Ping Liu
- Ophthalmology Hospital, First Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Shuning Li
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Huaizhou Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Dapeng Mou
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Ruiqi Pang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Diya Yang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Linda M Zangwill
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Sasan Moghimi
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Huiyuan Hou
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Christopher Bowd
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Lai Jiang
- School of Electronic and Information Engineering, Beihang University, Beijing, China
- Yihan Chen
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Man Hu
- Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, Beijing, China
- Yongli Xu
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
- Hong Kang
- College of Computer Science, Nankai University, Tianjin, China
- Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd, Beijing, China
- Robert Chang
- Department of Ophthalmology, Byers Eye Institute at Stanford University, Palo Alto, California
- Clement Tham
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Kowloon, Hong Kong, China
- Carol Cheung
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Kowloon, Hong Kong, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Zulin Wang
- School of Electronic and Information Engineering, Beihang University, Beijing, China
- Robert N Weinreb
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Mai Xu
- School of Electronic and Information Engineering, Beihang University, Beijing, China
- Ningli Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
24
He X, Deng Y, Fang L, Peng Q. Multi-Modal Retinal Image Classification With Modality-Specific Attention Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1591-1602. [PMID: 33625978 DOI: 10.1109/tmi.2021.3059956] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Recently, automatic diagnostic approaches have been widely used to classify ocular diseases. Most of these approaches are based on a single imaging modality (e.g., fundus photography or optical coherence tomography (OCT)), which usually reflects the oculopathy only to a certain extent and neglects the modality-specific information among different imaging modalities. This paper proposes a novel modality-specific attention network (MSAN) for multi-modal retinal image classification, which can effectively utilize the modality-specific diagnostic features from fundus and OCT images. The MSAN comprises two attention modules to extract the modality-specific features from fundus and OCT images, respectively. For the fundus image, ophthalmologists need to observe local and global pathologies at multiple scales (e.g., from microaneurysms at the micrometer level and the optic disc at the millimeter level to blood vessels spanning the whole eye). Therefore, we propose a multi-scale attention module to extract both local and global features from fundus images. Moreover, large background regions exist in the OCT image that are meaningless for diagnosis. Thus, a region-guided attention module is proposed to encode the retinal layer-related features and ignore the background in OCT images. Finally, we fuse the modality-specific features to form a multi-modal feature and train the multi-modal retinal image classification network. The fusion of modality-specific features allows the model to combine the advantages of the fundus and OCT modalities for a more accurate diagnosis. Experimental results on a clinically acquired multi-modal retinal image (fundus and OCT) dataset demonstrate that our MSAN outperforms other well-known single-modal and multi-modal retinal image classification methods.
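The modality-fusion idea, stripped of the attention modules, can be illustrated with a toy gated fusion of fundus and OCT feature vectors. Note that `gated_fusion` and its gate weights are hypothetical stand-ins for illustration, not the MSAN architecture.

```python
import numpy as np

def gated_fusion(f_fundus, f_oct, w_gate):
    """Toy modality fusion: a sigmoid gate computed from both modality
    features decides, per dimension, how much each modality contributes
    to the joint representation seen by the classifier."""
    joint = np.concatenate([f_fundus, f_oct])
    gate = 1.0 / (1.0 + np.exp(-(joint @ w_gate)))   # values in (0, 1)
    return gate * f_fundus + (1.0 - gate) * f_oct

rng = np.random.default_rng(0)
d = 8
fused = gated_fusion(rng.random(d), rng.random(d), rng.normal(size=(2 * d, d)))
```

Because the gate is bounded, each fused dimension is a convex combination of the two modality features, so neither modality can be ignored entirely; the learned attention modules in the paper serve a similar balancing role at a finer granularity.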
25
Feasibility of atrial fibrillation detection from a novel wearable armband device. CARDIOVASCULAR DIGITAL HEALTH JOURNAL 2021; 2:179-191. [PMID: 35265907 PMCID: PMC8890073 DOI: 10.1016/j.cvdhj.2021.05.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
26
de Sales Carvalho NR, da Conceição Leal Carvalho Rodrigues M, de Carvalho Filho AO, Mathew MJ. Automatic method for glaucoma diagnosis using a three-dimensional convoluted neural network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.146] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
27
Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakomoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog Retin Eye Res 2021; 82:100900. [PMID: 32898686 PMCID: PMC7474840 DOI: 10.1016/j.preteyeres.2020.100900] [Citation(s) in RCA: 189] [Impact Index Per Article: 63.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Revised: 08/25/2020] [Accepted: 08/31/2020] [Indexed: 12/29/2022]
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using tele-health supported by digital innovations. These innovations include artificial intelligence (AI), 5th-generation (5G) telecommunication networks, and the Internet of Things (IoT), creating an inter-dependent ecosystem offering opportunities to develop new models of eye care that address the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Tele-health and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract, and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally was significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of telehealth and digital innovations during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
Affiliation(s)
- Ji-Peng Olivia Li, Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Hanruo Liu, Beijing Tongren Hospital; Capital Medical University; Beijing Institute of Ophthalmology, Beijing, China
- Darren S J Ting, Academic Ophthalmology, University of Nottingham, United Kingdom
- Sohee Jeon, Keye Eye Center, Seoul, Republic of Korea
- Judy E Kim, Medical College of Wisconsin, Milwaukee, WI, USA
- Dawn A Sim, NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Peter B M Thomas, NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Haotian Lin, Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, China
- Youxin Chen, Peking Union Medical College Hospital, Beijing, China
- Taiji Sakamoto, Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Japan
- Dennis S C Lam, C-MER Dennis Lam Eye Center, C-Mer International Eye Care Group Limited, Hong Kong; International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Louis R Pasquale, Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, USA
- Tien Y Wong, Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
- Linda A Lam, USC Roski Eye Institute, University of Southern California (USC) Keck School of Medicine, Los Angeles, CA, USA
- Daniel S W Ting, Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
28
Zhang D, Liu X, Shao M, Sun Y, Lian Q, Zhang H. The value of artificial intelligence and imaging diagnosis in the fight against COVID-19. Personal and Ubiquitous Computing 2021; 27:783-792. [PMID: 33564287] [PMCID: PMC7861001] [DOI: 10.1007/s00779-021-01522-7]
Abstract
The outbreak of the novel coronavirus pneumonia (COVID-19) has had a huge impact on the world. Only by adhering to the prevention and control principles of early diagnosis, early isolation, and early treatment can the spread of the virus be contained to the greatest extent. This article takes artificial intelligence-assisted medical imaging diagnosis as its research object: it combines artificial intelligence with CT medical imaging diagnosis, introduces an intelligent COVID-19 detection system, and uses it to perform COVID-19 screening and lesion evaluation. CT examination has the advantages of high speed and high accuracy, which can provide a sound basis for clinical diagnosis. The study collected 32 lung CT scans of patients with confirmed COVID-19. Two professional radiologists analyzed the CT images using both conventional imaging diagnostic methods and artificial intelligence-assisted imaging diagnostic methods, and the comparison revealed the gap between the two. According to the experiments, CT imaging diagnosis assisted by artificial intelligence takes only 0.744 min on average, saving considerable time and cost compared with the average of 3.623 min for conventional diagnosis. In terms of overall test accuracy, it can be concluded that the combination of artificial intelligence and imaging diagnosis has very high application value in COVID-19 diagnosis.
Affiliation(s)
- Dandan Zhang, Department of Medical Imaging, Henan Provincial People’s Hospital, Zhengzhou, 450003, Henan, China
- Xiaoya Liu, Department of Cerebrovascular Surgery, Henan Provincial People’s Hospital, Zhengzhou, 450003, Henan, China
- Mingyue Shao, Department of Medical Imaging, Henan Provincial People’s Hospital, Zhengzhou, 450003, Henan, China
- Yaping Sun, Department of Medical Imaging, Henan Provincial People’s Hospital, Zhengzhou, 450003, Henan, China
- Qingyuan Lian, Department of Medical Imaging, Henan Provincial People’s Hospital, Zhengzhou, 450003, Henan, China
- Hongmei Zhang, Department of Nursing, Henan Provincial People’s Hospital, Zhengzhou, 450003, Henan, China
29
Bashar SK, Han D, Zieneddin F, Ding E, Fitzgibbons TP, Walkey AJ, McManus DD, Javidi B, Chon KH. Novel Density Poincaré Plot Based Machine Learning Method to Detect Atrial Fibrillation From Premature Atrial/Ventricular Contractions. IEEE Trans Biomed Eng 2021; 68:448-460. [PMID: 32746035] [DOI: 10.1109/tbme.2020.3004310]
Abstract
OBJECTIVE Detection of atrial fibrillation (AF) in the presence of premature atrial contractions (PACs) and premature ventricular contractions (PVCs) is difficult, as frequent occurrences of these ectopic beats can mimic the typical irregular patterns of AF. In this paper, we present a novel density Poincaré plot-based machine learning method to detect AF from PAC/PVCs using electrocardiogram (ECG) recordings. METHODS First, we propose the generation of a new density Poincaré plot, which is derived from the difference of the heart rate (DHR) and provides the overlapping phase-space trajectory information of the DHR. Next, from this density Poincaré plot, several image-processing approaches, including statistical central moments, template correlation, Zernike moments, the discrete wavelet transform and Hough transform features, are used to extract suitable features. Subsequently, the infinite latent feature selection algorithm is applied to rank the features. Finally, classification of AF vs. PAC/PVC is performed using K-Nearest Neighbor, Support Vector Machine (SVM) and Random Forest (RF) classifiers. Our method is developed and validated using a subset of the Medical Information Mart for Intensive Care (MIMIC) III database containing 10 AF and 10 PAC/PVC subjects. RESULTS During segment-wise 10-fold cross-validation, SVM achieved the best performance with 98.99% sensitivity, 95.18% specificity and 97.45% accuracy on the extracted features. In the subject-wise scenario, RF achieved the highest accuracy of 91.93%. Moreover, we further validated the proposed method on two other databases: wearable armband ECG data and the PhysioNet AFPDB. 100% PAC detection accuracy was obtained for both databases without any further training. CONCLUSION Our proposed density Poincaré plot-based method showed superior performance when compared with four existing algorithms, demonstrating the efficacy of the extracted image-domain features.
SIGNIFICANCE From intensive care unit ECGs to wearable armband ECGs, the proposed method is shown to discriminate PAC/PVCs from AF with high accuracy.
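The density Poincaré plot at the core of this method can be sketched as a 2D histogram of successive DHR pairs, so overlapping trajectory points accumulate density. This is a minimal illustration, assuming a heart-rate series in beats per minute and hypothetical bin settings; the paper's exact binning and downstream image-domain features are not reproduced here:

```python
import numpy as np

def density_poincare_plot(hr, bins=16, rng=(-50.0, 50.0)):
    """Build a density Poincare plot from a heart-rate series (bpm).

    The difference of the heart rate (DHR) is obtained by lag-1
    differencing; successive pairs (DHR[n], DHR[n+1]) are binned into
    a 2D histogram, so overlapping phase-space points add up.
    """
    dhr = np.diff(np.asarray(hr, dtype=float))   # DHR signal
    x, y = dhr[:-1], dhr[1:]                     # successive DHR pairs
    density, _, _ = np.histogram2d(x, y, bins=bins, range=[rng, rng])
    return density / max(density.sum(), 1.0)     # normalize to a distribution

# A regular rhythm concentrates mass in one bin; an irregular rhythm spreads it.
regular = density_poincare_plot(60 + np.zeros(200))
irregular = density_poincare_plot(60 + 20 * np.random.default_rng(0).standard_normal(200))
```

The normalized histogram can then be treated as a small image from which moment- or transform-based features are extracted.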
30
Automated segmentation and classification of retinal features for glaucoma diagnosis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102244]
31
Classification of glaucoma using hybrid features with machine learning approaches. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102137]
32
A review on the optic disc and optic cup segmentation and classification approaches over retinal fundus images for detection of glaucoma. SN Applied Sciences 2020. [DOI: 10.1007/s42452-020-03221-z]
33
Kishore B, Ananthamoorthy NP. Glaucoma classification based on intra-class and extra-class discriminative correlation and consensus ensemble classifier. Genomics 2020; 112:3089-3096. [PMID: 32470644] [DOI: 10.1016/j.ygeno.2020.05.017]
Abstract
Automatic classification of glaucoma from fundus images is a vital diagnostic tool for Computer-Aided Diagnosis (CAD) systems. In this work, a novel fused feature extraction technique and ensemble classifier fusion is proposed for the diagnosis of glaucoma. The proposed method comprises three stages. Initially, the fundus images are preprocessed, followed by feature extraction and feature fusion by Intra-Class and Extra-Class Discriminative Correlation Analysis (IEDCA). The feature fusion approach eliminates between-class correlation while retaining sufficient Feature Dimension (FD) for Correlation Analysis (CA). The fused features are then fed individually to three classifiers, namely Support Vector Machine (SVM), Random Forest (RF) and K-Nearest Neighbor (KNN). Finally, a classifier fusion stage combines the decisions of this ensemble using a Consensus-based Combining Method (CCM), which adjusts the weights iteratively after comparing the outputs of all the classifiers. The proposed fusion classifier improves accuracy and convergence compared with the individual algorithms; a classification accuracy of 99.2% is accomplished by the two-level hybrid fusion approach. The method is evaluated on the public High Resolution Fundus (HRF) and DRIVE datasets with cross-dataset validation.
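The consensus-based combining step can be pictured as an iterative re-weighting of classifier decisions: each classifier's weight grows with its agreement with the current weighted-majority consensus. This is a hypothetical sketch of that idea, not the paper's exact CCM update rule:

```python
import numpy as np

def consensus_fusion(votes, n_iter=10):
    """Sketch of consensus-based combining (CCM) for binary decisions.

    votes: (n_classifiers, n_samples) array of 0/1 class decisions.
    Each iteration re-weights every classifier by how often it agrees
    with the weighted-majority consensus, then renormalizes.
    Returns (fused_labels, weights).
    """
    votes = np.asarray(votes, dtype=float)
    k = votes.shape[0]
    w = np.full(k, 1.0 / k)                                # start from equal weights
    for _ in range(n_iter):
        consensus = (w @ votes >= 0.5 * w.sum()).astype(float)
        agreement = (votes == consensus).mean(axis=1)      # per-classifier agreement rate
        w = agreement / agreement.sum()                    # renormalize weights
    fused = (w @ votes >= 0.5 * w.sum()).astype(int)
    return fused, w

# Three classifiers: two mostly agree, one is contrarian; fusion follows the majority.
v = np.array([[1, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 0]])
labels, weights = consensus_fusion(v)
```

The contrarian classifier ends up with the smallest weight, so the fused decision tracks the reliable pair.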
34
Martins J, Cardoso JS, Soares F. Offline computer-aided diagnosis for glaucoma detection using fundus images targeted at mobile devices. Comput Methods Programs Biomed 2020; 192:105341. [PMID: 32155534] [DOI: 10.1016/j.cmpb.2020.105341]
Abstract
BACKGROUND AND OBJECTIVE Glaucoma, an eye condition that leads to permanent blindness, is typically asymptomatic and therefore difficult to diagnose in time. If diagnosed in time, however, its progression can effectively be slowed with adequate treatment; hence, an early diagnosis is of utmost importance. Nonetheless, conventional approaches to diagnosing glaucoma rely on expensive and bulky equipment operated by qualified experts, making it difficult, costly and time-consuming to screen large numbers of people. Consequently, new alternatives that overcome these issues should be explored. METHODS This work proposes an interpretable computer-aided diagnosis (CAD) pipeline that can diagnose glaucoma from fundus images and run offline on mobile devices. Several public datasets of fundus images were merged and used to build Convolutional Neural Networks (CNNs) that perform segmentation and classification tasks. These networks are then assembled into a pipeline for glaucoma assessment that outputs a glaucoma confidence level and also provides several morphological features and segmentations of relevant structures, resulting in an interpretable diagnosis. To assess the performance of this method in a restricted environment, the pipeline was integrated into a mobile application and its time and space complexity were assessed. RESULTS On the test set, the developed pipeline achieved Intersection over Union (IoU) of 0.91 and 0.75 for optic disc and optic cup segmentation, respectively. For classification, an accuracy of 0.87 with a sensitivity of 0.85 and an AUC of 0.93 were attained. Moreover, the pipeline runs on an average Android smartphone in under two seconds. CONCLUSIONS The results demonstrate the potential of this method to contribute to an early glaucoma diagnosis. The proposed approach achieved similar or slightly better metrics than current CAD systems for glaucoma assessment while running on more restricted devices. The pipeline can therefore be used to construct accurate and affordable CAD systems that could enable large glaucoma screenings, contributing to an earlier diagnosis of this condition.
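The Intersection over Union metric used above to score the disc and cup segmentations is simple to compute from binary masks; a minimal sketch with a toy example (the masks below are illustrative, not from the paper's data):

```python
import numpy as np

def intersection_over_union(pred, target):
    """IoU between two binary segmentation masks (e.g. optic disc or cup)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:                         # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, target).sum() / union

# A predicted mask overlapping 3 of 4 ground-truth pixels, with 1 false positive:
pred = np.array([[1, 1, 0], [1, 1, 0]])
gt   = np.array([[1, 1, 0], [1, 0, 1]])
# intersection = 3 pixels, union = 5 pixels, IoU = 0.6
```

IoU of 0.91 for the disc therefore means the predicted and true disc regions overlap in 91% of their combined area.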
Affiliation(s)
- José Martins, Fraunhofer Portugal AICOS, Rua Alfredo Allen 455/461, Porto 4200-135, Portugal
- Jaime S Cardoso, INESC TEC and Faculty of Engineering of the University of Porto, Portugal
- Filipe Soares, Fraunhofer Portugal AICOS, Rua Alfredo Allen 455/461, Porto 4200-135, Portugal
35
Barros DMS, Moura JCC, Freire CR, Taleb AC, Valentim RAM, Morais PSG. Machine learning applied to retinal image processing for glaucoma detection: review and perspective. Biomed Eng Online 2020; 19:20. [PMID: 32293466] [PMCID: PMC7160894] [DOI: 10.1186/s12938-020-00767-2]
Abstract
INTRODUCTION This is a systematic review of the main algorithms using machine learning (ML) in retinal image processing for glaucoma diagnosis and detection. ML has proven to be a significant tool for the development of computer-aided technology, and secondary research for ophthalmologists has been widely conducted over the years. Such aspects indicate the importance of ML in the context of retinal image processing. METHODS The publications composing this review were gathered from the Scopus, PubMed, IEEE Xplore and Science Direct databases. Papers published between 2014 and 2019 were selected. Studies that used the segmented optic disc method were excluded, and only methods that applied a classification process were considered. A systematic analysis was performed on these studies and the results were summarized. DISCUSSION Among the architectures used for ML in retinal image processing, some studies applied feature extraction and dimensionality reduction to detect and isolate important parts of the analyzed image, while other works utilized a deep convolutional network. Based on the evaluated research, the main differences between the architectures are the number of images demanded for processing and the high computational cost required by deep learning techniques. CONCLUSIONS All the analyzed publications indicated that it is possible to develop an automated system for glaucoma diagnosis. The severity of the disease and its high occurrence rates justify the research that has been carried out. Recent computational techniques, such as deep learning, have shown promise in fundus imaging. Although such techniques require extensive databases and high computational costs, the studies show that data augmentation and transfer learning have been applied as alternative ways to optimize and reduce network training.
Affiliation(s)
- Daniele M S Barros, Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Julio C C Moura, Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Cefas R Freire, Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Ricardo A M Valentim, Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Philippi S G Morais, Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
36
Cao BF, Li JQ, Qiao NS. Nickel foam surface defect detection based on spatial-frequency multi-scale MB-LBP. Soft Comput 2020. [DOI: 10.1007/s00500-019-04513-2]
37
Abstract
Artificial intelligence is advancing rapidly and making its way into all areas of our lives. This review discusses developments and potential practices regarding the use of artificial intelligence in the field of ophthalmology, and the related topic of medical ethics. Various artificial intelligence applications related to the diagnosis of eye diseases were researched in books, journals, search engines, print and social media, and resources were cross-checked to verify the information. Artificial intelligence algorithms, some of which have been approved by the US Food and Drug Administration, have been adopted in the field of ophthalmology, especially in diagnostic studies. Studies are being conducted that prove that artificial intelligence algorithms can be used in ophthalmology, especially in diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity, and some of these algorithms have reached the approval stage. The current state of artificial intelligence studies shows that the technology has advanced considerably and holds promise for future work. It is believed that artificial intelligence applications will be effective in identifying patients with preventable vision loss and directing them to physicians, especially in developing countries where there are fewer trained professionals and physicians are difficult to reach. When we consider the possibility that some future artificial intelligence systems may be candidates for moral/ethical status, certain ethical issues arise. Questions about moral/ethical status are important in some areas of applied ethics. Although it is accepted that current intelligence systems do not have moral/ethical status, it has yet to be determined exactly what characteristics confer, or will confer, moral/ethical status.
Affiliation(s)
- Kadircan Keskinbora, Bahçeşehir University Faculty of Medicine, Department of Ophthalmology, Division of Medical Ethics and History of Medicine, İstanbul, Turkey
- Fatih Güven, Health Sciences University Bakırköy Training and Research Hospital, Clinic of Ophthalmology, İstanbul, Turkey
38
Li L, Xu M, Liu H, Li Y, Wang X, Jiang L, Wang Z, Fan X, Wang N. A large-scale database and a CNN model for attention-based glaucoma detection. IEEE Trans Med Imaging 2020; 39:413-424. [PMID: 31283476] [DOI: 10.1109/tmi.2019.2927226]
Abstract
Glaucoma is one of the leading causes of irreversible vision loss. Many approaches have recently been proposed for automatic glaucoma detection based on fundus images. However, none of the existing approaches can efficiently remove the high redundancy in fundus images, which may reduce the reliability and accuracy of glaucoma detection. To avoid this disadvantage, this paper proposes an attention-based convolutional neural network (CNN) for glaucoma detection, called AG-CNN. Specifically, we first establish a large-scale attention-based glaucoma (LAG) database, which includes 11 760 fundus images labeled as either positive glaucoma (4878) or negative glaucoma (6882). Among these, attention maps for 5824 images are further obtained from ophthalmologists through a simulated eye-tracking experiment. Then, a new AG-CNN structure is designed, including an attention prediction subnet, a pathological area localization subnet, and a glaucoma classification subnet. The attention maps are predicted in the attention prediction subnet to highlight the salient regions for glaucoma detection, in a weakly supervised training manner. In contrast to other attention-based CNN methods, the features are also visualized as the localized pathological area, which is further incorporated into the AG-CNN structure to enhance detection performance. Finally, experimental results on our LAG database and another public glaucoma database show that the proposed AG-CNN approach significantly advances the state of the art in glaucoma detection.
39
Murtagh P, Greene G, O'Brien C. Current applications of machine learning in the screening and diagnosis of glaucoma: a systematic review and meta-analysis. Int J Ophthalmol 2020; 13:149-162. [PMID: 31956584] [DOI: 10.18240/ijo.2020.01.22]
Abstract
AIM To compare the effectiveness of machine learning applied to two well described imaging modalities, optical coherence tomography (OCT) and fundus photography, in terms of diagnostic accuracy in the screening and diagnosis of glaucoma. METHODS A systematic search of the Embase and PubMed databases was undertaken up to 1st of February 2019. Articles were identified alongside their reference lists and relevant studies were aggregated. A meta-analysis of diagnostic accuracy in terms of area under the receiver operating characteristic curve (AUROC) was performed. For studies that did not report an AUROC, reported sensitivity and specificity values were combined to create a summary ROC curve, which was included in the meta-analysis. RESULTS A total of 23 studies were deemed suitable for inclusion: 10 papers from the OCT cohort and 13 from the fundus photograph cohort. Random-effects meta-analysis gave a pooled AUROC of 0.957 (95%CI=0.917 to 0.997) for fundus photographs and 0.923 (95%CI=0.889 to 0.957) for the OCT cohort. The slightly higher accuracy of the fundus photograph methods is likely attributable to the much larger database of images used to train the models (59 788 vs 1743). CONCLUSION No demonstrable difference is shown between the diagnostic accuracy of the two modalities. The ease of access and lower cost associated with fundus photograph acquisition make it the more appealing option for screening on a global scale; however, further studies need to be undertaken, owing largely to the poor study quality associated with the fundus photography cohort.
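Random-effects pooling of per-study estimates is commonly done with the DerSimonian-Laird estimator. The sketch below assumes each study reports an effect (e.g. an AUROC) with a standard error; the review's exact pooling procedure may differ:

```python
import numpy as np

def dersimonian_laird_pool(effects, se):
    """Random-effects pooled estimate (DerSimonian-Laird) with a 95% CI.

    effects: per-study estimates (e.g. AUROCs); se: their standard errors.
    """
    effects, se = np.asarray(effects, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()           # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                    # between-study variance estimate
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    pooled = (w_star * effects).sum() / w_star.sum()
    se_pooled = np.sqrt(1.0 / w_star.sum())          # 1.96 = normal 97.5% quantile
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

With heterogeneous studies, tau-squared widens the interval relative to a fixed-effect analysis, which is why pooled CIs like 0.917 to 0.997 can be fairly broad.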
Affiliation(s)
- Patrick Murtagh, Department of Ophthalmology, Mater Misericordiae University Hospital, Eccles Street, Dublin D07 R2WY, Ireland
- Garrett Greene, RCSI Education and Research Centre, Beaumont Hospital, Dublin D05 AT88, Ireland
- Colm O'Brien, Department of Ophthalmology, Mater Misericordiae University Hospital, Eccles Street, Dublin D07 R2WY, Ireland
40
Son J, Shin JY, Kim HD, Jung KH, Park KH, Park SJ. Development and validation of deep learning models for screening multiple abnormal findings in retinal fundus images. Ophthalmology 2020; 127:85-94. [DOI: 10.1016/j.ophtha.2019.05.029]
41
Zou B, Chen C, Zhao R, Ouyang P, Zhu C, Chen Q, Duan X. A novel glaucomatous representation method based on Radon and wavelet transform. BMC Bioinformatics 2019; 20:693. [PMID: 31874641] [PMCID: PMC6929399] [DOI: 10.1186/s12859-019-3267-6]
Abstract
Background Glaucoma is an irreversible eye disease caused by optic nerve injury; it therefore usually changes the structure of the optic nerve head (ONH). Clinically, ONH assessment based on fundus images is one of the most useful ways to detect glaucoma. However, an effective representation for ONH assessment is challenging because the structural changes produce complex and mixed visual patterns. Method We propose a novel feature representation based on the Radon and wavelet transforms to capture these visual patterns. First, the Radon transform (RT) maps the fundus image into the Radon domain, in which the spatial radial variations of the ONH are converted into a discrete signal describing the image's structural features. Second, the discrete wavelet transform (DWT) is utilized to capture differences and obtain a quantitative representation. Finally, principal component analysis (PCA) and a support vector machine (SVM) are used for dimensionality reduction and glaucoma detection. Results The proposed method achieves state-of-the-art detection performance on the RIM-ONE r2 dataset, with accuracy and area under the curve (AUC) of 0.861 and 0.906, respectively. Conclusion We showed that the proposed method has the capacity to serve as an effective tool for large-scale glaucoma screening and can provide a reference for the clinical diagnosis of glaucoma.
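The RT-then-DWT representation can be illustrated in miniature. The sketch below is a toy version of the pipeline: it computes only axis-aligned projections (a full Radon transform integrates along lines at every angle) and a one-level Haar DWT of the resulting signal; names and sizes are illustrative:

```python
import numpy as np

def radial_projection(img, angle):
    """Toy Radon-style projection at 0 or 90 degrees only.

    A full Radon transform integrates along rotated lines; here only
    the axis-aligned cases are sketched (column sums at 0 degrees,
    row sums at 90 degrees).
    """
    img = np.asarray(img, float)
    return img.sum(axis=0) if angle == 0 else img.sum(axis=1)

def haar_dwt1(signal):
    """One-level Haar DWT: approximation and detail coefficients."""
    s = np.asarray(signal, float)
    if len(s) % 2:                        # pad odd-length signals
        s = np.append(s, s[-1])
    even, odd = s[0::2], s[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

# Project a tiny "fundus patch" and describe it with wavelet coefficients.
patch = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]])
proj = radial_projection(patch, 0)        # column sums -> [1, 4, 1]
approx, detail = haar_dwt1(proj)
```

The approximation coefficients summarize the projection's overall shape, while the detail coefficients capture local differences; these would then feed PCA and the SVM.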
Affiliation(s)
- Beiji Zou, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Changlong Chen, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Rongchang Zhao, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Pingbo Ouyang, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; The Second Xiangya Hospital of Central South University, Changsha, 410011, China
- Chengzhang Zhu, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Qilin Chen, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Xuanchu Duan, The Second Xiangya Hospital of Central South University, Changsha, 410011, China
42
Detection of glaucoma using two dimensional tensor empirical wavelet transform. SN Applied Sciences 2019. [DOI: 10.1007/s42452-019-1467-3]
43
Sarhan A, Rokne J, Alhajj R. Glaucoma detection using image processing techniques: a literature review. Comput Med Imaging Graph 2019; 78:101657. [PMID: 31675645] [DOI: 10.1016/j.compmedimag.2019.101657]
Abstract
The term glaucoma refers to a group of heterogeneous diseases that cause the degeneration of retinal ganglion cells (RGCs). The degeneration of RGCs leads to two main issues: (i) structural changes to the optic nerve head and the nerve fiber layer, and (ii) simultaneous functional failure of the visual field. These two effects may lead to peripheral vision loss and, if the condition is left to progress, eventually to blindness. No cure for glaucoma exists apart from early detection and treatment by optometrists and ophthalmologists. The degeneration of RGCs is normally detected from retinal images assessed by an expert. These retinal images also provide other vital information about the health of an eye, so it is essential to develop automated techniques for extracting this information. The rapid development of digital imaging and computer vision techniques has increased the potential for analyzing eye health from images. This paper surveys current approaches to detecting glaucoma from 2D and 3D images; both their limitations and possible future directions are highlighted. The study also describes the datasets used for retinal analysis along with existing evaluation algorithms.
Affiliation(s)
- Abdullah Sarhan, Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Jon Rokne, Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Reda Alhajj, Department of Computer Science, University of Calgary, Calgary, AB, Canada; Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey
44
Benzebouchi NE, Azizi N, Ashour AS, Dey N, Sherratt RS. Multi-modal classifier fusion with feature cooperation for glaucoma diagnosis. J Exp Theor Artif Intell 2019. [DOI: 10.1080/0952813x.2019.1653383]
Affiliation(s)
- Nacer Eddine Benzebouchi, Computer Science Department, Labged Laboratory, Badji Mokhtar Annaba University, Annaba, Algeria
- Nabiha Azizi, Computer Science Department, Labged Laboratory, Badji Mokhtar Annaba University, Annaba, Algeria
- Amira S. Ashour, Department of Electronics Engineering and Communication Engineering, Tanta University, Tanta, Egypt
- Nilanjan Dey, Department of Information Technology, Techno India College of Technology, Kolkata, India
- R. Simon Sherratt, Department of Biomedical Engineering, University of Reading, Reading, UK
45
Medinoid: Computer-aided diagnosis and localization of glaucoma using deep learning. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9153064]
Abstract
Glaucoma is a leading eye disease, causing vision loss by gradually affecting peripheral vision if left untreated. Current diagnosis of glaucoma is performed by ophthalmologists, human experts who typically need to analyze different types of medical images generated by different types of medical equipment: fundus, Retinal Nerve Fiber Layer (RNFL), Optical Coherence Tomography (OCT) disc, OCT macula, perimetry, and/or perimetry deviation. Capturing and analyzing these medical images is labor intensive and time consuming. In this paper, we present a novel approach for glaucoma diagnosis and localization, only relying on fundus images that are analyzed by making use of state-of-the-art deep learning techniques. Specifically, our approach towards glaucoma diagnosis and localization leverages Convolutional Neural Networks (CNNs) and Gradient-weighted Class Activation Mapping (Grad-CAM), respectively. We built and evaluated different predictive models using a large set of fundus images, collected and labeled by ophthalmologists at Samsung Medical Center (SMC). Our experimental results demonstrate that our most effective predictive model is able to achieve a high diagnosis accuracy of 96%, as well as a high sensitivity of 96% and a high specificity of 100% for Dataset-Optic Disc (OD), a set of center-cropped fundus images highlighting the optic disc. Furthermore, we present Medinoid, a publicly-available prototype web application for computer-aided diagnosis and localization of glaucoma, integrating our most effective predictive model in its back-end.
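The Grad-CAM localization step combines a convolutional layer's activations with the gradients of the class score. A minimal framework-agnostic numpy sketch, assuming the activations and gradients have already been extracted from the network (the array shapes below are illustrative):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and gradients.

    activations, gradients: (channels, H, W) arrays for the target class.
    Channel weights are the global-average-pooled gradients; the map is
    the ReLU of the weighted sum of activation maps, normalized to [0, 1].
    """
    a = np.asarray(activations, float)
    g = np.asarray(gradients, float)
    weights = g.mean(axis=(1, 2))                             # alpha_k: GAP of gradients
    cam = np.maximum(np.tensordot(weights, a, axes=1), 0.0)   # ReLU(sum_k alpha_k * A_k)
    return cam / cam.max() if cam.max() > 0 else cam

# Two 2x2 channels: positive gradients keep channel 0, negative suppress channel 1.
acts  = np.array([[[1., 0.], [0., 0.]], [[0., 0.], [0., 1.]]])
grads = np.array([[[1., 1.], [1., 1.]], [[-1., -1.], [-1., -1.]]])
heatmap = grad_cam(acts, grads)       # highlights the top-left pixel
```

Upsampled to the input resolution and overlaid on the fundus image, such a heatmap indicates which regions (typically around the optic disc) drove the glaucoma prediction.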
46
Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Indexed: 12/19/2022]
Abstract
Diabetic retinopathy (DR) is a complication of diabetes found throughout the world. DR occurs due to high glucose levels in the blood, which cause alterations in the retinal microvasculature. Because DR often develops without early warning symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment can control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems have been developed to assist ophthalmologists in observing inter- and intra-variations. In this paper, a review of recent state-of-the-art CAD systems for the diagnosis of DR is presented. We describe the CAD systems that have been developed with various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are compared in terms of statistical parameters to evaluate them quantitatively. The comparison indicates that there is still a need for more accurate CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
47
Daien V, Muyl-Cipollina A. [Can Big Data change our practices?]. J Fr Ophtalmol 2019; 42:551-571. [PMID: 30979558 DOI: 10.1016/j.jfo.2018.11.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/01/2018] [Accepted: 11/22/2018] [Indexed: 11/19/2022]
Abstract
The European Medicines Agency has defined Big Data by the "3 V's": Volume, Velocity and Variety. These large databases give access to real-life data on patient care and are particularly suited to studies of adverse events and pharmacoepidemiology. Deep learning is a family of machine-learning algorithms that model high-level abstractions in data using architectures composed of multiple nonlinear transformations. This article reviews the literature on uses of deep learning in ophthalmology and shows how Big Data and deep learning can help the field, pointing out their advantages and disadvantages.
Affiliation(s)
- V Daien
- Service d'ophtalmologie, hôpital Gui De Chauliac, 80, avenue Augustin Fliche, 34295 Montpellier, France; Inserm, epidemiological and clinical research, université Montpellier, 34295 Montpellier, France; The Save Sight Institute, Sydney Medical School, The University of Sydney, Sydney, Australia
- A Muyl-Cipollina
- Service d'ophtalmologie, hôpital Gui De Chauliac, 80, avenue Augustin Fliche, 34295 Montpellier, France.
48
Optic Disc Localization in Complicated Environment of Retinal Image Using Circular-Like Estimation. Arab J Sci Eng 2019. [DOI: 10.1007/s13369-019-03756-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/27/2022]
49
A coarse-to-fine deep learning framework for optic disc segmentation in fundus images. Biomed Signal Process Control 2019; 51:82-89. [PMID: 33850515 DOI: 10.1016/j.bspc.2019.01.022] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Indexed: 01/10/2023]
Abstract
Accurate segmentation of the optic disc (OD) depicted on color fundus images may aid in the early detection and quantitative diagnosis of retinal diseases, such as glaucoma and optic atrophy. In this study, we proposed a coarse-to-fine deep learning framework on the basis of a classical convolutional neural network (CNN), known as the U-net model, to accurately identify the optic disc. This network was trained separately on color fundus images and their grayscale vessel density maps, leading to two different segmentation results from the entire image. We combined the results using an overlap strategy to identify a local image patch (disc candidate region), which was then fed into the U-net model for further segmentation. Our experiments demonstrated that the developed framework achieved an average intersection over union (IoU) and a dice similarity coefficient (DSC) of 89.1% and 93.9%, respectively, based on 2,978 test images from our collected dataset and six public datasets, as compared to 87.4% and 92.5% obtained by only using the sole U-net model. The comparison with available approaches demonstrated a reliable and relatively high performance of the proposed deep learning framework in automated OD segmentation.
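The overlap strategy this abstract describes (combining two coarse segmentations to pick a disc candidate region for the fine stage) can be sketched in NumPy. The toy masks, the `margin` padding, and the helper name `candidate_region` are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def candidate_region(mask_rgb, mask_vessel, margin=8):
    """Intersect two coarse binary masks and return the bounding box
    (row0, row1, col0, col1) of the overlap, padded by a margin."""
    overlap = mask_rgb & mask_vessel
    rows, cols = np.nonzero(overlap)
    if rows.size == 0:  # no agreement between the two coarse results: fall back to union
        rows, cols = np.nonzero(mask_rgb | mask_vessel)
    r0 = max(rows.min() - margin, 0)
    r1 = min(rows.max() + margin + 1, mask_rgb.shape[0])
    c0 = max(cols.min() - margin, 0)
    c1 = min(cols.max() + margin + 1, mask_rgb.shape[1])
    return r0, r1, c0, c1

# Toy 64x64 masks with overlapping squares standing in for the two coarse outputs
a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), dtype=bool); b[25:45, 25:45] = True
r0, r1, c0, c1 = candidate_region(a, b)
patch_shape = (r1 - r0, c1 - c0)
print(patch_shape)  # (31, 31)
```

The cropped patch `image[r0:r1, c0:c1]` would then be fed to the second U-net for the fine segmentation stage.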
50
Applications of Artificial Intelligence in Ophthalmology: General Overview. J Ophthalmol 2018; 2018:5278196. [PMID: 30581604 PMCID: PMC6276430 DOI: 10.1155/2018/5278196] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Received: 07/07/2018] [Revised: 10/06/2018] [Accepted: 10/17/2018] [Indexed: 12/26/2022]
Abstract
With the emergence of unmanned aerial vehicles, autonomous vehicles, face recognition, and language processing, artificial intelligence (AI) has remarkably changed our lifestyle. Recent studies indicate that AI has astounding potential to perform much better than human beings in some tasks, especially in the field of image recognition. As the amount of image data in ophthalmic imaging centers increases dramatically, there is an urgent need to analyze and process these data. AI has been applied to decipher medical data and has made extraordinary progress in intelligent diagnosis. In this paper, we present the basic workflow for building an AI model and systematically review applications of AI in the diagnosis of eye diseases. Future work should focus on setting up systematic AI platforms to diagnose general eye diseases based on multimodal data in the real world.