1
Wang X, Li H, Zheng H, Sun G, Wang W, Yi Z, Xu A, He L, Wang H, Jia W, Li Z, Li C, Ye M, Du B, Chen C. Automatic Detection of 30 Fundus Diseases Using Ultra-Widefield Fluorescein Angiography with Deep Experts Aggregation. Ophthalmol Ther 2024; 13:1125-1144. [PMID: 38416330] [DOI: 10.1007/s40123-024-00900-7]
Abstract
INTRODUCTION: Inaccurate or untimely diagnosis of fundus diseases leads to vision-threatening complications and even blindness. We built a deep learning platform (DLP) for the automatic detection of 30 fundus diseases using ultra-widefield fluorescein angiography (UWFFA) with deep experts aggregation. METHODS: This retrospective, cross-sectional database study included a total of 61,609 UWFFA images dating from 2016 to 2021, involving more than 3364 subjects at multiple centers across China. All subjects were divided into 30 groups. The state-of-the-art convolutional neural network architecture ConvNeXt was chosen as the backbone, and the receiver operating characteristic (ROC) curve of the proposed system was evaluated on both the primary test data and external test data. We compared the classification performance of the proposed system with that of ophthalmologists, including two retinal specialists. RESULTS: We built a DLP to analyze UWFFA that can detect up to 30 fundus diseases, with a frequency-weighted average area under the receiver operating characteristic curve (AUC) of 0.940 on the primary test dataset and 0.954 on the external multi-hospital test dataset. The tool shows accuracy comparable to that of retina specialists in diagnosis and evaluation. CONCLUSIONS: This is the first study on a large-scale UWFFA dataset for multi-retina-disease classification. We believe that our UWFFA DLP advances artificial intelligence (AI)-based diagnosis of various retinal diseases and will contribute to labor saving and precision medicine, especially in remote areas.
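As a minimal illustration of the frequency-weighted average AUC reported above (this sketch is ours, not the paper's code; the rank-statistic AUC and the function names are assumptions), each class's one-vs-rest AUC is weighted by that class's share of samples:

```python
def binary_auc(scores, labels):
    # Rank-statistic (Mann-Whitney) AUC for one-vs-rest labels (1 = this class).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def frequency_weighted_auc(per_class_scores, per_class_labels):
    # Weight each one-vs-rest AUC by the class's share of all samples.
    total = sum(sum(labels) for labels in per_class_labels)
    return sum(sum(labels) / total * binary_auc(scores, labels)
               for scores, labels in zip(per_class_scores, per_class_labels))
```

With two toy classes whose one-vs-rest AUCs are 1.0 and 0.5 and whose sample shares are 2/3 and 1/3, the weighted average is 5/6.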
Affiliation(s)
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- He Li
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China
- Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Gongpeng Sun
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Wenyu Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- A'min Xu
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Lu He
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Haiyan Wang
- Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), No. 21, Jiefang Road, Xi'an, 710004, Shaanxi, China
- Wei Jia
- Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), No. 21, Jiefang Road, Xi'an, 710004, Shaanxi, China
- Zhiqing Li
- Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, China
- Chang Li
- Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, China
- Mang Ye
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China
- Bo Du
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
2
Uppamma P, Bhattacharya S. A multidomain bio-inspired feature extraction and selection model for diabetic retinopathy severity classification: an ensemble learning approach. Sci Rep 2023; 13:18572. [PMID: 37903967] [PMCID: PMC10616283] [DOI: 10.1038/s41598-023-45886-7]
Abstract
Diabetic retinopathy (DR) is one of the leading causes of blindness globally. Early detection of this condition is essential to prevent the loss of eyesight that results when diabetes mellitus is left untreated for an extended period. This paper proposes an augmented bio-inspired multidomain feature extraction and selection model for diabetic retinopathy severity estimation using an ensemble learning process. The proposed approach first identifies DR severity levels from retinal images by segmenting the optic disc, macula, blood vessels, exudates, and hemorrhages using an adaptive thresholding process. Once the images are segmented, multidomain features are extracted from the retinal images, including frequency, entropy, cosine, Gabor, and wavelet components. These data are fed into a novel Modified Moth Flame Optimization-based feature selection method that assists in optimal feature selection. Finally, an ensemble model combining several machine learning (ML) algorithms, namely Naive Bayes, K-Nearest Neighbours, Support Vector Machine, Multilayer Perceptron, Random Forests, and Logistic Regression, is used to identify the various severity complications of DR. Experiments on different openly accessible data sources show that the proposed method outperformed conventional methods and achieved an accuracy of 96.5% in identifying DR severity levels.
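The ensemble step described above can be sketched with simple majority voting over the base learners' predictions (a generic illustration, not the authors' implementation; the paper's ensemble may combine models differently):

```python
from collections import Counter

def majority_vote(per_model_predictions):
    # per_model_predictions: one list of predicted severity labels per base model.
    # Returns, for each sample, the label most base models agreed on
    # (ties resolve to the label seen first).
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*per_model_predictions)]
```

For example, with three base models voting on three fundus images, an image labeled severe by two of the three models is reported as severe.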
Affiliation(s)
- Posham Uppamma
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India
- Sweta Bhattacharya
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India
3
Susheel Kumar K, Pratap Singh N. Identification of retinal diseases based on retinal blood vessel segmentation using Dagum PDF and feature-based machine learning. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2183319]
Affiliation(s)
- K. Susheel Kumar
- Department of Computer Science and Engineering, National Institute of Technology, Hamirpur, India
- Department of Computer Science and Engineering, Gandhi Institute of Technology and Management, Bengaluru, India
- Nagendra Pratap Singh
- Department of Computer Science and Engineering, National Institute of Technology, Hamirpur, India
4
Shafay M, Ahmad RW, Salah K, Yaqoob I, Jayaraman R, Omar M. Blockchain for deep learning: review and open challenges. Cluster Computing 2023; 26:197-221. [PMID: 35309043] [PMCID: PMC8919362] [DOI: 10.1007/s10586-022-03582-7]
Abstract
Deep learning has gained huge traction in recent years because of its potential to make informed decisions. A large portion of today's deep learning systems are based on centralized servers and fall short in providing operational transparency, traceability, reliability, security, and trusted data provenance. Moreover, training deep learning models on centralized data is vulnerable to the single-point-of-failure problem. In this paper, we explore the importance of integrating blockchain technology with deep learning. We review the existing literature focused on this integration, and classify and categorize it by devising a thematic taxonomy based on seven parameters: blockchain type, deep learning models, deep learning-specific consensus protocols, application area, services, data types, and deployment goals. We provide insightful discussions of state-of-the-art blockchain-based deep learning frameworks by highlighting their strengths and weaknesses. Furthermore, we compare the existing frameworks on four parameters: blockchain type, consensus protocol, deep learning method, and dataset. Finally, we present important research challenges that need to be addressed to develop highly trustworthy deep learning frameworks.
Affiliation(s)
- Muhammad Shafay
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, 127788, UAE
- Raja Wasim Ahmad
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, 127788, UAE
- College of Engineering and Information Technology, Ajman University, Ajman, UAE
- Khaled Salah
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, 127788, UAE
- Ibrar Yaqoob
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, 127788, UAE
- Raja Jayaraman
- Department of Industrial & Systems Engineering, Khalifa University, Abu Dhabi, 127788, UAE
- Mohammed Omar
- Department of Industrial & Systems Engineering, Khalifa University, Abu Dhabi, 127788, UAE
5
Nadeem MW, Goh HG, Hussain M, Liew SY, Andonovic I, Khan MA. Deep Learning for Diabetic Retinopathy Analysis: A Review, Research Challenges, and Future Directions. Sensors (Basel) 2022; 22:6780. [PMID: 36146130] [PMCID: PMC9505428] [DOI: 10.3390/s22186780]
Abstract
Deep learning (DL) enables the creation of computational models comprising multiple processing layers that learn data representations at multiple levels of abstraction. In the recent past, the use of deep learning has been proliferating, yielding promising results in applications across a growing number of fields, most notably in image processing, medical image analysis, data analysis, and bioinformatics. DL algorithms have also had a significant positive impact through improvements in screening, recognition, segmentation, prediction, and classification applications across different domains of healthcare, such as abdominal, cardiac, pathology, and retinal imaging. Given the extensive body of recent scientific contributions in this discipline, a comprehensive review of deep learning developments in the domain of diabetic retinopathy (DR) analysis, viz., screening, segmentation, prediction, classification, and validation, is presented here. A critical analysis of the relevant reported techniques is carried out, and the associated advantages and limitations are highlighted, culminating in the identification of research gaps and future challenges that help inform the research community's development of more efficient, robust, and accurate DL models for the various challenges in the monitoring and diagnosis of DR.
Affiliation(s)
- Muhammad Waqas Nadeem
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Hock Guan Goh
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Correspondence: (H.G.G.); (I.A.)
- Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Soung-Yue Liew
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Ivan Andonovic
- Department of Electronic and Electrical Engineering, Royal College Building, University of Strathclyde, 204 George St., Glasgow G1 1XW, UK
- Correspondence: (H.G.G.); (I.A.)
- Muhammad Adnan Khan
- Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam 13557, Korea
- Faculty of Computing, Riphah School of Computing and Innovation, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
6
Kobat SG, Baygin N, Yusufoglu E, Baygin M, Barua PD, Dogan S, Yaman O, Celiker U, Yildirim H, Tan RS, Tuncer T, Islam N, Acharya UR. Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images. Diagnostics (Basel) 2022; 12:1975. [PMID: 36010325] [PMCID: PMC9406859] [DOI: 10.3390/diagnostics12081975]
Abstract
Diabetic retinopathy (DR) is a common complication of diabetes that can lead to progressive vision loss. Regular surveillance with fundal photography, early diagnosis, and prompt intervention are paramount to reducing the incidence of DR-induced vision loss. However, manual interpretation of fundal photographs is subject to human error. In this study, a new method based on horizontal and vertical patch division was proposed for the automated classification of DR images on fundal photographs. The novel aspects of this study are as follows: we proposed a new non-fixed-size patch division model to obtain high classification results and collected a new fundus image dataset. Two datasets were used to test the model: a newly collected three-class (normal, non-proliferative DR, and proliferative DR) dataset comprising 2355 DR images, and the established open-access five-class Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset comprising 3662 images. Two analysis scenarios, Case 1 and Case 2, with three classes (normal, non-proliferative DR, and proliferative DR) and five classes (normal, mild DR, moderate DR, severe DR, and proliferative DR), respectively, were derived from the APTOS 2019 dataset. These datasets and cases were used to demonstrate the general classification performance of the proposal. By applying transfer learning, the last fully connected and global average pooling layers of the DenseNet201 architecture were used to extract deep features from input DR images and from each of the eight subdivided horizontal and vertical patches. The most discriminative features were then selected using neighborhood component analysis and fed into a standard shallow cubic support vector machine for classification. Our new DR dataset yielded accuracy values of 94.06% and 91.55% for three-class classification with 80:20 hold-out validation and 10-fold cross-validation, respectively. The proposal is thus a new patch-based deep-feature engineering model that uses efficient methods in each phase. Similarly excellent results were seen for three-class classification with the Case 1 dataset. In addition, the model attained five-class classification accuracy rates of 87.43% and 84.90% using 80:20 hold-out validation and 10-fold cross-validation, respectively, on the Case 2 dataset, outperforming prior DR classification studies based on the five-class APTOS 2019 dataset by roughly 2% or more. These findings demonstrate the accuracy and robustness of the proposed model for the classification of DR images.
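The horizontal and vertical patch division at the heart of the model can be sketched as follows (our illustrative code, not the authors'; the paper uses eight patches, here the count is a parameter):

```python
def patch_divide(img, n_patches=8):
    # Split a 2D image (list of pixel rows) into n horizontal strips
    # (groups of whole rows) and n vertical strips (groups of whole columns).
    h, w = len(img), len(img[0])
    horizontal = [img[i * h // n_patches:(i + 1) * h // n_patches]
                  for i in range(n_patches)]
    vertical = [[row[j * w // n_patches:(j + 1) * w // n_patches] for row in img]
                for j in range(n_patches)]
    return horizontal, vertical
```

Deep features would then be extracted from the full image and from each strip before feature selection.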
Affiliation(s)
- Sabiha Gungor Kobat
- Department of Ophthalmology, Firat University Hospital, Firat University, Elazig 23119, Turkey
- Nursena Baygin
- Department of Computer Engineering, Faculty of Engineering, Kafkas University, Kars 36000, Turkey
- Elif Yusufoglu
- Department of Ophthalmology, Elazig Fethi Sekin City Hospital, Elazig 23280, Turkey
- Mehmet Baygin
- Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan 75000, Turkey
- Prabal Datta Barua
- School of Management & Enterprise, University of Southern Queensland, Darling Heights, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Sengul Dogan
- Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Turkey
- Correspondence: ; Tel.: +90-424-2370000-7634
- Orhan Yaman
- Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Turkey
- Ulku Celiker
- Department of Ophthalmology, Firat University Hospital, Firat University, Elazig 23119, Turkey
- Hakan Yildirim
- Department of Ophthalmology, Firat University Hospital, Firat University, Elazig 23119, Turkey
- Ru-San Tan
- Department of Cardiology, National Heart Centre Singapore, Singapore 169609, Singapore
- Duke-NUS Medical Centre, Singapore 169857, Singapore
- Turker Tuncer
- Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Turkey
- Nazrul Islam
- Glaucoma Faculty, Bangladesh Eye Hospital & Institute, Dhaka 1209, Bangladesh
- U. Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, Singapore 599489, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore 599494, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
7
Lin PK, Chiu YH, Huang CJ, Wang CY, Pan ML, Wang DW, Mark Liao HY, Chen YS, Kuan CH, Lin SY, Chen LF. PADAr: physician-oriented artificial intelligence-facilitating diagnosis aid for retinal diseases. J Med Imaging (Bellingham) 2022; 9:044501. [PMID: 35903415] [PMCID: PMC9311486] [DOI: 10.1117/1.jmi.9.4.044501]
Abstract
Purpose: Retinopathy screening via digital imaging is promising for early detection and timely treatment, and tracking retinopathic abnormalities over time can help reveal the risk of disease progression. We developed an innovative physician-oriented artificial intelligence-facilitating diagnosis aid system for retinal diseases, for screening multiple retinopathies and monitoring regions of potential abnormality over time. Approach: Our dataset contains 4908 fundus images from 304 eyes with image-level annotations, covering diabetic retinopathy, age-related macular degeneration, cellophane maculopathy, pathological myopia, and healthy controls (HC). The screening model utilized a VGG-based feature extractor and multiple binary convolutional neural network-based classifiers. Images in time series were aligned via affine transforms estimated through speeded-up robust features. Heatmaps of retinopathy were generated from the feature extractor using gradient-weighted class activation mapping++, and individual candidate retinopathy sites were identified from the heatmaps using a clustering algorithm. Nested cross-validation with an 80%-to-20% train-to-test split was used to evaluate the performance of the screening model. Results: Our screening model achieved 99% accuracy, 93% sensitivity, and 97% specificity in discriminating between patients with retinopathy and HCs. For discriminating between types of retinopathy, our model achieved an averaged performance of 80% accuracy, 78% sensitivity, 94% specificity, 79% F1-score, and a Cohen's kappa coefficient of 0.70. Moreover, the visualization results provided reasonable candidate sites of retinopathy. Conclusions: Our results demonstrate the capability of the proposed model to extract diagnostic information about abnormality and lesion locations, which allows clinicians to focus on patient-centered treatment and untangles the pathological plausibility hidden in deep learning models.
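The multiple-binary-classifier screening design can be reduced to a simple decision rule (a hypothetical sketch of the idea, not the authors' code; the threshold value and the "healthy control" fallback are our assumptions):

```python
def screen(per_disease_scores, threshold=0.5):
    # per_disease_scores: disease name -> score from that disease's binary classifier.
    # Any score at or above threshold flags the disease; the highest-scoring
    # flagged disease is reported, otherwise the image is treated as healthy.
    flagged = {d: s for d, s in per_disease_scores.items() if s >= threshold}
    if not flagged:
        return "healthy control"
    return max(flagged, key=flagged.get)
```

A real system would typically report every flagged disease rather than only the top one; this is the one-vs-rest decision in its simplest form.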
Affiliation(s)
- Po-Kang Lin
- National Taiwan University, Graduate Institute of Biomedical Electronics and Bioinformatics, Taipei, Taiwan
- National Yang Ming Chiao Tung University, School of Medicine, Department of Ophthalmology, Taipei, Taiwan
- Taipei Veterans General Hospital, Department of Ophthalmology, Taipei, Taiwan
- Yu-Hsien Chiu
- National Yang Ming Chiao Tung University, Institute of Brain Science, College of Medicine, Taipei, Taiwan
- Academia Sinica, Institute of Information Science, Taipei, Taiwan
- Chiu-Jung Huang
- National Yang Ming Chiao Tung University, Institute of Brain Science, College of Medicine, Taipei, Taiwan
- Taipei Veterans General Hospital, Integrated Brain Research Unit, Department of Medical Research, Taipei, Taiwan
- Chien-Yao Wang
- Academia Sinica, Institute of Information Science, Taipei, Taiwan
- Mei-Lien Pan
- Academia Sinica, Institute of Information Science, Taipei, Taiwan
- National Yang Ming Chiao Tung University, Information Technology Service Center, Taipei, Taiwan
- Da-Wei Wang
- Academia Sinica, Institute of Information Science, Taipei, Taiwan
- Yong-Sheng Chen
- National Yang Ming Chiao Tung University, Department of Computer Science, Hsinchu, Taiwan
- Chieh-Hsiung Kuan
- National Taiwan University, Department of Electrical Engineering, Taipei, Taiwan
- National Taiwan University, Graduate Institute of Biomedical Electronics and Bioinformatics, Taipei, Taiwan
- Shih-Yen Lin
- National Yang Ming Chiao Tung University, Department of Computer Science, Hsinchu, Taiwan
- Li-Fen Chen
- National Yang Ming Chiao Tung University, Institute of Brain Science, College of Medicine, Taipei, Taiwan
- Taipei Veterans General Hospital, Integrated Brain Research Unit, Department of Medical Research, Taipei, Taiwan
8
A review of diabetic retinopathy: Datasets, approaches, evaluation metrics and future trends. Journal of King Saud University - Computer and Information Sciences 2021. [DOI: 10.1016/j.jksuci.2021.06.006]
9
Sappa LB, Okuwobi IP, Li M, Zhang Y, Xie S, Yuan S, Chen Q. RetFluidNet: Retinal Fluid Segmentation for SD-OCT Images Using Convolutional Neural Network. J Digit Imaging 2021; 34:691-704. [PMID: 34080105] [PMCID: PMC8329142] [DOI: 10.1007/s10278-021-00459-w]
Abstract
Age-related macular degeneration (AMD) is one of the leading causes of irreversible blindness and is characterized by fluid-related accumulations such as intra-retinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED). Spectral-domain optical coherence tomography (SD-OCT) is the primary modality used to diagnose AMD, yet it lacks algorithms that directly detect and quantify the fluid. This work presents an improved convolutional neural network (CNN)-based architecture called RetFluidNet to segment these three types of fluid abnormality from SD-OCT images. The model assimilates different skip-connect operations and atrous spatial pyramid pooling (ASPP) to integrate multi-scale contextual information and thereby achieve the best performance. This work also investigates which hyperparameters and skip-connect techniques are consequential, and which comparatively inconsequential, for fluid segmentation from SD-OCT images, to indicate a starting point for future related research. RetFluidNet was trained and tested on SD-OCT images from 124 patients and achieved accuracies of 80.05%, 92.74%, and 95.53% for IRF, PED, and SRF, respectively. RetFluidNet showed significant improvement over competing works, with accuracy and time efficiency suitable for clinical application. RetFluidNet is a fully automated method that can support early detection and follow-up of AMD.
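The atrous (dilated) convolutions underlying ASPP space kernel taps apart to enlarge the receptive field without adding parameters; a 1D sketch (ours, for illustration only):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    # Atrous convolution: kernel taps are spaced `dilation` samples apart
    # ("valid" padding, no kernel flip), so larger dilations see wider context.
    span = (len(kernel) - 1) * dilation
    return [sum(k * signal[i + j * dilation] for j, k in enumerate(kernel))
            for i in range(len(signal) - span)]
```

ASPP runs the same feature map through several such convolutions at different dilation rates and concatenates the results, which is how architectures of this kind integrate multi-scale context.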
Affiliation(s)
- Loza Bekalo Sappa
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Idowu Paul Okuwobi
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Mingchao Li
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Yuhan Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Sha Xie
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, 300 Guangzhou Road, Nanjing, 210029, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
10
Riaz H, Park J, Kim PH, Kim J. Retinal Healthcare Diagnosis Approaches with Deep Learning Techniques. Journal of Medical Imaging and Health Informatics 2021. [DOI: 10.1166/jmihi.2021.3309]
Abstract
The retina is an important organ of the human body, with a crucial function in the vision mechanism. A minor disturbance in the retina can cause various abnormalities in the eye, as well as complex retinal diseases such as diabetic retinopathy. To diagnose such diseases in their early stages, many researchers are incorporating machine learning (ML) techniques. The combination of medical science with ML improves the healthcare diagnosis systems of hospitals, clinics, and other providers. Recently, AI-based healthcare diagnosis systems have assisted clinicians in handling more patients in less time and have improved diagnostic accuracy. In this paper, we review cutting-edge AI-based retinal diagnosis technologies. This article also briefly describes the potential of the latest densely connected convolutional networks (DenseNets) to improve the performance of diagnosis systems. Moreover, this paper focuses on state-of-the-art results from comprehensive investigations in retinal diagnosis and the development of AI-based retinal healthcare diagnosis approaches with deep-learning models.
Affiliation(s)
- Hamza Riaz
- Department of Health Science and Technology, Gachon Advanced Institute for Health Sciences & Technology, Incheon 21999, Korea
- Jisu Park
- Department of Health Science and Technology, Gachon Advanced Institute for Health Sciences & Technology, Incheon 21999, Korea
- Peter H. Kim
- School of Information, University of California, Berkeley, 102 South Hall #4600, CA 94720, USA
- Jungsuk Kim
- Department of Biomedical Engineering, Gachon University, 534-2, Hambakmoe-ro, Incheon 21936, Korea
11
Yoo TK, Choi JY, Kim HK. Feasibility study to improve deep learning in OCT diagnosis of rare retinal diseases with few-shot classification. Med Biol Eng Comput 2021; 59:401-415. [PMID: 33492598] [PMCID: PMC7829497] [DOI: 10.1007/s11517-021-02321-1]
Abstract
Deep learning (DL) has been successfully applied to the diagnosis of ophthalmic diseases. However, rare diseases are commonly neglected due to insufficient data. Here, we demonstrate that few-shot learning (FSL) using a generative adversarial network (GAN) can improve the applicability of DL in the optical coherence tomography (OCT) diagnosis of rare diseases. Four major classes with large datasets and five rare-disease classes with few-shot datasets are included in this study. Before training the classifier, we constructed GAN models to generate pathological OCT images of each rare disease from normal OCT images. The Inception-v3 architecture was trained using the augmented training dataset, and the final model was validated using an independent test dataset. The synthetic images helped in the extraction of the characteristic features of each rare disease. The proposed DL model demonstrated a significant improvement in the accuracy of OCT diagnosis of rare retinal diseases and outperformed traditional DL models, the Siamese network, and the prototypical network. By increasing the accuracy of diagnosing rare retinal diseases through FSL, clinicians can avoid neglecting rare diseases with DL assistance, thereby reducing diagnosis delay and patient burden.
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Sangdang-gu, Cheongju, South Korea
- Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, OH, USA
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
12
Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268] [DOI: 10.1016/j.jfo.2020.08.009]
Abstract
Diabetic retinopathy (DR) is a complication whose incidence is rising with the rapid spread of diabetes worldwide. DR can blind diabetic individuals. Early detection of DR is essential for timely treatment and preservation of vision. DR can be detected manually by an ophthalmologist, who examines retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhages, exudates, and/or microaneurysms. This is a time-consuming, costly, and challenging task. An automated system using artificial intelligence can perform this function more easily, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhages, and microaneurysms. In addition, the authors point out future directions for overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China.
- G Sun
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- S Mazhar
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
13
Sarki R, Ahmed K, Wang H, Zhang Y. Automated detection of mild and multi-class diabetic eye diseases using deep learning. Health Inf Sci Syst 2020; 8:32. [PMID: 33088488 DOI: 10.1007/s13755-020-00125-5] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Accepted: 09/23/2020] [Indexed: 01/03/2023] Open
Abstract
Diabetic eye disease is a collection of ocular problems that affect patients with diabetes. Timely screening therefore improves the chances of timely treatment and prevents permanent vision impairment. Retinal fundus images are a useful resource for ophthalmologists in diagnosing retinal complications. However, manual detection can be laborious and time-consuming, so an automated diagnosis system reduces ophthalmologists' time and workload. Deep Learning (DL) has already achieved state-of-the-art performance in classifying retinal fundus images as healthy or diseased, but the classification of mild and multi-class diseases remains an open challenge. This research therefore aimed to build an automated classification system covering two scenarios: (i) mild multi-class diabetic eye disease (DED), and (ii) multi-class DED. Our model was tested on various datasets annotated by an ophthalmologist. The experiments employed the top two convolutional neural network (CNN) models pretrained on ImageNet, together with several performance-improvement techniques, i.e., fine-tuning, optimization, and contrast enhancement. A maximum accuracy of 88.3% was obtained with the VGG16 model for multi-class classification and 85.95% for mild multi-class classification.
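Of the performance-improvement techniques the abstract lists, contrast enhancement is the most self-contained to illustrate. The paper does not name its exact method; the sketch below uses plain global histogram equalization on a toy grayscale image as a hypothetical stand-in.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Global histogram equalization for a grayscale image in [0, 1]:
    remap each pixel through the cumulative distribution of intensities.
    A simple stand-in; the paper does not specify its exact method."""
    q = np.clip((img * (levels - 1)).astype(int), 0, levels - 1)
    hist = np.bincount(q.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]          # normalize so the mapping lands in (0, 1]
    return cdf[q]           # each pixel is replaced by its CDF value

rng = np.random.default_rng(1)
low_contrast = 0.4 + 0.1 * rng.random((64, 64))  # intensities squeezed into [0.4, 0.5]
enhanced = equalize_histogram(low_contrast)
print(float(enhanced.max()))  # 1.0
```

After equalization the narrow intensity band is stretched across the full range, which is the effect such a preprocessing step contributes before CNN training.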
Affiliation(s)
- Rubina Sarki
- Victoria University, Ballarat Road, Melbourne, VIC 3011, Australia
- Hua Wang
- Victoria University, Ballarat Road, Melbourne, VIC 3011, Australia
- Yanchun Zhang
- Victoria University, Ballarat Road, Melbourne, VIC 3011, Australia
14
Xu C, Qi S, Feng J, Xia S, Kang Y, Yao Y, Qian W. DCT-MIL: Deep CNN transferred multiple instance learning for COPD identification using CT images. Phys Med Biol 2020; 65:145011. [PMID: 32235077 DOI: 10.1088/1361-6560/ab857d] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
While many pre-defined computed tomographic (CT) measures have been used to characterize chronic obstructive pulmonary disease (COPD), it remains challenging to represent pathological alterations that are multi-dimensional and highly spatially heterogeneous. Deep CNN transferred multiple instance learning (DCT-MIL) is proposed to identify COPD from CT images. After the lung is divided into eight sections along the axial direction, one random axial CT image is taken from each section as an instance. With one instance as input, the activations of neural layers of AlexNet trained on natural images are extracted as features. After dimension reduction through principal component analysis, the features of all instances are fed into three MIL methods: Citation k-Nearest-Neighbor (Citation-KNN), multiple instance support vector machine, and expectation-maximization diverse density. Moreover, the dependence of the resulting models' performance on the depth of the neural layer from which activations are extracted and on the number of features is investigated. The proposed DCT-MIL achieves exceptional performance, with an accuracy of 99.29% and an area under the curve of 0.9826, when using 100 principal components of features extracted from the fourth convolutional layer together with Citation-KNN. It outperforms not only DCT-MIL models with other settings and a pre-trained AlexNet fine-tuned on montages of eight lung CT images, but also other state-of-the-art methods. Deep CNN transferred multiple instance learning is thus well suited to identifying COPD from CT images, and can help find subgroups at high risk of COPD in large populations through CT scans ordered for lung cancer screening.
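The DCT-MIL pipeline described above (eight axial sections, one random instance per section, deep features, PCA reduction) can be sketched end to end. In this sketch AlexNet activations are replaced by simple intensity histograms and the MIL classifier is omitted; all shapes and parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def bag_from_volume(volume, n_sections=8):
    """Divide a CT volume into axial sections and draw one random
    slice per section as an instance, as the abstract describes."""
    sections = np.array_split(volume, n_sections, axis=0)
    return np.stack([s[rng.integers(len(s))] for s in sections])

def cheap_features(instances):
    """Stand-in for AlexNet layer activations: per-slice intensity
    histograms (the real pipeline uses deep CNN features)."""
    hists = [np.histogram(x, bins=32, range=(0, 1))[0] for x in instances]
    return np.stack(hists).astype(float)

def pca_reduce(X, k):
    """Project the feature matrix onto its top-k principal components
    via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

volume = rng.random((64, 16, 16))   # toy "CT volume": 64 axial slices
bag = bag_from_volume(volume)       # 8 instances of shape (16, 16)
feats = pca_reduce(cheap_features(bag), k=4)
print(bag.shape, feats.shape)       # (8, 16, 16) (8, 4)
```

The resulting per-bag instance features would then be passed to an MIL method such as Citation-KNN, which labels the whole bag (patient) rather than individual slices.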
Affiliation(s)
- Caiwen Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, People's Republic of China
15
Balakrishnan N, Rajendran A, Palanivel K. Meticulous fuzzy convolution C means for optimized big data analytics: adaptation towards deep learning. INT J MACH LEARN CYB 2019. [DOI: 10.1007/s13042-019-00945-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
16
Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey. Artif Intell Med 2019; 99:101701. [DOI: 10.1016/j.artmed.2019.07.009] [Citation(s) in RCA: 95] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2018] [Revised: 07/19/2019] [Accepted: 07/26/2019] [Indexed: 02/06/2023]
17
Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high ratio of glucose in the blood, which causes alterations in the retinal microvasculature. Because DR often progresses without early symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment can control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems have been developed to assist ophthalmologists in observing inter- and intra-variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe the CAD systems that have been developed with various computational intelligence and image processing techniques, and detail the limitations and future trends of current CAD systems to help researchers. Moreover, potential CAD systems are compared in terms of statistical parameters to evaluate them quantitatively. The comparison indicates that there is still a need for accurate CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
18
19
Park SJ, Shin JY, Kim S, Son J, Jung KH, Park KH. A Novel Fundus Image Reading Tool for Efficient Generation of a Multi-dimensional Categorical Image Database for Machine Learning Algorithm Training. J Korean Med Sci 2018; 33:e239. [PMID: 30344460 PMCID: PMC6193885 DOI: 10.3346/jkms.2018.33.e239] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Accepted: 07/10/2018] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND We describe a novel multi-step retinal fundus image reading system for providing high-quality large-scale data for machine learning algorithms, and assess grader variability in the large-scale dataset generated with this system. METHODS A 5-step retinal fundus image reading tool was developed that rates image quality, presence of abnormality, findings with location information, diagnoses, and clinical significance. Each image was evaluated by 3 different graders, and agreement among graders was evaluated for each decision. RESULTS A total of 234,242 readings of 79,458 images were collected from 55 licensed ophthalmologists over 6 months. Of the 34,364 images graded as abnormal by at least one rater, all three raters agreed on abnormality in 46.6%, while 69.9% of the images were rated as abnormal by two or more raters. The agreement rate of at least two raters on a given finding was 26.7%-65.2%, and the complete agreement rate of all three raters was 5.7%-43.3%. For diagnoses, agreement of at least two raters was 35.6%-65.6%, and the complete agreement rate was 11.0%-40.0%. Agreement on findings and diagnoses was higher when restricted to images with prior complete agreement on abnormality, and retinal/glaucoma specialists showed higher agreement on findings and diagnoses in their corresponding subspecialties. CONCLUSION This novel reading tool for retinal fundus images generated a large-scale, richly annotated dataset that can be utilized in future development of machine learning-based algorithms for automated identification of abnormal conditions and clinical decision support. These results emphasize the importance of addressing grader variability in algorithm development.
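The agreement statistics reported in RESULTS (all-three versus at-least-two rater agreement among flagged images) reduce to a small computation. The sketch below uses invented toy readings, not the study's data.

```python
def agreement_rates(ratings):
    """ratings: one (rater1, rater2, rater3) tuple of abnormality calls
    per image. Among images flagged by at least one rater, return the
    fraction where all three raters agree and where at least two do,
    mirroring the abstract's summary statistics."""
    flagged = [r for r in ratings if any(r)]
    all_three = sum(all(r) for r in flagged) / len(flagged)
    two_plus = sum(sum(r) >= 2 for r in flagged) / len(flagged)
    return all_three, two_plus

# Toy readings (hypothetical): True means the rater called the image abnormal.
readings = [
    (True, True, True),
    (True, True, False),
    (True, False, False),
    (False, False, False),  # flagged by no one; excluded from the rates
]
print(agreement_rates(readings))
```

On these toy readings, 1 of 3 flagged images has full agreement and 2 of 3 have at-least-two agreement, which is how the paper's 46.6% and 69.9% figures are constructed.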
Affiliation(s)
- Sang Jun Park
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Joo Young Shin
- Department of Ophthalmology, Dongguk University Ilsan Hospital, Goyang, Korea
- Kyu Hyung Park
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
20
Analysis on the potential of an EA–surrogate modelling tandem for deep learning parametrization: an example for cancer classification from medical images. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3709-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
21
Balakrishnan N, Nisi K. A deep analysis on optimization techniques for appropriate PID tuning to incline efficient artificial pancreas. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3687-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
22
Grid-Based Crime Prediction Using Geographical Features. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 2018. [DOI: 10.3390/ijgi7080298] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Machine learning is useful for grid-based crime prediction. Many previous studies have examined factors including time, space, and type of crime, but the geographic characteristics of the grid are rarely discussed, leaving prediction models unable to predict crime displacement. This study incorporates the concept of a criminal environment into grid-based crime prediction modeling and establishes a range of spatial-temporal features based on 84 types of geographic information, applying the Google Places API to theft data for Taoyuan City, Taiwan. The best model was found to be a Deep Neural Network, which outperforms the popular Random Decision Forest, Support Vector Machine, and K-Nearest Neighbor algorithms. After tuning, the F1 score improves by about 7% over our baseline, an 11-month moving average, on 100-by-100 grids. The experiments demonstrate the importance of the geographic feature design for improving performance and explanatory ability, and testing for crime displacement also shows that our model design outperforms the baseline.
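The baseline named in this abstract, an 11-month moving average per grid cell, is simple to sketch; the grid size and counts below are invented for illustration.

```python
import numpy as np

def moving_average_baseline(counts, window=11):
    """Predict each grid cell's next-month count as the mean of its
    previous `window` months, the baseline described in the abstract."""
    return counts[-window:].mean(axis=0)

rng = np.random.default_rng(2)
# Toy data: 24 months of theft counts on a 5-by-5 grid (values invented).
counts = rng.poisson(3, size=(24, 5, 5))
pred = moving_average_baseline(counts)
print(pred.shape)  # (5, 5)
```

The learned models in the paper are compared against exactly this kind of per-cell historical average, which has no access to the geographic features that drive the reported F1 improvement.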
23
Quellec G, Charrière K, Boudi Y, Cochener B, Lamard M. Deep image mining for diabetic retinopathy screening. Med Image Anal 2017; 39:178-193. [PMID: 28511066 DOI: 10.1016/j.media.2017.04.012] [Citation(s) in RCA: 148] [Impact Index Per Article: 21.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2016] [Revised: 04/18/2017] [Accepted: 04/27/2017] [Indexed: 01/29/2023]
Abstract
Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.
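The heatmap idea above — attributing an image-level prediction back to individual pixels — can be illustrated with plain input-gradient saliency on a toy linear scorer. The paper's generalized backpropagation for ConvNets is not reproduced here; everything in this sketch is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in "model": a single linear scoring layer over 16x16 pixels.
# The paper trains a ConvNet and generalizes backpropagation to produce
# high-quality heatmaps; this sketch only shows the core idea of
# attributing the image-level score to pixels via input gradients.
W = rng.normal(size=(16, 16))

def score(img):
    """Image-level prediction: a weighted sum of pixel intensities."""
    return float((W * img).sum())

def saliency(img, eps=1e-4):
    """Numerical input gradient of the score, taken pixel by pixel.
    For this linear scorer the result equals |W| elementwise."""
    base = score(img)
    grad = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            bumped = img.copy()
            bumped[i, j] += eps
            grad[i, j] = (score(bumped) - base) / eps
    return np.abs(grad)

img = rng.random((16, 16))
heatmap = saliency(img)
print(heatmap.shape)  # (16, 16)
```

Bright pixels in such a heatmap mark inputs that most influence the image-level decision, which is how a classifier trained only with image-level labels can double as a lesion detector.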
Affiliation(s)
- Gwenolé Quellec
- Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France.
- Katia Charrière
- IMT Atlantique, Département ITI, Technopôle Brest-Iroise, CS 83818, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Yassine Boudi
- IMT Atlantique, Département ITI, Technopôle Brest-Iroise, CS 83818, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Béatrice Cochener
- Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France; Service d'Ophtalmologie, CHRU Brest, 2 avenue Foch, Brest F-29200, France
- Mathieu Lamard
- Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France