1. Ishtiaq U, Abdullah ERMF, Ishtiaque Z. A Hybrid Technique for Diabetic Retinopathy Detection Based on Ensemble-Optimized CNN and Texture Features. Diagnostics (Basel) 2023; 13:1816. [PMID: 37238304] [DOI: 10.3390/diagnostics13101816]
Abstract
One of the most prevalent chronic conditions that can result in permanent vision loss is diabetic retinopathy (DR). Diabetic retinopathy occurs in five stages: no DR, and mild, moderate, severe, and proliferative DR. The early detection of DR is essential for preventing vision loss in diabetic patients. In this paper, we propose a method for the detection and classification of DR stages to determine whether patients are in any of the non-proliferative stages or in the proliferative stage. The hybrid approach based on image preprocessing and ensemble features is the foundation of the proposed classification method. We created a convolutional neural network (CNN) model from scratch for this study. Combining Local Binary Patterns (LBP) and deep learning features resulted in the creation of the ensemble features vector, which was then optimized using the Binary Dragonfly Algorithm (BDA) and the Sine Cosine Algorithm (SCA). Moreover, this optimized feature vector was fed to the machine learning classifiers. The SVM classifier achieved the highest classification accuracy of 98.85% on a publicly available dataset, i.e., Kaggle EyePACS. Rigorous testing and comparisons with state-of-the-art approaches in the literature indicate the effectiveness of the proposed methodology.
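The paper's full pipeline (a from-scratch CNN plus BDA/SCA feature selection) is not reproduced here, but the core fusion idea — concatenating handcrafted LBP texture features with learned deep features into one ensemble vector — can be sketched in plain NumPy. The `cnn_features` array below is a hypothetical stand-in for the CNN's output, not the authors' model:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern for an HxW grayscale array."""
    g = np.pad(gray, 1, mode="edge")
    c = g[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # neighbour offsets, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes -> 256-D texture descriptor."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return h / h.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
texture_vec = lbp_histogram(img)               # handcrafted features (256-D)
cnn_features = rng.normal(size=128)            # placeholder for deep features
ensemble_vec = np.concatenate([texture_vec, cnn_features])  # fused 384-D vector
```

In the paper, a vector of this kind is first pruned by the BDA/SCA optimizers and then passed to a classifier such as an SVM.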
Affiliation(s)
- Uzair Ishtiaq
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Department of Computer Science, COMSATS University Islamabad, Vehari Campus, Vehari 61100, Pakistan
- Erma Rahayu Mohd Faizal Abdullah
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Zubair Ishtiaque
- Department of Analytical, Biopharmaceutical and Medical Sciences, Atlantic Technological University, H91 T8NW Galway, Ireland
2. Li Z, Guo C, Nie D, Lin D, Cui T, Zhu Y, Chen C, Zhao L, Zhang X, Dongye M, Wang D, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Automated detection of retinal exudates and drusen in ultra-widefield fundus images based on deep learning. Eye (Lond) 2021; 36:1681-1686. [PMID: 34345030] [PMCID: PMC9307785] [DOI: 10.1038/s41433-021-01715-7]
Abstract
BACKGROUND Retinal exudates and/or drusen (RED) can be signs of many fundus diseases that can lead to irreversible vision loss. Early detection and treatment of these diseases are critical for improving vision prognosis. However, manual RED screening on a large scale is time-consuming and labour-intensive. Here, we aim to develop and assess a deep learning system for automated detection of RED using ultra-widefield fundus (UWF) images. METHODS A total of 26,409 UWF images from 14,994 subjects were used to develop and evaluate the deep learning system. The Zhongshan Ophthalmic Center (ZOC) dataset was selected to compare the performance of the system to that of retina specialists in RED detection. The saliency map visualization technique was used to understand which areas in the UWF image had the most influence on our deep learning system when detecting RED. RESULTS The system for RED detection achieved areas under the receiver operating characteristic curve of 0.994 (95% confidence interval [CI]: 0.991-0.996), 0.972 (95% CI: 0.957-0.984), and 0.988 (95% CI: 0.983-0.992) in three independent datasets. The performance of the system in the ZOC dataset was comparable to that of an experienced retina specialist. Regions of RED were highlighted by saliency maps in UWF images. CONCLUSIONS Our deep learning system is reliable in the automated detection of RED in UWF images. As a screening tool, our system may promote the early diagnosis and management of RED-related fundus diseases.
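The AUCs with 95% confidence intervals reported above can be computed for any classifier's scores with a rank-based AUC and a percentile bootstrap. A minimal NumPy sketch on synthetic data (not the study's own scores):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    stats, n = [], len(labels)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        l, s = labels[idx], scores[idx]
        if l.min() == l.max():          # resample lost one class; skip it
            continue
        stats.append(auc(l, s))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# synthetic example: positives score ~2 sd above negatives
labels = np.array([0] * 50 + [1] * 50)
scores = np.concatenate([np.random.default_rng(1).normal(0, 1, 50),
                         np.random.default_rng(2).normal(2, 1, 50)])
point = auc(labels, scores)
lo, hi = bootstrap_ci(labels, scores)
```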
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Chuan Chen
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, Florida, USA
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xulin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ping Zhang
- Xudong Ophthalmic Hospital, Inner Mongolia, China
- Yu Han
- EYE & ENT Hospital of Fudan University, Shanghai, China
- Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
3. Lin L, Li M, Huang Y, Cheng P, Xia H, Wang K, Yuan J, Tang X. The SUSTech-SYSU dataset for automated exudate detection and diabetic retinopathy grading. Sci Data 2020; 7:409. [PMID: 33219237] [PMCID: PMC7679367] [DOI: 10.1038/s41597-020-00755-0]
Abstract
Automated detection of exudates from fundus images plays an important role in diabetic retinopathy (DR) screening and evaluation, for which supervised or semi-supervised learning methods are typically preferred. However, a potential limitation of supervised and semi-supervised learning based detection algorithms is that they depend substantially on the sample size of training data and the quality of annotations, which is the fundamental motivation of this work. In this study, we construct a dataset containing 1219 fundus images (from DR patients and healthy controls) with annotations of exudate lesions. In addition to exudate annotations, we also provide four additional labels for each image: left-versus-right eye label, DR grade (severity scale) from three different grading protocols, the bounding box of the optic disc (OD), and fovea location. This dataset provides a great opportunity to analyze the accuracy and reliability of different exudate detection, OD detection, fovea localization, and DR classification algorithms. Moreover, it will facilitate the development of such algorithms in the realm of supervised and semi-supervised learning.
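The per-image labels described above (eye side, three DR grades, optic-disc bounding box, fovea location) map naturally onto a small record type. The field names below are illustrative only; the dataset's actual file layout should be taken from its own documentation:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FundusAnnotation:
    """One image's labels as described for the SUSTech-SYSU dataset.

    Field names are hypothetical; consult the dataset's files for the
    real schema.
    """
    image_id: str
    eye: str                            # "left" or "right"
    dr_grades: Tuple[int, int, int]     # severity under three grading protocols
    od_bbox: Tuple[int, int, int, int]  # optic-disc bounding box (x, y, w, h)
    fovea_xy: Tuple[int, int]           # fovea centre location in pixels
    has_exudate_mask: bool              # pixel-level exudate annotation present?

rec = FundusAnnotation("img_0001", "left", (2, 2, 1),
                       (410, 300, 120, 120), (512, 384), True)
```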
Affiliation(s)
- Li Lin
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China; School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, 510000, China
- Meng Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510000, China
- Yijin Huang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China
- Pujin Cheng
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China
- Honghui Xia
- Department of Ophthalmology, Gaoyao People's Hospital, Zhaoqing, 526000, China
- Kai Wang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, 510000, China
- Jin Yuan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510000, China
- Xiaoying Tang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China
4. Zhang X, Chen W, Li G, Li W. The Use of Texture Features to Extract and Analyze Useful Information from Retinal Images. Comb Chem High Throughput Screen 2020; 23:313-318. [DOI: 10.2174/1386207322666191022123445]
Abstract
Background: The analysis of retinal images can help to detect retinal abnormalities caused by cardiovascular and retinal disorders.
Objective: In this paper, we propose methods based on texture features for mining and analyzing information from retinal images.
Methods: Recognition of the retinal mask region is a prerequisite for retinal image processing, yet no existing method recognizes this region automatically. By quantifying and analyzing texture features, a method is proposed to identify the retinal region automatically: the boundary of the circular retinal region is detected from the image texture contrast feature, the closed circular area is then filled, and the circular retinal mask region is obtained.
Results: The experimental results show that the contrast-feature-based method can detect the retinal region automatically. The average accuracy of retinal mask region detection on images from the Digital Retinal Images for Vessel Extraction (DRIVE) database was 99.34%.
Conclusion: This is the first time these texture features of retinal images have been analyzed, and texture features are used to recognize the circular retinal region automatically.
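The abstract does not spell out the exact texture contrast feature, so the sketch below substitutes a simple brightness threshold followed by hole filling to recover a circular retinal mask from a synthetic fundus-like image. It illustrates the detect-boundary-then-fill idea, not the authors' exact method:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def retinal_mask(gray, thresh=10):
    """Foreground (retina) mask: threshold away the dark background,
    then fill any holes left inside the circular region."""
    fg = gray > thresh
    return binary_fill_holes(fg)

# synthetic fundus: bright disc of radius 100 on a black background
yy, xx = np.mgrid[0:256, 0:256]
disc = ((yy - 128) ** 2 + (xx - 128) ** 2) <= 100 ** 2
img = np.where(disc, 120, 0).astype(np.uint8)
img[120:136, 120:136] = 0          # dark lesion-like hole inside the disc

mask = retinal_mask(img)
accuracy = (mask == disc).mean()   # pixel agreement with the true region
```

On this toy image the hole inside the disc is filled, so the recovered mask matches the true circular region exactly.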
Affiliation(s)
- Xiaobo Zhang
- College of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
- Weiyang Chen
- College of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
- Gang Li
- Shandong Computer Science Center (National Supercomputer Center in Jinan), Shandong Provincial Key Laboratory of Computer Networks, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
- Weiwei Li
- College of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
5. Karkuzhali S, Manimegalai D. Distinguising Proof of Diabetic Retinopathy Detection by Hybrid Approaches in Two Dimensional Retinal Fundus Images. J Med Syst 2019; 43:173. [PMID: 31069550] [DOI: 10.1007/s10916-019-1313-6]
Abstract
Diabetes is characterized by persistently high blood glucose, as the body must keep insulin within a very narrow range. Patients affected by diabetes over a long period can develop an eye disease called diabetic retinopathy (DR). The optic disc, a retinal landmark, is predicted and masked to reduce false positives in exudate detection. Abnormalities such as exudates, microaneurysms, and hemorrhages are segmented to classify the various stages of DR. The proposed approach separates the retinal landmarks from the retinal lesions for classification of the DR stages. Segmentation algorithms including Gabor double-sided hysteresis thresholding, maximum intensity variation, inverse surface adaptive thresholding, a multi-agent approach, and toboggan segmentation are used to detect and segment blood vessels (BVs), optic discs (ODs), exudates (EXs), microaneurysms (MAs), and hemorrhages (HAs). The feature vector formation and machine learning algorithms used to classify the stages of DR are evaluated on images from various retinal databases, and their performance measures are presented in this paper.
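Of the segmentation algorithms listed, hysteresis thresholding is the most self-contained to illustrate: pixels above a high threshold act as seeds, and weaker pixels survive only if their connected component touches a seed. A minimal NumPy/SciPy sketch (plain intensities, not the paper's Gabor-filtered variant):

```python
import numpy as np
from scipy.ndimage import label

def hysteresis_threshold(img, low, high):
    """Keep pixels > low only when their connected component
    contains at least one pixel > high."""
    weak = img > low
    strong = img > high
    labels, n = label(weak)                  # 4-connected components of the weak mask
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True   # components touched by a strong pixel
    keep[0] = False                          # label 0 is background
    return keep[labels]

# tiny example: the left blob contains a strong pixel (9), the right does not
img = np.array([[0, 5, 5, 0, 0],
                [0, 5, 9, 0, 0],
                [0, 0, 0, 0, 5],
                [0, 0, 0, 0, 5]], dtype=float)
mask = hysteresis_threshold(img, low=4, high=8)
```

Only the left blob survives: it is connected to the strong pixel, while the right blob of weak pixels is discarded.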
Collapse
Affiliation(s)
- Karkuzhali S
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputtur, Tamilnadu, India
- Manimegalai D
- Department of Information Technology, National Engineering College, Kovilpatti, Tamilnadu, India