1. Chandhakanond P, Aimmanee P. Diabetic retinopathy detection via exudates and hemorrhages segmentation using iterative NICK thresholding, watershed, and Chi2 feature ranking. Sci Rep 2025; 15:5541. [PMID: 39953248] [PMCID: PMC11829032] [DOI: 10.1038/s41598-025-90048-6]
Abstract
Diabetic retinopathy (DR) is a common eye condition that affects one-third of patients with diabetes, leading to vision loss in both working-age and elderly populations. Early detection and intervention can improve patient outcomes and reduce the burden on healthcare systems. By developing robust computational techniques, we can advance automated systems for screening and managing diabetic retinopathy. Our specific goal is to detect and segment exudates and hemorrhages in fundus images. In this study, we used the iterative NICK thresholding region growing (INRG) method as a basis. To further improve our results in different applications, we incorporated the watershed separation algorithm (WS) and the Chi2 feature selection method (Chi2) on expanded feature sets. These algorithms were combined with the INRG method to segment hemorrhages and exudates. The segmentation results were used to detect the hemorrhages and exudates, which in turn were used to detect diabetic retinopathy. To evaluate our approach, we compared the results against two traditional methods and two state-of-the-art methods, including the original INRG-HSV model. In terms of hemorrhage segmentation, the INRG with WS (INRG-WS) achieved the highest F-measure of 64.76%, outperforming all other comparative methods. For exudate segmentation, the model INRG-WS-Chi2, which used the combined INRG method with WS and Chi2 ranking on expanded feature sets, performed the best. When it came to hemorrhage detection, the INRG method without WS and using only hue, saturation, and brightness (INRG-HSV) achieved the highest accuracy of 90.27% with the lowest false negative rate (FNR) of 9.39%. For exudate detection, the model INRG-WS-HSV, which used the combined INRG method with WS and only hue, saturation, and brightness, offered the highest accuracy of 88.14% and the lowest FNR of 8.75%. To detect diabetic retinopathy, we compared the performance of our best hemorrhage detection model (INRG-HSV) and exudate detection model (INRG-WS-HSV) against a state-of-the-art method. Our models significantly outperformed the state-of-the-art method (DT-HSVE), achieving an accuracy of 89.89% and an FNR of 3.66%.
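For context on the thresholding step named in this abstract, the sketch below shows a plain NumPy/SciPy implementation of NICK local thresholding, which the INRG method iterates over; the window size, k value, and final comparison are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nick_threshold(gray, window=25, k=-0.1):
    """Per-pixel NICK threshold T = m + k * sqrt((sum(p^2) - m^2) / NP), where m is the
    local mean and NP the number of pixels in the (window x window) neighborhood."""
    gray = gray.astype(np.float64)
    n_pix = window * window
    mean = uniform_filter(gray, size=window)                 # local mean m
    sum_sq = uniform_filter(gray ** 2, size=window) * n_pix  # local sum of p^2
    return mean + k * np.sqrt(np.maximum((sum_sq - mean ** 2) / n_pix, 0.0))

# Pixels darker than the local threshold become lesion candidate seeds.
gray = np.random.rand(512, 512)                              # stand-in for a fundus channel
candidates = gray < nick_threshold(gray)
```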
Affiliation(s)
- Patsaphon Chandhakanond: Department of Information Computer and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tivanont Rd, Bangkadi, Meung, 12000, Patumthani, Thailand
- Pakinee Aimmanee: Department of Information Computer and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tivanont Rd, Bangkadi, Meung, 12000, Patumthani, Thailand
2. Amin J, Shazadi I, Sharif M, Yasmin M, Almujally NA, Nam Y. Localization and grading of NPDR lesions using ResNet-18-YOLOv8 model and informative features selection for DR classification based on transfer learning. Heliyon 2024; 10:e30954. [PMID: 38779022] [PMCID: PMC11109848] [DOI: 10.1016/j.heliyon.2024.e30954]
Abstract
Complications of diabetes lead to diabetic retinopathy (DR), which affects vision. Computerized methods play a significant role in detecting DR at an early stage to prevent vision loss. Therefore, a method is proposed in this study that consists of three models for localization, segmentation, and classification. A novel technique is designed by combining pre-trained ResNet-18 and YOLOv8 models, based on the selection of optimum layers, for the localization of DR lesions. The localized images are passed to the designed semantic segmentation model on selected layers and trained with optimized learning hyperparameters. The segmentation model performance is evaluated on the Grand Challenge IDRiD segmentation dataset. The achieved results are computed in terms of mean IoU of 0.95, 0.94, 0.96, 0.94, and 0.95 on OD, SoftExs, HardExs, HAE, and MAs, respectively. Another classification model is developed in which deep features are derived from the pre-trained EfficientNet-b0 model and optimized using a genetic algorithm (GA) based on the selected parameters for grading of NPDR lesions. The proposed model achieved greater than 98% accuracy, which is superior to previous methods.
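As a minimal illustration of the pre-trained backbone reuse described here, the sketch below pulls the convolutional trunk of an ImageNet-pretrained ResNet-18 with torchvision; the input size is an assumption, and the paper's actual ResNet-18-YOLOv8 coupling and layer selection are not reproduced.

```python
import torch
from torch import nn
from torchvision import models

# Convolutional trunk of an ImageNet-pretrained ResNet-18 reused as a feature backbone
# (the final average-pooling and fully connected layers are dropped).
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone = nn.Sequential(*list(resnet.children())[:-2])

x = torch.randn(1, 3, 512, 512)   # illustrative fundus-patch-sized input
with torch.no_grad():
    fmap = backbone(x)            # feature map of shape (1, 512, 16, 16)
print(fmap.shape)
```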
Affiliation(s)
- Javaria Amin: Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Irum Shazadi: Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Muhammad Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Mussarat Yasmin: Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Nouf Abdullah Almujally: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Yunyoung Nam: Department of ICT Convergence, Soonchunhyang University, Asan, 31538, South Korea
3. Steffi S, Sam Emmanuel WR. Resilient back-propagation machine learning-based classification on fundus images for retinal microaneurysm detection. Int Ophthalmol 2024; 44:91. [PMID: 38367192] [DOI: 10.1007/s10792-024-02982-5]
Abstract
BACKGROUND The timely diagnosis of medical conditions, particularly diabetic retinopathy, relies on the identification of retinal microaneurysms. However, the commonly used retinography method poses a challenge due to the diminutive dimensions and limited differentiation of microaneurysms in images. PROBLEM STATEMENT Automated identification of microaneurysms becomes crucial, necessitating the use of comprehensive ad-hoc processing techniques. Although fluorescein angiography enhances detectability, its invasiveness limits its suitability for routine preventative screening. OBJECTIVE This study proposes a novel approach for detecting retinal microaneurysms using a fundus scan, leveraging circular reference-based shape features (CR-SF) and radial gradient-based texture features (RG-TF). METHODOLOGY The proposed technique involves extracting CR-SF and RG-TF for each candidate microaneurysm, employing a robust back-propagation machine learning method for training. During testing, extracted features from test images are compared with training features to categorize microaneurysm presence. RESULTS The experimental assessment utilized four datasets (MESSIDOR, Diaretdb1, e-ophtha-MA, and ROC), employing various measures. The proposed approach demonstrated high accuracy (98.01%), sensitivity (98.74%), specificity (97.12%), and area under the curve (91.72%). CONCLUSION The presented approach showcases a successful method for detecting retinal microaneurysms using a fundus scan, providing promising accuracy and sensitivity. This non-invasive technique holds potential for effective screening in diabetic retinopathy and other related medical conditions.
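A minimal sketch of a resilient-backpropagation classifier over hand-crafted candidate features is shown below, using PyTorch's Rprop optimizer as a stand-in; the feature dimension, network size, and dummy labels are assumptions, not the paper's CR-SF/RG-TF pipeline.

```python
import torch
from torch import nn

# Small fully connected classifier over per-candidate feature vectors,
# trained with resilient backpropagation (Rprop).
model = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

features = torch.randn(256, 24)                       # dummy candidate feature vectors
labels = torch.randint(0, 2, (256, 1)).float()        # 1 = microaneurysm, 0 = non-microaneurysm

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```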
Affiliation(s)
- S Steffi: Department of Computer Science, Nesamony Memorial Christian College Affiliated to Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli, Tamil Nadu, 627012, India
- W R Sam Emmanuel: Department of PG Computer Science, Nesamony Memorial Christian College Affiliated to Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli, Tamil Nadu, 627012, India
4. Soares I, Castelo-Branco M, Pinheiro A. Microaneurysms detection in retinal images using a multi-scale approach. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104184]
5. Chandhakanond P, Aimmanee P. Hemorrhage segmentation in mobile-phone retinal images using multiregion contrast enhancement and iterative NICK thresholding region growing. Sci Rep 2022; 12:21513. [PMID: 36513802] [PMCID: PMC9747926] [DOI: 10.1038/s41598-022-26073-6]
Abstract
Hemorrhage segmentation in retinal images is challenging because the sizes and shapes vary for each hemorrhage, the intensity is close to that of the blood vessels and macula, and the intensity is often nonuniform, especially for large hemorrhages. Hemorrhage segmentation in mobile-phone retinal images is even more challenging because mobile-phone retinal images usually have poorer contrast, more shadows, and more uneven illumination than those obtained from a table-top ophthalmoscope. In this work, the proposed KMMRC-INRG method enhances hemorrhage segmentation performance under nonuniform intensity and poor lighting conditions in mobile-phone images. It improves the uneven illumination of mobile-phone retinal images using a proposed method, K-mean multiregion contrast enhancement (KMMRC). It also enhances the boundary segmentation of the hemorrhage blobs using a novel iterative NICK thresholding region growing (INRG) method before applying an SVM classifier based on hue, saturation, and brightness features. This approach can achieve recall, precision, F1-measure, and IoU as high as 80.18%, 91.26%, 85.36%, and 80.08%, respectively. The F1-measure score improves by up to 19.02% compared to the state-of-the-art method DT-HSVE tested on the same full dataset, and by as much as 58.88% when considering only images with large-size hemorrhages.
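The final classification stage described here (an SVM over hue, saturation, and brightness features of each candidate blob) can be sketched as below; the dummy training data and RBF kernel are assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.color import rgb2hsv
from sklearn.svm import SVC

def hsv_blob_features(rgb_image, blob_mask):
    """Mean hue, saturation, and value over one candidate blob (boolean mask)."""
    hsv = rgb2hsv(rgb_image)
    return hsv[blob_mask].mean(axis=0)     # 3-element feature vector

# Train an SVM on per-blob HSV features; random data stands in for labeled blobs.
X = np.random.rand(200, 3)                 # hue, saturation, brightness per blob
y = np.random.randint(0, 2, 200)           # 1 = hemorrhage, 0 = non-hemorrhage
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```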
Affiliation(s)
- Patsaphon Chandhakanond: School of Information, Computer, and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tivanont Rd, Bangkadi, Meung, Patumthani, 12000, Thailand
- Pakinee Aimmanee: School of Information, Computer, and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tivanont Rd, Bangkadi, Meung, Patumthani, 12000, Thailand
6. Yang Y, Shang F, Wu B, Yang D, Wang L, Xu Y, Zhang W, Zhang T. Robust Collaborative Learning of Patch-Level and Image-Level Annotations for Diabetic Retinopathy Grading From Fundus Image. IEEE Trans Cybern 2022; 52:11407-11417. [PMID: 33961571] [DOI: 10.1109/tcyb.2021.3062638]
Abstract
Diabetic retinopathy (DR) grading from fundus images has attracted increasing interest in both academic and industrial communities. Most convolutional neural network-based algorithms treat DR grading as a classification task via image-level annotations. However, these algorithms have not fully explored the valuable information in the DR-related lesions. In this article, we present a robust framework, which collaboratively utilizes patch-level and image-level annotations, for DR severity grading. By an end-to-end optimization, this framework can bidirectionally exchange the fine-grained lesion and image-level grade information. As a result, it exploits more discriminative features for DR grading. The proposed framework shows better performance than the recent state-of-the-art algorithms and three clinical ophthalmologists with over nine years of experience. By testing on datasets of different distributions (such as label and camera), we prove that our algorithm is robust when facing image quality and distribution variations that commonly exist in real-world practice. We inspect the proposed framework through extensive ablation studies to indicate the effectiveness and necessity of each motivation. The code and some valuable annotations are now publicly available.
7. Zhang X, Peng Z, Meng M, Wu J, Han Y, Zhang Y, Yang J, Zhao Q. ID-NET: Inception deconvolutional neural network for multi-class classification in retinal fundus image. J Mech Med Biol 2022. [DOI: 10.1142/s0219519422400292]
8. Deep Red Lesion Classification for Early Screening of Diabetic Retinopathy. Mathematics 2022. [DOI: 10.3390/math10050686]
Abstract
Diabetic retinopathy (DR) is an asymptomatic and vision-threatening complication among working-age adults. To prevent blindness, a deep convolutional neural network (CNN) based diagnosis can help to classify less-discriminative and small-sized red lesions in early screening of DR patients. However, training deep models with minimal data is a challenging task. Fine-tuning through transfer learning is a useful alternative, but performance degradation, overfitting, and domain adaptation issues further demand architectural amendments to effectively train deep models. Various pre-trained CNNs are fine-tuned on an augmented set of image patches. The best-performing ResNet50 model is modified by introducing reinforced skip connections, a global max-pooling layer, and the sum-of-squared-error loss function. The performance of the modified model (DR-ResNet50) on five public datasets is found to be better than state-of-the-art methods in terms of well-known metrics. The highest scores (0.9851, 0.991, 0.991, 0.991, 0.991, 0.9939, 0.0029, 0.9879, and 0.9879) for sensitivity, specificity, AUC, accuracy, precision, F1-score, false-positive rate, Matthews correlation coefficient, and kappa coefficient are obtained within a 95% confidence interval for unseen test instances from e-Ophtha_MA. This high sensitivity and low false-positive rate demonstrate the worth of the proposed framework. It is suitable for early screening due to its performance, simplicity, and robustness.
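A minimal sketch of the kind of head modification mentioned (global max pooling plus a sum-of-squared-error loss on a ResNet50 trunk) is given below; the two-class output and the loss over softmax scores are assumptions, not the authors' DR-ResNet50.

```python
import torch
from torch import nn
from torchvision import models

# ImageNet-pretrained ResNet50 trunk with its classification head replaced by
# global max pooling and a small linear classifier.
trunk = nn.Sequential(*list(models.resnet50(weights=models.ResNet50_Weights.DEFAULT).children())[:-2])
head = nn.Sequential(nn.AdaptiveMaxPool2d(1), nn.Flatten(), nn.Linear(2048, 2))
model = nn.Sequential(trunk, head)

def sse_loss(logits, targets_onehot):
    # Sum-of-squared-error between softmax scores and one-hot labels.
    return ((torch.softmax(logits, dim=1) - targets_onehot) ** 2).sum()

x = torch.randn(2, 3, 224, 224)              # dummy image-patch batch
y = torch.eye(2)[torch.tensor([0, 1])]       # one-hot labels
loss = sse_loss(model(x), y)
loss.backward()
```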
9. Mateen M, Malik TS, Hayat S, Hameed M, Sun S, Wen J. Deep Learning Approach for Automatic Microaneurysms Detection. Sensors (Basel) 2022; 22:542. [PMID: 35062506] [PMCID: PMC8781897] [DOI: 10.3390/s22020542]
Abstract
In diabetic retinopathy (DR), microaneurysms (MAs) are considered the earliest signs that may lead toward complete vision loss. These MAs are almost circular in shape, darkish in color, and tiny in size, which means they may be missed in manual analysis by ophthalmologists. Accurate early detection of microaneurysms is therefore helpful to treat DR before non-reversible blindness occurs. In the proposed method, early detection of MAs is performed using a hybrid feature embedding approach of pre-trained CNN models, namely VGG-19 and Inception-v3. The performance of the proposed approach was evaluated using publicly available datasets, namely "E-Ophtha" and "DIARETDB1", and achieved 96% and 94% classification accuracy, respectively. Furthermore, the developed approach outperformed the state-of-the-art approaches in terms of sensitivity and specificity for microaneurysm detection.
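A hedged sketch of hybrid feature embedding from two pre-trained CNNs is shown below: global feature vectors from VGG-19 and Inception-v3 are pooled and concatenated for a downstream MA/non-MA classifier; the pooling choice and the 299x299 input are assumptions, not the paper's exact fusion.

```python
import torch
from torch import nn
from torchvision import models

# Pool global feature vectors from two pretrained backbones and concatenate them.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features
inception = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
inception.fc = nn.Identity()                     # expose the 2048-d pooled features
inception.eval()

gap = nn.AdaptiveAvgPool2d(1)

def hybrid_embedding(x_299):
    with torch.no_grad():
        f1 = gap(vgg(x_299)).flatten(1)          # 512-d VGG-19 features
        f2 = inception(x_299)                    # 2048-d Inception-v3 features
    return torch.cat([f1, f2], dim=1)            # 2560-d hybrid embedding

x = torch.randn(1, 3, 299, 299)                  # Inception-v3 expects 299x299 input
print(hybrid_embedding(x).shape)                 # torch.Size([1, 2560])
```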
Affiliation(s)
- Muhammad Mateen: Department of Computer Science, Air University Multan Campus, Multan 60000, Pakistan
- Tauqeer Safdar Malik: Department of Computer Science, Air University Multan Campus, Multan 60000, Pakistan
- Shaukat Hayat: Department of Computer Science, Iqra National University, Peshawar 25000, Pakistan
- Musab Hameed: Department of Electrical & Computer Engineering, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan
- Song Sun: School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
- Junhao Wen: School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
10. Feroui A, Messadi M, Lazouni A, Bessaid A. Computer hybrid system of hemorrhage (HEs) detection used for aided diagnosis of diabetic retinopathy. J Mech Med Biol 2021. [DOI: 10.1142/s0219519421500445]
Abstract
Diabetes cause’s metabolic and physiological abnormalities in the retina and the changes suggest a role for inflammation in the development of diabetic retinopathy. Abnormal blood vessels can form in the back of the eye of a person with diabetes. These new blood vessels are weaker and prone to breaking and causing hemorrhage (HEs). Diabetic retinopathy (DR) accounts for 31.5–54% of all cases of vitreous hemorrhage in adults in the world. Therefore, detection of HEs is still a challenging factor task for computer-aided diagnostics of DR. Many researchers have developed advanced algorithms of hemorrhages detection using fundus images. In this paper, a robust and computationally efficient approach for HEs with different shape and size detection and classification is presented. First, brightness correction and contrast enhancement are applied to fundus images. Second, candidate hemorrhages are extracted by using an unsupervised classification algorithm. Third, an approach based on mathematical morphology is carried out for vascular network and macula segmentation. Finally, a total of 13 HEs features are considered in this study and selected for classification. The proposed method is evaluated on 419 fundus images of DIARETDB0, DIARETDB1 and MESSIDOR databases. Experimental results show that overall average sensitivity, specificity, predictive value and accuracy for hemorrhage in lesion level are 98.90%, 99.66%, 97.63% and 99.56%, respectively. The results show that the proposed method outperforms other state-of-the-art methods in detection of hemorrhages. These results indicate that this new method may improve the performance of diagnosis of DR system.
Affiliation(s)
- A. Feroui: Biomedical Laboratory, Department of Electrics and Electronics, Technology Faculty, University of Tlemcen 13000, Algeria
- M. Messadi: Biomedical Laboratory, Department of Electrics and Electronics, Technology Faculty, University of Tlemcen 13000, Algeria
- A. Lazouni: Biomedical Laboratory, Department of Electrics and Electronics, Technology Faculty, University of Tlemcen 13000, Algeria
- A. Bessaid: Biomedical Laboratory, Department of Electrics and Electronics, Technology Faculty, University of Tlemcen 13000, Algeria
11. Multi-Scale Feature Fusion with Adaptive Weighting for Diabetic Retinopathy Severity Classification. Electronics 2021. [DOI: 10.3390/electronics10121369]
Abstract
Diabetic retinopathy (DR) is the prime cause of blindness in people who suffer from diabetes. Automation of DR diagnosis could help many patients avoid the risk of blindness by identifying the disease and making judgments at an early stage. The main focus of the present work is to propose a feasible scheme for DR severity level detection with the MobileNetV3 backbone network, based on multi-scale features of the retinal fundus image, and to improve the classification performance of the model. Firstly, a special residual attention module, RCAM, was designed for multi-scale feature extraction from different convolution layers. Then, feature fusion through an adaptive weighting operation was carried out in each layer. The corresponding weight of each convolution block is updated automatically during model training, followed by global average pooling (GAP) and a division process to avoid over-fitting of the model and remove non-critical features. In addition, focal loss is used as the loss function due to the data imbalance of DR images. The experimental results on the Kaggle APTOS 2019 contest dataset show that our proposed method for DR severity classification achieves an accuracy of 85.32%, a kappa statistic of 77.26%, and an AUC of 0.97. The comparison results also indicate that the obtained model is superior to existing models and presents superior classification performance on the dataset.
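The focal loss mentioned for handling class imbalance can be written compactly as below; gamma = 2 is the commonly used default and an assumption here, and no per-class alpha weighting is included.

```python
import torch
from torch import nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Multi-class focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).
    gamma down-weights easy examples, which helps with imbalanced DR grades."""
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        log_probs = F.log_softmax(logits, dim=1)
        p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
        return (-(1.0 - p_t) ** self.gamma * p_t.log()).mean()

# Example: 5 DR severity grades (0-4), batch of 8.
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
loss = FocalLoss(gamma=2.0)(logits, targets)
loss.backward()
```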
12. Hemorrhage Detection Based on 3D CNN Deep Learning Framework and Feature Fusion for Evaluating Retinal Abnormality in Diabetic Patients. Sensors (Basel) 2021; 21:3865. [PMID: 34205120] [PMCID: PMC8199947] [DOI: 10.3390/s21113865]
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is hemorrhages in the retina. Therefore, we propose a new method for accurate hemorrhage detection from retinal fundus images. First, the proposed method uses a modified contrast enhancement method to improve the edge details of the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages. A modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally, the best features are selected by using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance, in terms of visual quality and quantitative analysis with high accuracy, in comparison with state-of-the-art methods.
13. Bhardwaj C, Jain S, Sood M. Deep Learning-Based Diabetic Retinopathy Severity Grading System Employing Quadrant Ensemble Model. J Digit Imaging 2021; 34:440-457. [PMID: 33686525] [DOI: 10.1007/s10278-021-00418-5]
Abstract
Diabetic retinopathy involves the deterioration of retinal blood vessels, leading to a serious complication affecting the eyes. Automated DR diagnosis frameworks are critically important for the early identification and detection of these eye-related problems, helping ophthalmic experts provide a second opinion for effective treatment. Deep learning techniques have evolved as an improvement over conventional approaches, which depend on handcrafted feature extraction. To address the issue of proficient DR discrimination, the authors have proposed a quadrant ensemble automated DR grading approach implementing the InceptionResNet-V2 deep neural network framework. The presented model incorporates histogram equalization, optic disc localization, and quadrant cropping along with a data augmentation step for improving the network performance. An accuracy of 93.33% is observed for the proposed framework, with a reduction of 0.325 in the cross-entropy loss on the MESSIDOR benchmark dataset; validation using the more recent IDRiD dataset establishes its generalization ability. An accuracy improvement of 13.58% is observed when the proposed QEIRV-2 model is compared with the classical Inception-V3 CNN model. To justify the viability of the proposed framework, its performance is compared with existing state-of-the-art approaches, and a 25.23% accuracy improvement is observed.
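The quadrant-cropping step can be illustrated with simple array slicing, as in the sketch below; cropping around a localized reference point (e.g., the optic disc center) and the image size are assumptions, not the paper's exact procedure.

```python
import numpy as np

def quadrant_crops(image, cx, cy):
    """Split a fundus image into four quadrants around a reference point (cx, cy),
    so each quadrant can be graded separately and the predictions ensembled."""
    return [image[:cy, :cx],   # top-left
            image[:cy, cx:],   # top-right
            image[cy:, :cx],   # bottom-left
            image[cy:, cx:]]   # bottom-right

img = np.zeros((1024, 1536, 3), dtype=np.uint8)   # dummy fundus image
crops = quadrant_crops(img, cx=768, cy=512)
print([c.shape for c in crops])
```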
Affiliation(s)
- Charu Bhardwaj: Department of Electronics and Communication Engineering, JUIT Waknaghat, Solan, HP, India
- Shruti Jain: Department of Electronics and Communication Engineering, JUIT Waknaghat, Solan, HP, India
14. Qummar S, Khan FG, Shah S, Khan A, Din A, Gao J. Deep Learning Techniques for Diabetic Retinopathy Detection. Curr Med Imaging 2021; 16:1201-1213. [DOI: 10.2174/1573405616666200213114026]
Abstract
Diabetes occurs due to an excess of glucose in the blood that may affect many organs of the body. Elevated blood sugar causes many problems, including Diabetic Retinopathy (DR). DR occurs due to the mutilation of the blood vessels in the retina. The manual detection of DR by ophthalmologists is complicated and time-consuming. Therefore, automatic detection is required, and recently different machine and deep learning techniques have been applied to detect and classify DR. In this paper, we conducted a study of the various techniques available in the literature for the identification/classification of DR and the strengths and weaknesses of the available datasets for each method, and we provide future directions. Moreover, we also discuss the different steps of detection, namely segmentation of blood vessels in the retina, detection of lesions, and other abnormalities of DR.
Affiliation(s)
- Sehrish Qummar: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Fiaz Gul Khan: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Sajid Shah: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Ahmad Khan: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Ahmad Din: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Jinfeng Gao: Department of Information Engineering, Huanghuai University, Henan, China
15. Romero-Oraá R, García M, Oraá-Pérez J, López-Gálvez MI, Hornero R. Effective Fundus Image Decomposition for the Detection of Red Lesions and Hard Exudates to Aid in the Diagnosis of Diabetic Retinopathy. Sensors (Basel) 2020; 20:6549. [PMID: 33207825] [PMCID: PMC7698181] [DOI: 10.3390/s20226549]
Abstract
Diabetic retinopathy (DR) is characterized by the presence of red lesions (RLs), such as microaneurysms and hemorrhages, and bright lesions, such as exudates (EXs). Early DR diagnosis is paramount to prevent serious sight damage. Computer-assisted diagnostic systems are based on the detection of those lesions through the analysis of fundus images. In this paper, a novel method is proposed for the automatic detection of RLs and EXs. As the main contribution, the fundus image was decomposed into various layers, including the lesion candidates, the reflective features of the retina, and the choroidal vasculature visible in tigroid retinas. We used a proprietary database containing 564 images, randomly divided into a training set and a test set, and the public database DiaretDB1 to verify the robustness of the algorithm. Lesion detection results were computed per pixel and per image. Using the proprietary database, 88.34% per-image accuracy (ACCi), 91.07% per-pixel positive predictive value (PPVp), and 85.25% per-pixel sensitivity (SEp) were reached for the detection of RLs. Using the public database, 90.16% ACCi, 96.26% PPVp, and 84.79% SEp were obtained. As for the detection of EXs, 95.41% ACCi, 96.01% PPVp, and 89.42% SEp were reached with the proprietary database. Using the public database, 91.80% ACCi, 98.59% PPVp, and 91.65% SEp were obtained. The proposed method could be useful to aid in the diagnosis of DR, reducing the workload of specialists and improving the attention to diabetic patients.
Affiliation(s)
- Roberto Romero-Oraá: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
- María García: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
- Javier Oraá-Pérez: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain
- María I. López-Gálvez: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain; Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, 47003 Valladolid, Spain; Instituto Universitario de Oftalmobiología Aplicada (IOBA), Universidad de Valladolid, 47011 Valladolid, Spain
- Roberto Hornero: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain; Instituto de Investigación en Matemáticas (IMUVA), Universidad de Valladolid, 47011 Valladolid, Spain
16. Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction. Electronics 2020. [DOI: 10.3390/electronics9060914]
Abstract
Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world. It is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that further helps to improve the performance of DR recognition models. To extract the optimal representation, features extracted from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. These final representations are used to train a Deep Neural Network (DNN) for DR identification and severity level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to a better representation than using features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we notice that cross average pooling based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82 for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.
17. Long S, Chen J, Hu A, Liu H, Chen Z, Zheng D. Microaneurysms detection in color fundus images using machine learning based on directional local contrast. Biomed Eng Online 2020; 19:21. [PMID: 32295576] [PMCID: PMC7161183] [DOI: 10.1186/s12938-020-00766-3]
Abstract
Background: As one of the major complications of diabetes, diabetic retinopathy (DR) is a leading cause of visual impairment and blindness due to delayed diagnosis and intervention. Microaneurysms appear as the earliest symptom of DR. Accurate and reliable detection of microaneurysms in color fundus images has great importance for DR screening. Methods: A microaneurysm detection method using machine learning based on directional local contrast (DLC) is proposed for the early diagnosis of DR. First, blood vessels were enhanced and segmented using an improved enhancement function based on analyzing the eigenvalues of the Hessian matrix. Next, with blood vessels excluded, microaneurysm candidate regions were obtained using shape characteristics and connected-components analysis. After the image was segmented into patches, the features of each microaneurysm candidate patch were extracted, and each candidate patch was classified as microaneurysm or non-microaneurysm. The main contributions of our study are (1) making use of directional local contrast in microaneurysm detection for the first time, which does make sense for better microaneurysm classification, and (2) applying three different machine learning techniques for classification and comparing their performance for microaneurysm detection. The proposed algorithm was trained and tested on the e-ophtha MA database, and further tested on another independent DIARETDB1 database. Results of microaneurysm detection on the two databases were evaluated at the lesion level and compared with existing algorithms. Results: The proposed method achieved better performance compared with existing algorithms on accuracy and computation time. On the e-ophtha MA and DIARETDB1 databases, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve was 0.87 and 0.86, respectively. The free-response ROC (FROC) score on the two databases was 0.374 and 0.210, respectively. The computation time per image with resolution of 2544×1969, 1400×960 and 1500×1152 is 29 s, 3 s and 2.6 s, respectively. Conclusions: The proposed method, using machine learning based on directional local contrast of image patches, can effectively detect microaneurysms in color fundus images and provide an effective scientific basis for early clinical DR diagnosis.
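The Hessian-eigenvalue vessel enhancement in the Methods can be approximated with an off-the-shelf vesselness filter; the sketch below uses scikit-image's Frangi filter as a stand-in for the paper's improved enhancement function, with illustrative scales and threshold.

```python
import numpy as np
from skimage.filters import frangi

# Hessian-eigenvalue-based vessel enhancement on the green channel (vessels appear as
# dark ridges there), followed by a simple threshold to get a vessel mask that can be
# excluded before microaneurysm candidate extraction.
rgb = np.random.rand(512, 512, 3)                          # stand-in for a fundus image
green = rgb[..., 1]
vesselness = frangi(green, sigmas=range(1, 6), black_ridges=True)
vessel_mask = vesselness > 1e-5                            # illustrative threshold
```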
Affiliation(s)
- Shengchun Long: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Jiali Chen: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Ante Hu: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Haipeng Liu: Research Center of Intelligent Healthcare, Faculty of Health and Life Science, Coventry University, Coventry, CV1 5RW, UK
- Zhiqing Chen: Eye Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Dingchang Zheng: Research Center of Intelligent Healthcare, Faculty of Health and Life Science, Coventry University, Coventry, CV1 5RW, UK
18. Automated Microaneurysms Detection and Classification using Multilevel Thresholding and Multilayer Perceptron. J Med Biol Eng 2020. [DOI: 10.1007/s40846-020-00509-8]
19. Munuera-Gifre E, Saez M, Juvinyà-Canals D, Rodríguez-Poncelas A, Barrot-de-la-Puente J, Franch-Nadal J, Romero-Aroca P, Barceló MA, Coll-de-Tuero G. Analysis of the location of retinal lesions in central retinographies of patients with Type 2 diabetes. Acta Ophthalmol 2020; 98:e13-e21. [PMID: 31469507] [DOI: 10.1111/aos.14223]
Abstract
PURPOSE To describe the distribution of Type 2 DM retinal lesions and determine whether it is symmetrical between the two eyes, is random or follows a certain pattern. METHODS Cross-sectional study of Type 2 DM patients who had been referred for an outpatient ophthalmology visit for diabetic retinopathy screening in primary health care. Retinal photographic images were taken using central projection non-mydriatic retinography. The lesions under study were microaneurysms/haemorrhages, and hard and soft exudates. The lesions were placed numerically along the x- and y-axes obtained, with the fovea as the origin. RESULTS Among the 94 patients included in the study, 4770 lesions were identified. The retinal lesions were not distributed randomly, but rather followed a determined pattern. The left eye exhibited more microaneurysms/haemorrhages and hard exudates, with a greater density in the central retina, than was found in the right eye. Furthermore, more cells containing lesions were found in the upper temporal quadrants (especially in the left eye) and tended to be more central in the left eye than in the right, while the hard exudates were more central than the microaneurysms/haemorrhages. CONCLUSION The distribution of DR lesions is neither homogeneous nor random but rather follows a determined pattern for both microaneurysms/haemorrhages and hard exudates. This distribution means that the areas of the retina most vulnerable to metabolic alteration can be identified. The results may be useful for automated DR detection algorithms and for determining the underlying vascular and non-vascular physiopathological mechanisms that can explain these differences.
Affiliation(s)
- Marc Saez: METHARISC Group, USR Girona, IdIAP Gol i Gorina, Girona, Spain; Research Group on Statistics, Econometrics and Health (GRECS), University of Girona, Girona, Spain; CIBER of Epidemiology and Public Health (CIBERESP), Madrid, Spain
- Pere Romero-Aroca: Ophthalmology Service, University Hospital Sant Joan, Institut d'Investigació Sanitària Pere Virgili (IISPV), University Rovira i Virgili, Reus, Spain
- Maria Antonia Barceló: METHARISC Group, USR Girona, IdIAP Gol i Gorina, Girona, Spain; Research Group on Statistics, Econometrics and Health (GRECS), University of Girona, Girona, Spain; CIBER of Epidemiology and Public Health (CIBERESP), Madrid, Spain
- Gabriel Coll-de-Tuero: METHARISC Group, USR Girona, IdIAP Gol i Gorina, Girona, Spain; CIBER of Epidemiology and Public Health (CIBERESP), Madrid, Spain; Department of Medical Sciences, University of Girona, Girona, Spain
20. Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320] [DOI: 10.1016/j.media.2019.101561]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such large-scale screening efforts. The recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state-of-the-art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI - 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization of retinal landmarks and segmentation. The multiple tasks in this challenge allow the generalizability of algorithms to be tested, and this is what makes it different from existing challenges. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered in this challenge. This paper outlines the challenge, its organization, the dataset used, evaluation methods and results of top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
Affiliation(s)
- Prasanna Porwal: Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Samiksha Pachade: Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Manesh Kokare: Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Lihong Liu: Ping An Technology (Shenzhen) Co., Ltd, China
- Xinhui Liu: Ping An Technology (Shenzhen) Co., Ltd, China
- TianBo Wu: Ping An Technology (Shenzhen) Co., Ltd, China
- Jing Xiao: Ping An Technology (Shenzhen) Co., Ltd, China
- Yunzhi Wang: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Gopichandh Danala: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Linsheng He: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Yoon Ho Choi: Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Yeong Chan Lee: Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Sang-Hyuk Jung: Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Zhongyu Li: Department of Computer Science, University of North Carolina at Charlotte, USA
- Xiaodan Sui: School of Information Science and Engineering, Shandong Normal University, China
- Junyan Wu: Cleerly Inc., New York, United States
- Ting Zhou: University at Buffalo, New York, United States
- Janos Toth: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Agnes Baran: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Xingzheng Lyu: College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
- Li Cheng: Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electric and Computer Engineering, University of Alberta, Canada
- Qinhao Chu: School of Computing, National University of Singapore, Singapore
- Pengcheng Li: School of Computing, National University of Singapore, Singapore
- Xin Ji: Beijing Shanggong Medical Technology Co., Ltd., China
- Sanyuan Zhang: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yaxin Shen: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ling Dai: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Tânia Melo: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Teresa Araújo: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Balazs Harangi: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Bin Sheng: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ruogu Fang: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
- Andras Hajdu: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Yuanjie Zheng: School of Information Science and Engineering, Shandong Normal University, China
- Ana Maria Mendonça: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, USA
- Aurélio Campilho: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Bin Zheng: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Luca Giancardo: School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Fabrice Mériaudeau: Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France
21. Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749]
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high ratio of glucose in the blood, which causes alterations in the retinal microvasculature. Without preemptive symptoms, DR can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment have the ability to control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems were developed in the past to assist ophthalmologists in observing inter- and intra-variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe all those CAD systems that have been developed by various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are also compared in terms of statistical parameters to quantitatively evaluate them. The comparison results indicate that there is still a need for accurate development of CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
22. Joshi S, Karule PT. Mathematical morphology for microaneurysm detection in fundus images. Eur J Ophthalmol 2019; 30:1135-1142. [DOI: 10.1177/1120672119843021]
Abstract
Aim: Fundus image analysis is the basis for a better understanding of retinal diseases that occur due to diabetes. Detection of early markers such as microaneurysms that appear in fundus images, combined with treatment, proves beneficial in preventing further complications of diabetic retinopathy and the associated risk of sight loss. Methods: The proposed algorithm consists of three modules: (1) image enhancement through morphological processing; (2) the extraction and removal of red structures, such as blood vessels, preceded by the detection and removal of bright artefacts; and (3) the true microaneurysm candidate selection among other structures based on an extracted feature set. Results: The proposed strategy is successfully evaluated on two publicly available databases containing both normal and pathological images. A sensitivity of 89.22%, specificity of 91% and accuracy of 92% were achieved for the detection of microaneurysms on Diaretdb1 database images. The algorithm evaluation for microaneurysm detection gives a sensitivity of 83% and a specificity of 82% for the e-ophtha database. Conclusion: In an automated detection system, the successful detection of the number of microaneurysms correlates with the stage of the retinal disease and its early diagnosis. The results for true microaneurysm detection indicate that it is a useful tool for screening colour fundus images, saving time in counting microaneurysms to follow Diabetic Retinopathy Grading Criteria.
Affiliation(s)
- Shilpa Joshi: Department of Electronics Engineering, YCCE, Nagpur University, Nagpur, India
- PT Karule: Department of Electronics Engineering, YCCE, Nagpur University, Nagpur, India
23. Tasgaonkar M, Khambete M. Red Profile Moments for Hemorrhage Classification in Diabetic Retinal Fundus Images. Pattern Recognit Image Anal 2019. [DOI: 10.1134/s1054661819020093]
24. Kou C, Li W, Liang W, Yu Z, Hao J. Microaneurysms segmentation with a U-Net based on recurrent residual convolutional neural network. J Med Imaging (Bellingham) 2019; 6:025008. [PMID: 31259200] [PMCID: PMC6582229] [DOI: 10.1117/1.jmi.6.2.025008]
Abstract
Microaneurysms (MAs) play an important role in the diagnosis of clinical diabetic retinopathy at the early stage. Manual annotation of MAs by experts is laborious, so it is essential to develop automatic segmentation methods. Automatic MA segmentation remains a challenging task, mainly due to the low local contrast of the image and the small size of MAs. A deep learning-based method called U-Net has become one of the most popular methods for medical image segmentation tasks. We propose an architecture for U-Net, named deep recurrent U-Net (DRU-Net), obtained by combining the deep residual model and recurrent convolutional operations into U-Net. In the MA segmentation task, DRU-Net can accumulate effective features much better than the typical U-Net. The proposed method is evaluated on two publicly available datasets: E-Ophtha and IDRiD. Our results show that the proposed DRU-Net achieves the best performance, with a 0.9999 accuracy value and a 0.9943 area under curve (AUC) value on the E-Ophtha dataset. On the IDRiD dataset, it achieved a 0.987 AUC value (to our knowledge, this is the first result of segmenting MAs on this dataset). Compared with other methods, such as U-Net, FCNN, and ResU-Net, our architecture (DRU-Net) achieves state-of-the-art performance.
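A minimal sketch of a recurrent residual convolutional block of the kind DRU-Net builds on is given below; the channel count, recurrence depth, and exact wiring are assumptions, not the paper's specification.

```python
import torch
from torch import nn

class RecurrentConv(nn.Module):
    """Apply the same 3x3 convolution t times, each time re-adding the block input,
    which is one common way to build a recurrent convolutional unit."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)     # recurrence: feed back the accumulated response
        return out

class RRBlock(nn.Module):
    """Recurrent residual block: two recurrent conv units plus an identity skip."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.body = nn.Sequential(RecurrentConv(channels, t), RecurrentConv(channels, t))

    def forward(self, x):
        return x + self.body(x)

x = torch.randn(1, 32, 64, 64)
print(RRBlock(32)(x).shape)              # torch.Size([1, 32, 64, 64])
```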
Affiliation(s)
- Caixia Kou: Beijing University of Posts and Telecommunications, Haidian District, Beijing, China
- Wei Li: Beijing University of Posts and Telecommunications, Haidian District, Beijing, China
- Wei Liang: Beijing University of Posts and Telecommunications, Haidian District, Beijing, China
- Zekuan Yu: Peking University, Haidian District, Beijing, China
- Jianchen Hao: Peking University First Hospital, Xicheng District, Beijing, China
25.
Abstract
Automated medical image analysis is an emerging field of research that identifies disease with the help of imaging technology. Diabetic retinopathy (DR) is a retinal disease that is diagnosed in diabetic patients. Deep neural networks (DNNs) are widely used to classify diabetic retinopathy from fundus images collected from suspected persons. The proposed DR classification system achieves a symmetrically optimized solution through the combination of a Gaussian mixture model (GMM), visual geometry group network (VGGNet), singular value decomposition (SVD) and principal component analysis (PCA), and softmax, for region segmentation, high-dimensional feature extraction, feature selection and fundus image classification, respectively. The experiments were performed using a standard KAGGLE dataset containing 35,126 images. The proposed VGG-19 DNN based DR model outperformed AlexNet and the scale-invariant feature transform (SIFT) in terms of classification accuracy and computational time. Utilization of PCA and SVD feature selection with fully connected (FC) layers demonstrated classification accuracies of 92.21%, 98.34%, 97.96%, and 98.13% for FC7-PCA, FC7-SVD, FC8-PCA, and FC8-SVD, respectively.
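The PCA-plus-softmax tail of the pipeline described here can be sketched with scikit-learn as below; the feature dimension, number of components, and class count are illustrative assumptions, and multinomial logistic regression stands in for the softmax classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Reduce high-dimensional deep features (e.g., from a VGG fully connected layer)
# with PCA, then classify with a softmax (multinomial logistic regression) model.
deep_features = np.random.randn(500, 4096)        # dummy FC-layer activations
labels = np.random.randint(0, 5, 500)             # 5 DR severity classes

reduced = PCA(n_components=128).fit_transform(deep_features)
clf = LogisticRegression(max_iter=1000).fit(reduced, labels)
print(clf.predict(reduced[:5]))
```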
26. Diabetic retinopathy techniques in retinal images: A review. Artif Intell Med 2018; 97:168-188. [PMID: 30448367] [DOI: 10.1016/j.artmed.2018.10.009]
Abstract
Diabetic retinopathy is a leading cause of vision loss. Medical experts recognize several clinical, geometrical and haemodynamic features of diabetic retinopathy. These features include the blood vessel area, exudates, microaneurysms, hemorrhages and neovascularization, among others. In Computer Aided Diagnosis (CAD) systems, these features are detected in fundus images using computer vision techniques. In this paper, we review the methods of low-, middle- and high-level vision for automatic detection and classification of diabetic retinopathy. We give a detailed review of 79 algorithms for detecting different features of diabetic retinopathy published during the last eight years.
27. Biyani R, Patre B. Algorithms for red lesion detection in Diabetic Retinopathy: A review. Biomed Pharmacother 2018; 107:681-688. [DOI: 10.1016/j.biopha.2018.07.175]
28. Elloumi Y, Akil M, Kehtarnavaz N. A mobile computer aided system for optic nerve head detection. Comput Methods Programs Biomed 2018; 162:139-148. [PMID: 29903480] [DOI: 10.1016/j.cmpb.2018.05.004]
Abstract
BACKGROUND AND OBJECTIVE The detection of the optic nerve head (ONH) in retinal fundus images plays a key role in identifying Diabetic Retinopathy (DR) as well as other abnormal conditions in eye examinations. This paper presents a method and its associated software towards the development of an Android smartphone app based on a previously developed ONH detection algorithm. The development of this app and the use of the d-Eye lens, which can be snapped onto a smartphone, provide a mobile and cost-effective computer-aided diagnosis (CAD) system in ophthalmology. In particular, this CAD system would allow eye examinations to be conducted in remote locations with limited access to clinical facilities. METHODS A pre-processing step is first carried out to enable ONH detection on the smartphone platform. Then, the optimization steps taken to run the algorithm in a computationally and memory efficient manner on the smartphone platform are discussed. RESULTS The smartphone code of the ONH detection algorithm was applied to the STARE and DRIVE databases, resulting in about 96% and 100% detection rates, respectively, with average execution times of about 2 s and 1.3 s. In addition, two other databases captured by the d-Eye and iExaminer snap-on lenses for smartphones were considered, resulting in about 93% and 91% detection rates, respectively, with average execution times of about 2.7 s and 2.2 s.
Affiliation(s)
- Yaroub Elloumi: Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France; Medical Technology and Image Processing Laboratory, Faculty of Medicine, University of Monastir, Tunisia
- Mohamed Akil: Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France
- Nasser Kehtarnavaz: Department of Electrical and Computer Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
29. Chudzik P, Majumdar S, Calivá F, Al-Diri B, Hunter A. Microaneurysm detection using fully convolutional neural networks. Comput Methods Programs Biomed 2018; 158:185-192. [PMID: 29544784] [DOI: 10.1016/j.cmpb.2018.02.016]
Abstract
BACKGROUND AND OBJECTIVES Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. METHODS A novel patch-based fully convolutional neural network with batch normalization layers and a Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper that shows how to successfully transfer knowledge between datasets in the microaneurysm detection domain. RESULTS The proposed method was evaluated using three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods using the FROC metric. The proposed algorithm accomplished the highest sensitivities for low false positive rates, which is particularly important for screening purposes. CONCLUSIONS Performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications.
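The Dice loss named here is standard and can be written as below for binary segmentation; the sigmoid-on-logits formulation and smoothing epsilon are common conventions assumed for the sketch.

```python
import torch
from torch import nn

class DiceLoss(nn.Module):
    """Soft Dice loss for binary segmentation: 1 - 2|P∩G| / (|P| + |G|).
    Works directly on probabilities, which suits tiny foreground objects like MAs."""
    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits, targets):
        probs = torch.sigmoid(logits).flatten(1)
        targets = targets.flatten(1)
        inter = (probs * targets).sum(dim=1)
        denom = probs.sum(dim=1) + targets.sum(dim=1)
        return (1.0 - (2.0 * inter + self.eps) / (denom + self.eps)).mean()

# Example: a batch of 4 predicted patch masks vs. ground truth.
logits = torch.randn(4, 1, 64, 64, requires_grad=True)
masks = (torch.rand(4, 1, 64, 64) > 0.95).float()   # sparse foreground, like MA pixels
loss = DiceLoss()(logits, masks)
loss.backward()
```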
Affiliation(s)
- Piotr Chudzik: School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
- Somshubra Majumdar: Department of Computer Science, University of Illinois, Chicago, IL 60607, USA
- Francesco Calivá: School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
- Bashir Al-Diri: School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
- Andrew Hunter: School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
30. Lesion Detection and Grading of Diabetic Retinopathy via Two-Stages Deep Convolutional Neural Networks. In: Medical Image Computing and Computer Assisted Intervention (MICCAI 2017), 2017. [DOI: 10.1007/978-3-319-66179-7_61]