1
Dubey S, Dixit M. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review. Multimedia Tools and Applications 2022;82:14471-14525. [PMID: 36185322] [PMCID: PMC9510498] [DOI: 10.1007/s11042-022-13841-9]
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body cannot use its insulin properly. Diabetic retinopathy (DR) is one of its most prevalent complications: left unaddressed, it can affect any diabetic patient and become very serious, raising the risk of blindness. It is a chronic systemic condition that affects up to 80% of patients who have had diabetes for more than ten years. Many researchers believe that with early enough diagnosis, diabetic individuals can be rescued from the condition in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels of the retina, and this vessel damage is usually visible in fundus images. Therefore, this study reviews both traditional and deep learning-based approaches for the classification and detection of diabetic retinopathy and describes the advantages of each approach over the others. Along with the approaches, the datasets and evaluation metrics useful for DR detection and classification are also discussed. The main aim of this study is to make researchers aware of the challenges that arise when detecting diabetic retinopathy with computer vision and deep learning techniques. This review therefore sums up the major aspects of DR detection: lesion identification, classification and segmentation, security attacks on deep learning models, and the proper categorization of datasets and evaluation metrics. Because deep learning models are computationally expensive and prone to security attacks, future work should aim to develop refined, reliable, and robust models that overcome these issues commonly found when designing deep learning models.
Affiliation(s)
- Shradha Dubey
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P. India
- Manish Dixit
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P. India
2
AI-Based Automatic Detection and Classification of Diabetic Retinopathy Using U-Net and Deep Learning. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071427]
Abstract
Artificial intelligence is widely applied to automate diabetic retinopathy (DR) diagnosis. Diabetes-related retinal vascular disease is one of the world's most common causes of blindness and vision impairment. Automated DR detection systems would therefore greatly benefit early screening and treatment and help prevent the vision loss DR causes. Researchers have proposed several systems to detect abnormalities in retinal images in the past few years; however, automatic DR detection has traditionally relied on hand-crafted features extracted from the retinal images, with a classifier producing the final decision. Deep neural networks (DNNs) have made great strides in the past few years in overcoming this limitation. We propose a two-stage approach for automated DR classification in this research. Because of the low fraction of positive instances in the asymmetric optic disc (OD) and blood vessel (BV) detection tasks, preprocessing and data augmentation techniques are used to enhance image quality and quantity. The first stage uses two independent U-Net models for OD and BV segmentation. In the second stage, after OD and BV extraction with Inception-V3 based on transfer learning, a symmetric hybrid CNN-SVD model extracts and selects the most discriminant features and detects DR by recognizing retinal biomarkers such as microaneurysms (MA), hemorrhages (HM), and exudates (EX). On EyePACS-1, Messidor-2, and DIARETDB0, the proposed methodology demonstrated state-of-the-art performance, with average accuracies of 97.92%, 94.59%, and 93.52%, respectively. Extensive testing and comparison with baseline approaches indicate the efficacy of the proposed methodology.
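The abstract does not spell out how the CNN-SVD step selects features. As a rough illustration only, a minimal numpy sketch of SVD-based feature compression (the function name, dimensions, and centering step are assumptions, not the paper's implementation) might look like:

```python
import numpy as np

def svd_reduce(features: np.ndarray, k: int) -> np.ndarray:
    """Project a feature matrix onto its top-k right singular vectors.

    features: (n_samples, n_features) array of CNN features.
    Returns an (n_samples, k) array of compressed features.
    """
    # Center the features so the singular directions capture variance.
    centered = features - features.mean(axis=0)
    # Economy-size SVD: centered = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    # Keep the k directions with the largest singular values.
    return centered @ Vt[:k].T

# Illustrative example: 100 samples of 64-dimensional features -> 8 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
Z = svd_reduce(X, 8)
```

The columns of the compressed matrix are ordered by singular value, so the first component carries the most energy; a classifier would then be trained on `Z` rather than the raw features.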
3
Red-lesion extraction in retinal fundus images by directional intensity changes' analysis. Sci Rep 2021;11:18223. [PMID: 34521886] [PMCID: PMC8440775] [DOI: 10.1038/s41598-021-97649-x]
Abstract
Diabetic retinopathy (DR) is an important retinal disease threatening people with a long history of diabetes. Blood leaking into the retina forms red lesions whose analysis helps determine the severity of the disease. In this paper, a novel red-lesion extraction method is proposed. The method first determines the boundary pixels of blood vessels and red lesions, then identifies the distinguishing features of red-lesion boundary pixels to discriminate them from other boundary pixels. The key observation is that a red lesion appears as a significant intensity change in almost all directions in the fundus image, which can be measured by considering special neighborhood windows around the extracted boundary pixels. The performance of the proposed method was evaluated on three datasets: Diaretdb0, Diaretdb1, and Kaggle. The method achieves sensitivity and specificity of 0.87 and 0.88 on Diaretdb1, 0.89 and 0.90 on Diaretdb0, and 0.82 and 0.90 on Kaggle. The proposed method is also time-efficient in the red-lesion extraction process.
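A minimal sketch of the core idea, checking that a pixel is darker than its surroundings in almost all compass directions; the window radius, direction count, and intensity margin below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Eight compass directions around a candidate pixel.
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def is_lesion_candidate(img, r, c, radius=3, min_directions=7, margin=5.0):
    """Return True if img[r, c] is darker than its surroundings
    (by at least `margin` gray levels) in >= min_directions directions."""
    center = float(img[r, c])
    count = 0
    for dr, dc in DIRECTIONS:
        rr, cc = r + dr * radius, c + dc * radius
        if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
            if float(img[rr, cc]) - center >= margin:
                count += 1
    return count >= min_directions

# Synthetic test image: bright background with one dark blob at (10, 10),
# mimicking a red lesion in the green channel of a fundus image.
img = np.full((21, 21), 200.0)
yy, xx = np.mgrid[0:21, 0:21]
img -= 150.0 * np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 4.0)
```

On this synthetic patch, the blob center is flagged as a candidate while flat background pixels are not; a real pipeline would apply this test only to the boundary pixels extracted in the first step.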
4
Yang L, Yan S, Xie Y. Detection of microaneurysms and hemorrhages based on improved Hessian matrix. Int J Comput Assist Radiol Surg 2021;16:883-894. [PMID: 33978894] [DOI: 10.1007/s11548-021-02358-5]
Abstract
PURPOSE: Early lesion detection in fundus images is very important for preventing blindness, and accurate lesion segmentation can provide doctors with diagnostic evidence. This study proposes a method based on improved Hessian matrix eigenvalue analysis to detect microaneurysms and hemorrhages in the fundus images of diabetic patients. METHODS: A two-step method is adopted: identification of lesion candidate regions followed by classification of those candidates. In the first step, eigenvalue analysis based on the improved Hessian matrix enhances the preprocessed image, a dual-threshold method segments it, and blood vessels are gradually removed to obtain the lesion candidate regions. In the second step, all candidates are classified into three categories: microaneurysms, hemorrhages, and others. RESULTS: The proposed method achieved better accuracy than existing algorithms. The classification accuracies for microaneurysms and hemorrhages were 94.4% and 94.0%, respectively, compared with 90.9% and 92.1% when Frangi's Hessian-based filter was used to enhance the image. CONCLUSION: This study demonstrates a methodology that enhances images by eigenvalue analysis based on the improved Hessian matrix and segments them with double thresholds. The proposed method is beneficial to improving the detection accuracy of microaneurysms and hemorrhages in fundus images.
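The eigenvalues of the 2x2 Hessian can be computed per pixel in closed form from finite-difference second derivatives. The sketch below shows the standard Hessian eigenvalue analysis (not the paper's "improved" variant, whose details are not in the abstract) on a synthetic dark blob, where both eigenvalues are large and positive at the blob center:

```python
import numpy as np

def hessian_eigenvalues(img):
    """Closed-form eigenvalues of the 2x2 Hessian at every pixel.

    Returns (lam1, lam2) with lam1 >= lam2 elementwise. For a dark blob
    on a bright background, both eigenvalues are positive at the center.
    """
    # Second derivatives by repeated central differences.
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Symmetrize the mixed derivative against numerical asymmetry.
    sym = 0.5 * (gxy + gyx)
    trace_half = (gxx + gyy) / 2.0
    disc = np.sqrt(((gxx - gyy) / 2.0) ** 2 + sym ** 2)
    return trace_half + disc, trace_half - disc

# Synthetic fundus-like patch: bright background, dark Gaussian "lesion".
yy, xx = np.mgrid[0:41, 0:41]
patch = 200.0 - 150.0 * np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 18.0)
lam1, lam2 = hessian_eigenvalues(patch)
```

Thresholding a blobness measure built from these eigenvalues (for example, requiring both to be positive and large) would yield candidate regions for microaneurysms, to be refined in the classification step.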
Affiliation(s)
- Linying Yang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Shiju Yan
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
- Yuanzhi Xie
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
5
Zhang L, Feng S, Duan G, Li Y, Liu G. Detection of Microaneurysms in Fundus Images Based on an Attention Mechanism. Genes (Basel) 2019;10:817. [PMID: 31627420] [PMCID: PMC6827155] [DOI: 10.3390/genes10100817]
Abstract
Microaneurysms (MAs) are the earliest detectable diabetic retinopathy (DR) lesions. Thus, the ability to automatically detect MAs is critical for the early diagnosis of DR. However, achieving the accurate and reliable detection of MAs remains a significant challenge due to the size and complexity of retinal fundus images. Therefore, this paper presents a novel MA detection method based on a deep neural network with a multilayer attention mechanism for retinal fundus images. First, a series of equalization operations are performed to improve the quality of the fundus images. Then, based on the attention mechanism, multiple feature layers with obvious target features are fused to achieve preliminary MA detection. Finally, the spatial relationships between MAs and blood vessels are utilized to perform a secondary screening of the preliminary test results to obtain the final MA detection results. We evaluated the method on the IDRiD_VOC dataset, which was collected from the open IDRiD dataset. The results show that our method effectively improves the average accuracy and sensitivity of MA detection.
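The abstract leaves the attention details unspecified; as a rough illustration only, a softmax spatial attention that fuses equal-sized feature layers (function names and shapes are assumptions, not the paper's architecture) could be sketched as:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a flattened array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(layers):
    """Fuse feature maps of equal shape with spatial attention weights.

    Each layer gets a per-pixel weight from a softmax over its own
    activations; the fused map is the weighted sum over all layers,
    emphasizing locations with strong responses.
    """
    fused = np.zeros_like(layers[0], dtype=float)
    for layer in layers:
        weights = softmax(layer.ravel()).reshape(layer.shape)
        fused += weights * layer
    return fused

# Hypothetical shallow and deep feature layers from a detection network.
rng = np.random.default_rng(1)
shallow = rng.random((8, 8))
deep = rng.random((8, 8))
fused = attention_fuse([shallow, deep])
```

In the paper's pipeline, a fused map like this would feed the preliminary MA detector, whose outputs are then screened against the blood-vessel map.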
Affiliation(s)
- Lizong Zhang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
- Shuxin Feng
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
- Guiduo Duan
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
- Ying Li
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
- Guisong Liu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
- School of Computer Science, Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan 528400, China.
6
Sadda P, Imamoglu M, Dombrowski M, Papademetris X, Bahtiyar MO, Onofrey J. Deep-learned placental vessel segmentation for intraoperative video enhancement in fetoscopic surgery. Int J Comput Assist Radiol Surg 2018;14:227-235. [PMID: 30484115] [DOI: 10.1007/s11548-018-1886-4]
Abstract
INTRODUCTION: Twin-to-twin transfusion syndrome (TTTS) is a potentially lethal condition that affects pregnancies in which twins share a single placenta. The definitive treatment for TTTS is fetoscopic laser photocoagulation, a procedure in which placental blood vessels are selectively cauterized. Challenges in this procedure include difficulty in quickly identifying placental blood vessels due to the many artifacts in the endoscopic video that the surgeon uses for navigation. We propose using deep-learned segmentations of blood vessels to create masks that can be recombined with the original fetoscopic video frame so that the location of placental blood vessels is discernible at a glance. METHODS: In a process approved by an institutional review board, intraoperative videos were acquired from ten fetoscopic laser photocoagulation surgeries performed at Yale New Haven Hospital. A total of 345 video frames were selected from these videos at regularly spaced time intervals. The video frames were segmented once by an expert human rater (a clinician) and once by a novice but trained human rater (an undergraduate student). The segmentations were used to train a fully convolutional neural network of 25 layers. RESULTS: The neural network produced segmentations with high similarity to the ground-truth segmentations of the expert human rater (sensitivity = 92.15% ± 10.69%) and significantly more accurate than those of the novice human rater (sensitivity = 56.87% ± 21.64%; p < 0.01). CONCLUSION: A convolutional neural network can be trained to segment placental blood vessels with near-human accuracy and can exceed the accuracy of novice human raters. Recombining these segmentations with the original fetoscopic video frames can produce enhanced frames in which blood vessels are easily detectable. This has significant implications for aiding fetoscopic surgeons, especially trainees who are not yet at an expert level.
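The mask-recombination step described in the conclusion amounts to blending the vessel mask onto each frame. A minimal sketch, with the highlight color and blend factor chosen arbitrarily (the paper's exact enhancement scheme is not given in the abstract):

```python
import numpy as np

def overlay_mask(frame, mask, color=(0, 255, 0), alpha=0.4):
    """Blend a binary segmentation mask onto an RGB frame.

    frame: (H, W, 3) uint8 image; mask: (H, W) boolean array.
    Pixels where mask is True are pushed toward `color` by `alpha`.
    """
    out = frame.astype(float).copy()
    tint = np.array(color, dtype=float)
    out[mask] = (1.0 - alpha) * out[mask] + alpha * tint
    return out.astype(np.uint8)

# Tiny synthetic example: gray frame with "vessels" marked along one row.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[2, :] = True
enhanced = overlay_mask(frame, mask)
```

Applied per frame in real time, this kind of overlay makes the segmented vessels stand out while leaving the rest of the surgical scene unchanged.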
Affiliation(s)
- Metehan Imamoglu
- Yale University School of Medicine, New Haven, USA; Department of Obstetrics and Gynecology, Yale University School of Medicine, New Haven, USA; Yale Fetal Care Center, New Haven, USA
- Michael Dombrowski
- Yale University School of Medicine, New Haven, USA; Department of Obstetrics and Gynecology, Yale University School of Medicine, New Haven, USA; Yale Fetal Care Center, New Haven, USA
- Xenophon Papademetris
- Yale University School of Medicine, New Haven, USA; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, USA; Department of Biomedical Engineering, Yale University School of Medicine, New Haven, USA
- Mert O Bahtiyar
- Yale University School of Medicine, New Haven, USA; Department of Obstetrics and Gynecology, Yale University School of Medicine, New Haven, USA; Yale Fetal Care Center, New Haven, USA
- John Onofrey
- Yale University School of Medicine, New Haven, USA; Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, USA
7
Jackson RC, Yuan R, Chow DL, Newman W, Çavuşoğlu MC. Real-Time Visual Tracking of Dynamic Surgical Suture Threads. IEEE Transactions on Automation Science and Engineering 2018;15:1078-1090. [PMID: 29988978] [PMCID: PMC6034738] [DOI: 10.1109/tase.2017.2726689]
Abstract
In order to realize many of the potential benefits associated with robotically assisted minimally invasive surgery, the robot must be more than a remote-controlled device. Currently, using a surgical robot can be challenging, fatiguing, and time consuming. Teaching the robot to actively assist surgical tasks, such as suturing, has the potential to vastly improve both patient outlook and the surgeon's efficiency. One obstacle to completing surgical sutures autonomously is the difficulty of tracking surgical suture threads. This paper presents novel stereo image processing algorithms for the detection, initialization, and tracking of a surgical suture thread. A Non-Uniform Rational B-Spline (NURBS) curve is used to model a thin, deformable thread of dynamic length. The NURBS model is initialized and grown from a single selected point located on the thread. The NURBS curve is optimized by minimizing the image matching energy between the projected stereo NURBS image and the segmented thread image. The algorithms are evaluated using suture threads, a calibrated test pattern, and a simulated thread image. Additionally, the accuracy of the algorithms is validated as they track a suture thread undergoing translation, deformation, and apparent length changes. All of the tracking is in real time. Note to Practitioners: The problem of tracking a surgical suture thread was addressed in this work. Since the suture thread is highly deformable, any tracking algorithm must be robust to intersections, occlusions, knot tying, and length changes. The detection algorithm introduced in this paper is capable of distinguishing different threads when they intersect. The tracking algorithm presented here demonstrates that it is possible, using polynomial curves, to track a suture thread in real time as it deforms, becomes occluded, changes length, and even ties a knot. The detection algorithm can enhance directional thin features, while the polynomial curve modeling can track any string-like structure. Further integration of the polynomial curve with a feed-forward thread model could improve the stability and robustness of the thread tracking.
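A full NURBS optimizer is beyond an abstract-level sketch, but the underlying idea of fitting a polynomial curve to points sampled along a thread can be shown with a least-squares quadratic Bezier fit (uniform parameterization is assumed here for simplicity; the paper's energy minimization over stereo images is more involved):

```python
import numpy as np

def fit_quadratic_bezier(points):
    """Least-squares fit of a quadratic Bezier curve to ordered 2-D points.

    points: (n, 2) array sampled along the thread, assumed roughly
    uniformly spaced in the curve parameter.
    Returns the (3, 2) control-point array P minimizing ||B @ P - points||.
    """
    n = len(points)
    t = np.linspace(0.0, 1.0, n)
    # Bernstein basis for a quadratic curve, one row per sample.
    B = np.stack([(1 - t) ** 2, 2 * t * (1 - t), t ** 2], axis=1)
    # Solve the linear least-squares problem for the control points.
    P, *_ = np.linalg.lstsq(B, points, rcond=None)
    return P

def eval_bezier(P, t):
    """Evaluate a quadratic Bezier with control points P at parameters t."""
    t = np.asarray(t)[:, None]
    return (1 - t) ** 2 * P[0] + 2 * t * (1 - t) * P[1] + t ** 2 * P[2]

# Sample a known curve and recover its control points from the samples.
true_P = np.array([[0.0, 0.0], [5.0, 10.0], [10.0, 0.0]])
t = np.linspace(0.0, 1.0, 50)
pts = eval_bezier(true_P, t)
fit_P = fit_quadratic_bezier(pts)
```

In a tracking loop, the previous frame's control points would seed the fit for the next frame, which is what makes curve-based thread tracking fast enough for real time.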
Affiliation(s)
- Russell C Jackson
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
- Rick Yuan
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
- Der-Lin Chow
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
- Wyatt Newman
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
- M Cenk Çavuşoğlu
- Department of Electrical Engineering and Computer Science (EECS) at Case Western Reserve University in Cleveland, OH, USA
8
Roychowdhury S, Koozekanani DD, Parhi KK. Automated detection of neovascularization for proliferative diabetic retinopathy screening. Annu Int Conf IEEE Eng Med Biol Soc 2016;2016:1300-1303. [PMID: 28268564] [DOI: 10.1109/embc.2016.7590945]
Abstract
Neovascularization is the primary manifestation of proliferative diabetic retinopathy (PDR) that can lead to acquired blindness. This paper presents a novel method that classifies neovascularizations in the 1-optic disc (OD) diameter region (NVD) and elsewhere (NVE) separately to achieve low false positive rates of neovascularization classification. First, the OD region and blood vessels are extracted. Next, the major blood vessel segments in the 1-OD diameter region are classified for NVD, and minor blood vessel segments elsewhere are classified for NVE. For NVD and NVE classifications, optimal region-based feature sets of 10 and 6 features, respectively, are used. The proposed method achieves classification sensitivity, specificity and accuracy for NVD and NVE of 74%, 98.2%, 87.6%, and 61%, 97.5%, 92.1%, respectively. Also, the proposed method achieves 86.4% sensitivity and 76% specificity for screening images with PDR from public and local data sets. Thus, the proposed NVD and NVE detection methods can play a key role in automated screening and prioritization of patients with diabetic retinopathy.
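The NVD/NVE split described above hinges on whether a vessel segment lies within one optic-disc diameter of the OD center. A simplified sketch of that region test (centroid-based, which is an assumption; the paper classifies major and minor vessel segments with region features):

```python
import numpy as np

def split_segments(segment_centroids, od_center, od_diameter):
    """Split vessel-segment centroids into NVD and NVE candidate groups.

    Segments within one OD diameter of the OD center are NVD candidates;
    all others are NVE candidates.
    """
    centroids = np.asarray(segment_centroids, dtype=float)
    dists = np.linalg.norm(centroids - np.asarray(od_center, dtype=float),
                           axis=1)
    inside = dists <= od_diameter
    return centroids[inside], centroids[~inside]

# Hypothetical example: OD centered at (100, 100), diameter 40 pixels.
segs = [(110, 105), (100, 135), (300, 50), (90, 95)]
nvd_candidates, nve_candidates = split_segments(segs, (100, 100), 40.0)
```

Each group would then be scored with its own feature set (10 region features for NVD, 6 for NVE in the paper), which is what keeps the false positive rate of each classifier low.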