1. De la Torre K, Min S, Lee H, Kang D. The Application of Preventive Medicine in the Future Digital Health Era. J Med Internet Res 2025;27:e59165. PMID: 40053712; PMCID: PMC11907169; DOI: 10.2196/59165.
Abstract
A number of seismic shifts are expected to reshape the future of medicine. The global population is rapidly aging, significantly impacting the global disease burden. Medicine is undergoing a paradigm shift, defining and diagnosing diseases at earlier stages and shifting the health care focus from treating diseases to preventing them. The application and purview of digital medicine are expected to broaden significantly. Furthermore, the COVID-19 pandemic has accelerated the shift toward predictive, preventive, personalized, and participatory (P4) medicine and has identified health care accessibility, affordability, and patient empowerment as core values in the future digital health era. This "left shift" toward preventive care is anticipated to redefine health care, emphasizing health promotion over disease treatment. In the future, the traditional triad of preventive medicine (primary, secondary, and tertiary prevention) will be realized with technologies such as genomics, artificial intelligence, bioengineering and wearable devices, and telemedicine. Breast cancer and diabetes serve as case studies to demonstrate how technologies such as personalized risk assessment and artificial intelligence-assisted, app-based tools have been developed and commercialized to provide personalized preventive care, identifying those at higher risk and providing instructions and interventions for healthier lifestyles and improved quality of life. Overall, preventive medicine and the use of advanced technology hold great potential for improving future health care outcomes.
Affiliation(s)
- Katherine De la Torre: Department of Biomedical Sciences, Seoul National University Graduate School, Seoul, Republic of Korea; Department of Preventive Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sukhong Min: Department of Preventive Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Hyobin Lee: Department of Preventive Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea; Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Republic of Korea
- Daehee Kang: Department of Preventive Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea; Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Republic of Korea
2. Bressler I, Aviv R, Margalit D, Rom Y, Ianchulev T, Dvey-Aharon Z. Autonomous screening for laser photocoagulation in fundus images using deep learning. Br J Ophthalmol 2024;108:742-746. PMID: 37217293; PMCID: PMC11137462; DOI: 10.1136/bjo-2023-323376.
Abstract
BACKGROUND Diabetic retinopathy (DR) is a leading cause of blindness in adults worldwide. Artificial intelligence (AI) with autonomous deep learning algorithms has been increasingly used in retinal image analysis, particularly for the screening of referable DR. An established treatment for proliferative DR is panretinal or focal laser photocoagulation. Training autonomous models to discern laser patterns can be important in disease management and follow-up. METHODS A deep learning model was trained for laser treatment detection using the EyePACs dataset. Data were randomly assigned, by participant, into development (n=18,945) and validation (n=2,105) sets. Analysis was conducted at the single-image, eye, and patient levels. The model was then used to filter input for three independent AI models for retinal indications; changes in model efficacy were measured using the area under the receiver operating characteristic curve (AUC) and mean absolute error (MAE). RESULTS On the task of laser photocoagulation detection, AUCs of 0.981, 0.950, and 0.979 were achieved at the patient, image, and eye levels, respectively. When analysing the independent models, efficacy improved across the board after filtering: diabetic macular oedema detection reached AUC 0.955 on images without laser artefacts vs 0.932 on those with them; participant sex detection reached AUC 0.922 vs 0.872; and participant age estimation reached MAE 3.81 vs 5.33. CONCLUSION The proposed model for laser treatment detection achieved high performance on all analysis metrics and was demonstrated to positively affect the efficacy of different AI models, suggesting that laser detection can generally improve AI-powered applications for fundus images.
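The abstract's image-, eye-, and patient-level AUCs amount to aggregating per-image scores before scoring. A minimal sketch of that idea with a rank-based (Mann-Whitney) AUC; the max-aggregation rule per patient is our assumption, since the paper does not state how it pools images:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked correctly."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    eq = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

def patient_level(scores, labels, patient_ids):
    """Pool image-level scores to one score per patient (max over images);
    a patient is positive if any of their images is positive."""
    out_s, out_l = [], []
    for pid in sorted(set(patient_ids)):
        idx = [i for i, p in enumerate(patient_ids) if p == pid]
        out_s.append(max(scores[i] for i in idx))
        out_l.append(max(labels[i] for i in idx))
    return out_s, out_l
```

Eye-level analysis would pool the same way, keyed on (patient, eye) instead of patient.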
Affiliation(s)
- Yovel Rom: AEYE Health, New York, New York, USA
- Tsontcho Ianchulev: AEYE Health, New York, New York, USA; Ophthalmology, New York Eye and Ear Infirmary of Mount Sinai, New York, New York, USA
3. Bahr T, Vu TA, Tuttle JJ, Iezzi R. Deep Learning and Machine Learning Algorithms for Retinal Image Analysis in Neurodegenerative Disease: Systematic Review of Datasets and Models. Transl Vis Sci Technol 2024;13:16. PMID: 38381447; PMCID: PMC10893898; DOI: 10.1167/tvst.13.2.16.
Abstract
Purpose Retinal images contain rich biomarker information for neurodegenerative disease. Recently, deep learning models have been used for automated neurodegenerative disease diagnosis and risk prediction from retinal images, with good results. Methods In this review, we systematically report studies with datasets of retinal images from patients with neurodegenerative diseases, including Alzheimer's disease, Huntington's disease, Parkinson's disease, amyotrophic lateral sclerosis, and others. We also review and characterize the models in the current literature that have been used for classification, regression, or segmentation problems using retinal images of patients with neurodegenerative diseases. Results Our review found several existing datasets and models with various imaging modalities, primarily in patients with Alzheimer's disease, with most datasets on the order of tens to a few hundred images. We found limited data available for the other neurodegenerative diseases. Although cross-sectional imaging data for Alzheimer's disease are becoming more abundant, datasets with longitudinal imaging of any disease are lacking. Conclusions The use of bilateral and multimodal imaging together with metadata appears to improve model performance; thus, multimodal bilateral image datasets with patient metadata are needed. We identified several deep learning tools that have been useful in this context, including feature extraction algorithms specifically for retinal images, retinal image preprocessing techniques, transfer learning, feature fusion, and attention mapping. Importantly, we also consider the limitations common to these models in real-world clinical applications. Translational Relevance This systematic review evaluates the deep learning models and retinal features relevant to the evaluation of retinal images of patients with neurodegenerative disease.
Affiliation(s)
- Tyler Bahr: Mayo Clinic, Department of Ophthalmology, Rochester, MN, USA
- Truong A. Vu: University of the Incarnate Word, School of Osteopathic Medicine, San Antonio, TX, USA
- Jared J. Tuttle: University of Texas Health Science Center at San Antonio, Joe R. and Teresa Lozano Long School of Medicine, San Antonio, TX, USA
- Raymond Iezzi: Mayo Clinic, Department of Ophthalmology, Rochester, MN, USA
4. Lin CH, Lukas BE, Rajabi-Estarabadi A, May JR, Pang Y, Puyana C, Tsoukas M, Avanaki K. Rapid measurement of epidermal thickness in OCT images of skin. Sci Rep 2024;14:2230. PMID: 38278852; PMCID: PMC10817904; DOI: 10.1038/s41598-023-47051-6.
Abstract
Epidermal thickness (ET) changes are associated with several skin diseases. To measure ET, segmentation of optical coherence tomography (OCT) images is essential; manual segmentation is very time-consuming and requires training and some understanding of how to interpret OCT images. Fast results are important in order to analyze ET over different regions of skin in rapid succession to complete a clinical examination and enable the physician to discuss results with the patient in real time. The well-known CNN-graph search (CNN-GS) methodology delivers highly accurate results, but at a high computational cost. Our objective was to build a computational core, based on CNN-GS, able to accurately segment OCT skin images in real time. We accomplished this by fine-tuning the hyperparameters, testing a range of speed-up algorithms including pruning and quantization, designing a novel pixel-skipping process, and implementing the final product with efficient use of cores and threads on a multicore central processing unit (CPU). We name this product CNN-GS-skin. The method identifies two defined boundaries on OCT skin images in order to measure ET. We applied CNN-GS-skin to OCT skin images taken from various body sites of 63 healthy individuals. Compared with CNN-GS, our method reduced computation time by a factor of 130, with minimal reduction in ET determination accuracy (from 96.38% to 94.67%).
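In CNN-GS-style layer segmentation, a CNN scores each pixel and a graph search then extracts a continuous boundary. The graph-search half can be sketched as a column-wise dynamic program over a cost image; here the cost values are illustrative stand-ins for the CNN's boundary-probability map, and the smoothness constraint is our assumption:

```python
import numpy as np

def trace_boundary(cost, max_jump=1):
    """Minimum-cost left-to-right path through a cost image, one row per
    column, with |row change| <= max_jump between neighbouring columns."""
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated path cost
    back = np.zeros((h, w), dtype=int)       # best predecessor row
    for c in range(1, w):
        for r in range(h):
            lo, hi = max(0, r - max_jump), min(h, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(w - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]
```

The paper's pixel-skipping speed-up would, roughly, evaluate the expensive CNN only on a subset of columns and interpolate the boundary between them; ET then follows as the row distance between the two traced boundaries times the axial pixel size.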
Affiliation(s)
- Chieh-Hsi Lin: Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, USA
- Brandon E Lukas: Richard and Loan Hill Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Ali Rajabi-Estarabadi: Dr. Phillip Frost Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, FL 33136, USA; Department of Dermatology, Broward Health Medical Center, Fort Lauderdale, FL, USA
- Julia Rome May: University of Illinois College of Medicine, Chicago, IL 60607, USA
- Yanzhen Pang: University of Illinois College of Medicine, Chicago, IL 60607, USA
- Carolina Puyana: Department of Dermatology, University of Illinois at Chicago, Chicago, IL 60607, USA
- Maria Tsoukas: Department of Dermatology, University of Illinois at Chicago, Chicago, IL 60607, USA
- Kamran Avanaki: Richard and Loan Hill Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA; Department of Dermatology, University of Illinois at Chicago, Chicago, IL 60607, USA
5. Huang ST, Liu LR, Chiu HW, Huang MY, Tsai MF. Deep convolutional neural network for rib fracture recognition on chest radiographs. Front Med (Lausanne) 2023;10:1178798. PMID: 37593404; PMCID: PMC10427862; DOI: 10.3389/fmed.2023.1178798.
Abstract
Introduction Rib fractures are a prevalent injury among trauma patients, and accurate and timely diagnosis is crucial to mitigate associated risks. Unfortunately, missed rib fractures are common, leading to heightened morbidity and mortality rates. While more sensitive imaging modalities exist, their practicality is limited by cost and radiation exposure. Point-of-care ultrasound offers an alternative but has drawbacks in terms of procedural time and operator expertise. Therefore, this study explores the potential of deep convolutional neural networks (DCNNs) for identifying rib fractures on chest radiographs. Methods We assembled a comprehensive retrospective dataset of chest radiographs with formal image reports documenting rib fractures from a single medical center over the last five years. The DCNN models were trained using 2,000 region-of-interest (ROI) slices for each category: fractured ribs, non-fractured ribs, and background regions. To optimize training of the deep learning models, the images were segmented into 128 × 128 pixel patches. Results The trained DCNN models demonstrated remarkable validation accuracies: AlexNet achieved 92.6%, GoogLeNet 92.2%, EfficientNetb3 92.3%, DenseNet201 92.4%, and MobileNetV2 91.2%. Discussion By integrating DCNN models capable of rib fracture recognition into clinical decision support systems, the incidence of missed rib fracture diagnoses can be significantly reduced, resulting in tangible decreases in morbidity and mortality among trauma patients. The use of DCNNs for rib fracture detection on chest radiographs addresses the limitations of other imaging modalities, offering a promising and practical solution to improve patient care and management.
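Training on 128 × 128 ROI slices presupposes a patch-extraction step. A minimal sketch of that step; the function name and the zero-padding behaviour at image borders are our assumptions, not details from the paper:

```python
import numpy as np

def extract_roi(image, center, size=128):
    """Crop a size x size patch centred on `center` (row, col),
    zero-padding where the patch extends past the image border."""
    r, c = center
    half = size // 2
    patch = np.zeros((size, size), dtype=image.dtype)
    r0, c0 = r - half, c - half                      # patch origin in image coords
    rs, cs = max(0, r0), max(0, c0)                  # clipped source window
    re = min(image.shape[0], r0 + size)
    ce = min(image.shape[1], c0 + size)
    patch[rs - r0:re - r0, cs - c0:ce - c0] = image[rs:re, cs:ce]
    return patch
```

Each ROI patch would then be labelled (fractured rib, non-fractured rib, or background) and fed to the classifier.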
Affiliation(s)
- Shu-Tien Huang: Department of Emergency Medicine, Mackay Memorial Hospital, Taipei, Taiwan; Department of Medicine, Mackay Medical College, New Taipei City, Taiwan; Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Liong-Rung Liu: Department of Emergency Medicine, Mackay Memorial Hospital, Taipei, Taiwan; Department of Medicine, Mackay Medical College, New Taipei City, Taiwan; Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Hung-Wen Chiu: Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan; Big Data Research Center, College of Management, Taipei Medical University, Taipei, Taiwan
- Ming-Yuan Huang: Department of Emergency Medicine, Mackay Memorial Hospital, Taipei, Taiwan; Department of Medicine, Mackay Medical College, New Taipei City, Taiwan
- Ming-Feng Tsai: Department of Medicine, Mackay Medical College, New Taipei City, Taiwan; Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan; Division of Plastic Surgery, Department of Surgery, Mackay Memorial Hospital, Taipei, Taiwan
6. Ishtiaq U, Abdullah ERMF, Ishtiaque Z. A Hybrid Technique for Diabetic Retinopathy Detection Based on Ensemble-Optimized CNN and Texture Features. Diagnostics (Basel) 2023;13:1816. PMID: 37238304; DOI: 10.3390/diagnostics13101816.
Abstract
One of the most prevalent chronic conditions that can result in permanent vision loss is diabetic retinopathy (DR). Diabetic retinopathy occurs in five stages: no DR, mild, moderate, severe, and proliferative DR. The early detection of DR is essential for preventing vision loss in diabetic patients. In this paper, we propose a method for the detection and classification of DR stages to determine whether patients are in any of the non-proliferative stages or in the proliferative stage. The proposed classification method is founded on a hybrid approach of image preprocessing and ensemble features. We created a convolutional neural network (CNN) model from scratch for this study. Combining Local Binary Patterns (LBP) and deep learning features produced the ensemble feature vector, which was then optimized using the Binary Dragonfly Algorithm (BDA) and the Sine Cosine Algorithm (SCA). This optimized feature vector was fed to machine learning classifiers, of which the SVM classifier achieved the highest classification accuracy, 98.85%, on a publicly available dataset (Kaggle EyePACS). Rigorous testing and comparisons with state-of-the-art approaches in the literature indicate the effectiveness of the proposed methodology.
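The texture half of such an ensemble can be sketched as an LBP histogram concatenated with a deep-feature vector. This is a simplified basic 8-neighbour LBP, not necessarily the exact variant the paper uses, and it omits the BDA/SCA feature-selection step entirely:

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern for the interior pixels:
    each neighbour >= centre contributes one bit to an 8-bit code."""
    p = image.astype(int)
    center = p[1:-1, 1:-1]
    code = np.zeros_like(center)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(shifts):
        nb = p[1 + dr:p.shape[0] - 1 + dr, 1 + dc:p.shape[1] - 1 + dc]
        code |= (nb >= center).astype(int) << bit
    return code

def fused_features(image, deep_vec):
    """Concatenate a normalised 256-bin LBP histogram with CNN features."""
    hist = np.bincount(lbp_8(image).ravel(), minlength=256)
    hist = hist / hist.sum()
    return np.concatenate([hist, deep_vec])
```

The fused vector would then be passed to an SVM (or another classical classifier) for stage classification.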
Affiliation(s)
- Uzair Ishtiaq: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia; Department of Computer Science, COMSATS University Islamabad, Vehari Campus, Vehari 61100, Pakistan
- Erma Rahayu Mohd Faizal Abdullah: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Zubair Ishtiaque: Department of Analytical, Biopharmaceutical and Medical Sciences, Atlantic Technological University, H91 T8NW Galway, Ireland
7. Detecting red-lesions from retinal fundus images using unique morphological features. Sci Rep 2023;13:3487. PMID: 36859429; PMCID: PMC9977778; DOI: 10.1038/s41598-023-30459-5.
Abstract
One of the most important retinal diseases is Diabetic Retinopathy (DR), which can lead to serious damage to vision if it remains untreated. Red lesions are among the important manifestations of DR, helping its identification in early stages. Detecting and verifying them is helpful in evaluating disease severity and progression. In this paper, a novel image processing method is proposed for extracting red lesions from fundus images. The method works by finding and extracting the unique morphological features of red lesions. After image quality improvement, a pixel-based verification is performed to find pixels that exhibit a significant intensity change across a curve-like neighborhood. To do so, a curve is considered around each pixel and the intensity changes across the curve boundary are examined. Pixels for which such curves can be found in at least two directions are considered parts of red lesions. The simplicity of its computations, the high accuracy of its results, and the absence of any post-processing step are important characteristics endorsing the proposed method's good performance.
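The core intuition (a red lesion is a small region darker than a surrounding curve) can be sketched with a simpler test: flag pixels significantly darker than the mean intensity on a ring around them. This is a deliberately simplified stand-in for the paper's directional curve-based verification; the radius and threshold are illustrative:

```python
import numpy as np

def ring_offsets(radius):
    """Pixel offsets whose rounded distance from the origin equals radius."""
    offs = []
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if round((dr * dr + dc * dc) ** 0.5) == radius:
                offs.append((dr, dc))
    return offs

def red_lesion_candidates(img, radius=2, drop=0.3):
    """Flag pixels darker than the surrounding ring by more than `drop`
    (on dark-lesion-on-bright-background, intensity-normalised images)."""
    h, w = img.shape
    offs = ring_offsets(radius)
    mask = np.zeros((h, w), dtype=bool)
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            ring = np.mean([img[r + dr, c + dc] for dr, dc in offs])
            mask[r, c] = ring - img[r, c] > drop
    return mask
```

The paper additionally requires the intensity change to hold along curves in at least two directions, which suppresses elongated dark structures such as vessels; this single-ring version does not make that distinction.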
8. Pan Y, Liu J, Cai Y, Yang X, Zhang Z, Long H, Zhao K, Yu X, Zeng C, Duan J, Xiao P, Li J, Cai F, Yang X, Tan Z. Fundus image classification using Inception V3 and ResNet-50 for the early diagnostics of fundus diseases. Front Physiol 2023;14:1126780. PMID: 36875027; PMCID: PMC9975334; DOI: 10.3389/fphys.2023.1126780.
Abstract
Purpose: We aim to present effective computer-aided diagnostics in the field of ophthalmology and improve eye health. This study aims to create an automated deep learning-based system for categorizing fundus images into three classes (normal, macular degeneration, and tessellated fundus) for the timely recognition and treatment of diabetic retinopathy and other fundus diseases. Methods: A total of 1,032 fundus images were collected from 516 patients using a fundus camera at the Health Management Center, Shenzhen University General Hospital, Shenzhen, Guangdong, China. Inception V3 and ResNet-50 deep learning models were then used to classify the fundus images into the three classes. Results: The experimental results show that recognition performance is best when Adam is used as the optimizer, the number of iterations is 150, and the learning rate is 0.00. With our proposed approach, we achieved the highest accuracies of 93.81% and 91.76% using ResNet-50 and Inception V3, respectively, after fine-tuning and adjusting the hyperparameters for our classification problem. Conclusion: Our research provides a reference for the clinical diagnosis and screening of diabetic retinopathy and other eye diseases. The suggested computer-aided diagnostics framework will help prevent incorrect diagnoses caused by low image quality, differences in individual experience, and other factors. In future implementations, more advanced learning algorithms can be used to improve diagnostic accuracy.
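Fine-tuning a pretrained backbone, in the data-scarce limit, often reduces to training a fresh softmax head on frozen features. A numpy sketch of that final-layer step follows; the actual study fine-tunes full Inception V3 / ResNet-50 networks with a deep-learning framework, and the toy features here are placeholders for real backbone embeddings:

```python
import numpy as np

def train_head(feats, labels, n_classes, lr=0.5, steps=200):
    """Gradient descent on cross-entropy for a softmax head only,
    treating `feats` as frozen backbone features."""
    W = np.zeros((feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]                 # one-hot targets
    for _ in range(steps):
        z = feats @ W + b
        z -= z.max(axis=1, keepdims=True)         # numerically stable softmax
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        g = (p - Y) / len(feats)                  # gradient of mean cross-entropy
        W -= lr * feats.T @ g
        b -= lr * g.sum(axis=0)
    return W, b

def predict(feats, W, b):
    return np.argmax(feats @ W + b, axis=1)
```

For the three-class problem in the paper, `n_classes=3` and `feats` would be the 2048-dimensional pooled features of ResNet-50 or Inception V3.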
Affiliation(s)
- Yuhang Pan: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Junru Liu: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Yuting Cai: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Xuemei Yang: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Zhucheng Zhang: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Hong Long: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Ketong Zhao: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Xia Yu: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Cui Zeng: General Practice Alliance, Shenzhen, Guangdong, China; University Town East Community Health Service Center, Shenzhen, Guangdong, China
- Jueni Duan: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Ping Xiao: Department of Otorhinolaryngology Head and Neck Surgery, Shenzhen Children's Hospital, Shenzhen, Guangdong, China
- Jingbo Li: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Feiyue Cai: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China; General Practice Alliance, Shenzhen, Guangdong, China
- Xiaoyun Yang: Ophthalmology Department, Shenzhen OCT Hospital, Shenzhen, Guangdong, China
- Zhen Tan: Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China; General Practice Alliance, Shenzhen, Guangdong, China
9. CLC-Net: Contextual and Local Collaborative Network for Lesion Segmentation in Diabetic Retinopathy Images. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.013.
10. Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022;151:106277. PMID: 36370579; DOI: 10.1016/j.compbiomed.2022.106277.
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases that include diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error, in comparison to computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan: School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Khuram Naveed: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
11. Comparing Conventional and Deep Feature Models for Classifying Fundus Photography of Hemorrhages. J Healthc Eng 2022;2022:7387174. DOI: 10.1155/2022/7387174.
Abstract
Diabetic retinopathy is an eye-related pathology that creates abnormalities and causes visual impairment, proper treatment of which requires identifying the irregularities. This research uses a hemorrhage detection method and compares classification with conventional and deep features. In particular, the method identifies hemorrhages connected with blood vessels or residing at the retinal border, cases that have been reported as challenging. Initially, adaptive brightness adjustment and contrast enhancement rectify degraded images. Prospective locations of hemorrhages are estimated by a Gaussian matched filter, entropy thresholding, and morphological operations. Hemorrhages are segmented by a novel technique based on the regional variance of intensities. Features are then extracted by conventional methods and deep models for training support vector machines, and the results are evaluated. Evaluation metrics for each model are promising, but the findings suggest that deep models are comparatively more effective than conventional features.
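The candidate-detection stage above pairs a Gaussian matched filter with entropy thresholding. The matched-filter half can be sketched as correlation with a zero-mean Gaussian kernel; kernel size and sigma below are illustrative, and since hemorrhages are dark blobs, in practice the intensity image would be inverted first:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Zero-mean Gaussian kernel, as used in matched filtering so that
    flat image regions produce zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k - k.mean()

def matched_filter_response(image, kernel):
    """Valid-mode cross-correlation of the image with the kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out
```

Thresholding this response map (the paper uses an entropy-based threshold) and applying morphological clean-up yields the hemorrhage candidates.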
12. Srinivasan V, Strodthoff N, Ma J, Binder A, Müller KR, Samek W. To pretrain or not? A systematic analysis of the benefits of pretraining in diabetic retinopathy. PLoS One 2022;17:e0274291. PMID: 36256665; PMCID: PMC9578637; DOI: 10.1371/journal.pone.0274291.
Abstract
There is an increasing number of medical use cases where classification algorithms based on deep neural networks reach performance levels that are competitive with human medical experts. To alleviate the challenges of small dataset sizes, these systems often rely on pretraining. In this work, we aim to assess the broader implications of these approaches in order to better understand what type of pretraining works reliably in practice (with respect to performance, robustness, learned representation, etc.) and what type of pretraining dataset is best suited to achieving good performance in small target dataset size scenarios. Considering diabetic retinopathy grading as an exemplary use case, we compare the impact of different training procedures, including recently established self-supervised pretraining methods based on contrastive learning. To this end, we investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability, and robustness to image distortions. Our results indicate that models initialized from ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions. In particular, self-supervised models show further benefits over supervised models: self-supervised models initialized from ImageNet pretraining not only report higher performance, they also reduce overfitting to large lesions while better taking into account the minute lesions indicative of disease progression. Understanding the effects of pretraining in a broader sense that goes beyond simple performance comparisons is of crucial importance for the broader medical imaging community beyond the use case considered in this work.
Affiliation(s)
- Vignesh Srinivasan: Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Nils Strodthoff: School of Medicine and Health Services, Oldenburg University, Oldenburg, Germany
- Jackie Ma: Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Alexander Binder: Singapore Institute of Technology, ICT Cluster, Singapore, Singapore; Department of Informatics, Oslo University, Oslo, Norway
- Klaus-Robert Müller: BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany; Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany; Department of Artificial Intelligence, Korea University, Seoul, South Korea; Max Planck Institute for Informatics, Saarbrücken, Germany
- Wojciech Samek: Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany; BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany; Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
13
Ferro Desideri L, Rutigliani C, Corazza P, Nastasi A, Roda M, Nicolo M, Traverso CE, Vagge A. The upcoming role of Artificial Intelligence (AI) for retinal and glaucomatous diseases. J Optom 2022; 15 Suppl 1:S50-S57. [PMID: 36216736 PMCID: PMC9732476 DOI: 10.1016/j.optom.2022.08.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/14/2022] [Accepted: 08/16/2022] [Indexed: 06/16/2023]
Abstract
In recent years, the role of artificial intelligence (AI) and deep learning (DL) models has been attracting increasing global interest in the field of ophthalmology. DL models are considered the current state of the art among AI technologies. In fact, DL systems have the capability to recognize, quantify, and describe pathological clinical features. Their role is currently being investigated for the early diagnosis and management of several retinal diseases and glaucoma. The application of DL models to fundus photographs, visual fields, and optical coherence tomography (OCT) imaging has provided promising results in the early detection of diabetic retinopathy (DR), wet age-related macular degeneration (w-AMD), retinopathy of prematurity (ROP), and glaucoma. In this review, we analyze the current evidence for AI applied to these ocular diseases and discuss possible future developments and potential clinical implications, without neglecting the present limitations and challenges that must be addressed before AI and DL models can be adopted as powerful tools in everyday routine clinical practice.
Affiliation(s)
- Lorenzo Ferro Desideri
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Paolo Corazza
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Matilde Roda
- Ophthalmology Unit, Department of Experimental, Diagnostic and Specialty Medicine (DIMES), Alma Mater Studiorum University of Bologna and S.Orsola-Malpighi Teaching Hospital, Bologna, Italy
- Massimo Nicolo
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Carlo Enrico Traverso
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Aldo Vagge
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
14
Sun K, He M, He Z, Liu H, Pi X. EfficientNet embedded with spatial attention for recognition of multi-label fundus disease from color fundus photographs. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
15
AI-Based Automatic Detection and Classification of Diabetic Retinopathy Using U-Net and Deep Learning. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071427] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Artificial intelligence is widely applied to automate diabetic retinopathy (DR) diagnosis. Diabetes-related retinal vascular disease is one of the world's most common causes of blindness and vision impairment. Automated DR detection systems would therefore greatly benefit early screening and treatment and help prevent the vision loss DR causes. Researchers have proposed several systems to detect abnormalities in retinal images in the past few years. However, automatic DR detection methods have traditionally been based on hand-crafted feature extraction from the retinal images followed by a classifier to obtain the final classification. Deep neural networks (DNNs) have been introduced in recent years to help overcome this limitation. In this research, we propose a novel two-stage approach for automated DR classification. Because of the low fraction of positive instances in the asymmetric optic disc (OD) and blood vessel (BV) detection task, preprocessing and data augmentation techniques are used to enhance image quality and quantity. The first stage uses two independent U-Net models for OD and BV segmentation. In the second stage, after preprocessing, a symmetric hybrid CNN-SVD model extracts and selects the most discriminant features, following OD and BV extraction with Inception-V3 based on transfer learning, and detects DR by recognizing retinal biomarkers such as microaneurysms (MAs), hemorrhages (HMs), and exudates (EXs). On EyePACS-1, Messidor-2, and DIARETDB0, the proposed methodology demonstrated state-of-the-art performance, with average accuracies of 97.92%, 94.59%, and 93.52%, respectively. Extensive testing and comparisons with baseline approaches indicate the efficacy of the suggested methodology.
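Segmentation stages like the U-Net step described here are commonly scored with an overlap measure such as the Dice coefficient (the abstract itself reports only accuracy, so this is an illustrative sketch with a hypothetical helper name, not the paper's code):

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice overlap between two binary masks given as flat 0/1 lists.
    A standard way to score the kind of optic-disc / blood-vessel
    segmentation produced by a U-Net stage."""
    assert len(pred_mask) == len(true_mask)
    intersection = sum(p & t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    # Two empty masks agree perfectly by convention.
    return 1.0 if total == 0 else 2.0 * intersection / total
```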
16
Miao J, Yu J, Zou W, Su N, Peng Z, Wu X, Huang J, Fang Y, Yuan S, Xie P, Huang K, Chen Q, Hu Z, Liu Q. Deep Learning Models for Segmenting Non-perfusion Area of Color Fundus Photographs in Patients With Branch Retinal Vein Occlusion. Front Med (Lausanne) 2022; 9:794045. [PMID: 35847781 PMCID: PMC9279621 DOI: 10.3389/fmed.2022.794045] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 05/30/2022] [Indexed: 11/17/2022] Open
Abstract
Purpose: To develop artificial intelligence (AI)-based deep learning (DL) models for automatically detecting the ischemia type and the non-perfusion area (NPA) from color fundus photographs (CFPs) of patients with branch retinal vein occlusion (BRVO). Methods: This was a retrospective analysis of 274 CFPs from patients diagnosed with BRVO. All DL models were trained using a deep convolutional neural network (CNN) on 45-degree CFPs covering the fovea and the optic disc. We first trained a DL algorithm to identify BRVO patients with or without the need for retinal photocoagulation from 219 CFPs and validated the algorithm on 55 CFPs. Next, we trained another DL algorithm to segment NPA from 104 CFPs and validated it on 29 CFPs, in which the NPA was manually delineated by 3 experienced ophthalmologists according to fundus fluorescein angiography. Both DL models were cross-validated 5-fold. Recall, precision, accuracy, and area under the curve (AUC) were used to evaluate the DL models in comparison with independent ophthalmologists of three levels of seniority. Results: For the first DL model, the recall, precision, accuracy, and AUC were 0.75 ± 0.08, 0.80 ± 0.07, 0.79 ± 0.02, and 0.82 ± 0.03, respectively, for predicting the necessity of laser photocoagulation from BRVO CFPs. The second DL model was able to segment NPA in CFPs of BRVO with an AUC of 0.96 ± 0.02; the recall, precision, and accuracy for segmenting NPA were 0.74 ± 0.05, 0.87 ± 0.02, and 0.89 ± 0.02, respectively. The performance of the second DL model was nearly comparable with that of the senior doctors and significantly better than that of the residents. Conclusion: These results indicate that DL models can directly identify and segment retinal NPA from the CFPs of patients with BRVO, which can further guide laser photocoagulation. Further research is needed to identify NPA of the peripheral retina in BRVO or in other diseases, such as diabetic retinopathy.
Affiliation(s)
- Jinxin Miao
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Jiale Yu
- School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Wenjun Zou
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Department of Ophthalmology, The Affiliated Wuxi No.2 People's Hospital of Nanjing Medical University, Wuxi, China
- Na Su
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Zongyi Peng
- The First School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Xinjing Wu
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Junlong Huang
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yuan Fang
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Ping Xie
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Kun Huang
- School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Zizhong Hu
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qinghuai Liu
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
17
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. [PMID: 35888063 PMCID: PMC9321111 DOI: 10.3390/life12070973] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 11/22/2022]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. This survey makes clear that all channels together are typically used for neural network-based systems, whereas for non-neural-network-based systems the green channel is most commonly used. However, no conclusion can be drawn from previous works regarding the relative importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
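The channel-splitting operation at the heart of this comparison is simple; as a minimal sketch (hypothetical helper name, operating on a flat list of RGB pixel tuples rather than a real image library):

```python
def split_channels(rgb_pixels):
    """Split a list of (R, G, B) tuples into three single-channel lists.
    Classical (non-neural) retinal pipelines often keep only the green
    channel, where vessels and lesions tend to show the highest contrast."""
    reds   = [r for r, _, _ in rgb_pixels]
    greens = [g for _, g, _ in rgb_pixels]
    blues  = [b for _, _, b in rgb_pixels]
    return reds, greens, blues
```

In practice one would do this with an image library's channel-split routine; the point is only that each channel can then be fed to the model independently to measure its diagnostic value.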
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
18
Oltu B, Karaca BK, Erdem H, Özgür A. A systematic review of transfer learning-based approaches for diabetic retinopathy detection. Gazi University Journal of Science 2022. [DOI: 10.35378/gujs.1081546] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Cases of diabetes and related diabetic retinopathy (DR) have been increasing at an alarming rate in modern times. Early detection of DR is an important problem, since DR may cause permanent blindness in its late stages. In the last two decades, many different approaches have been applied to DR detection. A review of the academic literature shows that deep neural networks (DNNs) have become the most preferred approach for DR detection. Among these DNN approaches, convolutional neural network (CNN) models are the most used in the field of medical image classification. Designing a new CNN architecture is a tedious and time-consuming task, and training an enormous number of parameters is also difficult. For this reason, instead of training CNNs from scratch, using pre-trained models has been suggested in recent years as a transfer learning approach. Accordingly, the present review focuses on DNN- and transfer-learning-based applications for DR detection, considering 43 publications between 2015 and 2021. The published papers are summarized using 3 figures and 10 tables, giving information about 29 pre-trained CNN models, 13 DR datasets, and standard performance metrics.
Affiliation(s)
- Burcu Oltu
- Faculty of Engineering, Başkent University
19
Andersen JKH, Hubel MS, Rasmussen ML, Grauslund J, Savarimuthu TR. Automatic Detection of Abnormalities and Grading of Diabetic Retinopathy in 6-Field Retinal Images: Integration of Segmentation Into Classification. Transl Vis Sci Technol 2022; 11:19. [PMID: 35731541 PMCID: PMC9233290 DOI: 10.1167/tvst.11.6.19] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose: Classification of diabetic retinopathy (DR) is traditionally based on severity grading, given by the most advanced lesion, but this potentially leaves out information relevant for risk stratification. In this study, we aimed to develop a deep learning model able to individually segment seven different DR lesions, in order to test whether this would improve a subsequently developed classification model. Methods: First, manual segmentation of 34,075 different DR lesions was used to construct a segmentation model, whose performance was subsequently compared with that of another retinal specialist. Second, we constructed a 5-step classification model using a dataset of 31,325 expert-annotated retinal 6-field images and evaluated whether performance improved with the integration of presegmentation given by the segmentation model. Results: The segmentation model had higher average sensitivity across all abnormalities than the retinal expert (0.68 and 0.62) at a comparable average F1-score (0.60 and 0.62). Model sensitivity for microaneurysms, retinal hemorrhages, and intraretinal microvascular abnormalities was higher by 42.5%, 8.8%, and 67.5%, and F1-scores by 15.8%, 6.5%, and 12.5%, respectively. When presegmentation was included, grading performance increased by 29.7%, 6.0%, and 4.5% for average per-class accuracy, quadratic weighted kappa, and multiclass macro area under the curve, reaching values of 70.4%, 0.90, and 0.92, respectively. Conclusions: The segmentation model matched an expert in detecting retinal abnormalities, and presegmentation substantially improved the accuracy of the automated classification model. Translational Relevance: Presegmentation may yield more accurate automated DR grading models and increase interpretability and trust in model decisions.
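Quadratic weighted kappa, the agreement statistic reported for the 5-step grading model, can be computed directly from two lists of integer grades. A compact sketch under the usual definition (function name hypothetical, not the authors' code):

```python
def quadratic_weighted_kappa(rater_a, rater_b, num_classes):
    """Quadratic weighted kappa between two equal-length lists of integer
    grades in [0, num_classes). Disagreements are penalized by the squared
    distance between grades, normalized by chance agreement."""
    n = len(rater_a)
    # Observed confusion matrix and marginal histograms.
    observed = [[0] * num_classes for _ in range(num_classes)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1
    hist_a = [rater_a.count(k) for k in range(num_classes)]
    hist_b = [rater_b.count(k) for k in range(num_classes)]
    num = den = 0.0
    for i in range(num_classes):
        for j in range(num_classes):
            w = (i - j) ** 2 / (num_classes - 1) ** 2  # quadratic weight
            num += w * observed[i][j]
            den += w * hist_a[i] * hist_b[j] / n  # expected under chance
    return 1.0 - num / den
```

Perfect agreement yields 1.0; agreement at chance level yields 0.0.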
Affiliation(s)
- Jakob K H Andersen
- The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark; Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Martin S Hubel
- The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark
- Malin L Rasmussen
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Thiusius R Savarimuthu
- The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark
20
Juneja D, Gupta A, Singh O. Artificial intelligence in critically ill diabetic patients: current status and future prospects. Artif Intell Gastroenterol 2022; 3:66-79. [DOI: 10.35712/aig.v3.i2.66] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 04/21/2022] [Accepted: 04/28/2022] [Indexed: 02/06/2023] Open
21
Wang TY, Chen YH, Chen JT, Liu JT, Wu PY, Chang SY, Lee YW, Su KC, Chen CL. Diabetic Macular Edema Detection Using End-to-End Deep Fusion Model and Anatomical Landmark Visualization on an Edge Computing Device. Front Med (Lausanne) 2022; 9:851644. [PMID: 35445051 PMCID: PMC9014123 DOI: 10.3389/fmed.2022.851644] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 03/14/2022] [Indexed: 11/23/2022] Open
Abstract
Purpose: Diabetic macular edema (DME) is a common cause of vision impairment and blindness in patients with diabetes, but vision loss can be prevented by regular eye examinations during primary care. This study aimed to design an artificial intelligence (AI) system to facilitate ophthalmology referrals by physicians. Methods: We developed an end-to-end deep fusion model for DME classification and hard exudate (HE) detection. Based on the architecture of the fusion model, we also applied a dual model comprising an independent classifier and object detector to perform the two tasks separately. We used 35,001 annotated fundus images from three hospitals in Taiwan between 2007 and 2018 to create a private dataset. The private dataset, Messidor-1, and Messidor-2 were used to assess the performance of the fusion model for DME classification and HE detection. A second object detector was trained to identify anatomical landmarks (optic disc and macula). We integrated the fusion model and the anatomical landmark detector and evaluated their performance on an edge device, a device with limited compute resources. Results: For DME classification on our private testing dataset, Messidor-1, and Messidor-2, the areas under the receiver operating characteristic curve (AUC) for the fusion model were 98.1%, 95.2%, and 95.8%, the sensitivities were 96.4%, 88.7%, and 87.4%, the specificities were 90.1%, 90.2%, and 90.2%, and the accuracies were 90.8%, 90.0%, and 89.9%, respectively. In addition, the AUC was not significantly different between the fusion and dual models for the three datasets (p = 0.743, 0.942, and 0.114, respectively). For HE detection, the fusion model achieved a sensitivity of 79.5%, a specificity of 87.7%, and an accuracy of 86.3% on our private testing dataset; its sensitivity was higher than that of the dual model (p = 0.048). For optic disc and macula detection, the second object detector achieved accuracies of 98.4% (optic disc) and 99.3% (macula). The fusion model and the anatomical landmark detector can be deployed on a portable edge device. Conclusion: This portable AI system exhibited excellent performance in DME classification and in the visualization of HE and anatomical locations. It facilitates interpretability and can serve as a clinical reference for physicians. Clinically, this system could be applied to diabetic eye screening to improve the interpretation of fundus imaging in patients with DME.
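AUC values such as those reported above admit the Mann-Whitney interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A self-contained sketch of that computation (hypothetical function, not the authors' evaluation code):

```python
def auc(scores, labels):
    """Area under the ROC curve computed as the normalized Mann-Whitney U
    statistic over all positive/negative score pairs; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(n²) but makes the probabilistic meaning of AUC explicit; production code would sort scores instead.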
Affiliation(s)
- Ting-Yuan Wang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Yi-Hao Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Jiann-Torng Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Jung-Tzu Liu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Po-Yi Wu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Sung-Yen Chang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Ya-Wen Lee
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Kuo-Chen Su
- Department of Optometry, Chung Shan Medical University, Taichung, Taiwan
- Ching-Long Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
22
Das D, Biswas SK, Bandyopadhyay S. A critical review on diagnosis of diabetic retinopathy using machine learning and deep learning. Multimedia Tools and Applications 2022; 81:25613-25655. [PMID: 35342328 PMCID: PMC8940593 DOI: 10.1007/s11042-022-12642-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 06/29/2021] [Accepted: 02/09/2022] [Indexed: 06/12/2023]
Abstract
Diabetic retinopathy (DR) is a health condition caused by diabetes mellitus (DM). It causes vision problems and blindness due to disfigurement of the human retina. According to statistics, 80% of patients who have had diabetes for 15 to 20 years suffer from DR. Hence, it has become a dangerous threat to people's health and lives. Manual diagnosis of the disease is feasible but overwhelming and cumbersome, and hence a revolutionary method is required. Such a health condition necessitates early recognition and diagnosis to prevent DR from developing into severe stages and causing blindness. Innumerable machine learning (ML) models have been proposed by researchers across the globe to achieve this purpose, along with various feature extraction techniques for extracting DR features for early detection. However, traditional ML models have shown either meager generalization during feature extraction and classification when deployed on smaller datasets, or excessive training time causing inefficient prediction on larger datasets. Hence deep learning (DL), a new domain of ML, was introduced. DL models can handle smaller datasets with the help of efficient data processing techniques, although they generally require larger datasets for their deep architectures to enhance performance in feature extraction and image classification. This paper gives a detailed review of DR, its features, causes, ML models, state-of-the-art DL models, challenges, comparisons, and future directions for early detection of DR.
Affiliation(s)
- Dolly Das
- National Institute of Technology Silchar, Cachar, Assam, India
23
Joint DR-DME classification using deep learning-CNN based modified grey-wolf optimizer with variable weights. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103439] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
25
Detection of Diabetic Retinopathy (DR) Severity from Fundus Photographs: An Ensemble Approach Using Weighted Average. Arabian Journal for Science and Engineering 2022. [DOI: 10.1007/s13369-021-06381-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
26
Huang X, Wang H, She C, Feng J, Liu X, Hu X, Chen L, Tao Y. Artificial intelligence promotes the diagnosis and screening of diabetic retinopathy. Front Endocrinol (Lausanne) 2022; 13:946915. [PMID: 36246896 PMCID: PMC9559815 DOI: 10.3389/fendo.2022.946915] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 09/12/2022] [Indexed: 11/13/2022] Open
Abstract
Deep learning is a new form of machine learning technology classified under artificial intelligence (AI) that has substantial potential for large-scale healthcare screening and may allow the determination of the most appropriate specific treatment for individual patients. Recent developments in diagnostic technologies have facilitated studies on retinal conditions and ocular disease in metabolism and endocrinology. Globally, diabetic retinopathy (DR) is regarded as a major cause of vision loss. Deep learning systems are effective and accurate in the detection of DR from digital fundus photographs or optical coherence tomography. Thus, using AI techniques, systems with high accuracy and efficiency can be developed for diagnosing and screening DR at an early stage, without the resources that are only accessible in specialist clinics. Deep learning enables early diagnosis with high specificity and sensitivity, making decisions based on minimally handcrafted features and paving the way for real-time monitoring of personalized DR progression and timely ophthalmic or endocrine therapies. This review discusses cutting-edge AI algorithms, automated systems for DR stage grading and feature segmentation, the prediction of DR outcomes and therapeutics, and the ophthalmic indications of other systemic diseases revealed by AI.
Affiliation(s)
- Xuan Huang
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Medical Research Center, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Hui Wang
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Chongyang She
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jing Feng
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xuhui Liu
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xiaofeng Hu
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Li Chen
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Yong Tao
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
27
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Informatics in Medicine Unlocked 2022. [DOI: 10.1016/j.imu.2022.100911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
28
Chen PN, Lee CC, Liang CM, Pao SI, Huang KH, Lin KF. General deep learning model for detecting diabetic retinopathy. BMC Bioinformatics 2021; 22:84. [PMID: 34749634 PMCID: PMC8576963 DOI: 10.1186/s12859-021-04005-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 02/08/2021] [Indexed: 01/04/2023] Open
Abstract
BACKGROUND: Doctors can detect symptoms of diabetic retinopathy (DR) early by using retinal ophthalmoscopy, and they can improve diagnostic efficiency with the assistance of deep learning to select treatments and support personnel workflow. Conventionally, most deep learning methods for DR diagnosis categorize retinal ophthalmoscopy images into training and validation datasets according to the 80/20 rule and use the synthetic minority oversampling technique (SMOTE) in data processing (e.g., rotating, scaling, and translating training images) to increase the number of training samples. Oversampling during training may, however, lead to overfitting of the training model, so that untrained or unverified images yield erroneous predictions. Although the accuracy of prediction results reaches 90%-99%, this overfitting of training data may distort training module variables. RESULTS: This study uses a 2-stage training method to solve the overfitting problem. In the training phase, Learning Module 1 was built to distinguish DR from no-DR, and Learning Module 2 was trained on SMOTE-synthesized datasets to classify mild NPDR, moderate NPDR, severe NPDR, and proliferative DR. Both modules also used early stopping and data-dividing methods to reduce overfitting from oversampling. In the test phase, we used the DIARETDB0, DIARETDB1, eOphtha, MESSIDOR, and DRIVE datasets to evaluate the performance of the trained network; the prediction accuracies reached 85.38%, 84.27%, 85.75%, 86.73%, and 92.5%, respectively. CONCLUSIONS: Based on these experiments, a general deep learning model for detecting DR was developed that can be used with all DR databases. We provide a simple method of addressing the imbalance of DR databases, and this method can be used with other medical images.
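SMOTE, mentioned above, synthesizes minority-class samples by interpolating between a real sample and one of its nearest neighbours. A minimal stdlib sketch of the idea (hypothetical helper, not the study's implementation, which uses SMOTE alongside image augmentation):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Create `n_new` synthetic samples from `minority`, a list of
    equal-length numeric feature vectors (at least 2 of them): pick a base
    sample, find its k nearest neighbours by squared Euclidean distance,
    and interpolate a random fraction of the way toward one of them."""
    rng = random.Random(seed)

    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: sqdist(base, s))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append([x + gap * (y - x) for x, y in zip(base, nb)])
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, oversampling this way stays inside the minority region instead of duplicating points exactly, which is what makes the subsequent overfitting discussion in the abstract relevant.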
Collapse
Affiliation(s)
- Ping-Nan Chen: Department of Biomedical Engineering, National Defense Medical Center, Taipei 114, Taiwan, ROC
- Chia-Chiang Lee: Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106, Taiwan, ROC
- Chang-Min Liang: Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan, ROC
- Shu-I Pao: Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan, ROC
- Ke-Hao Huang: Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan, ROC
- Ke-Feng Lin: Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106, Taiwan, ROC; Department of Medical Records, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan, ROC
29
Yasin S, Iqbal N, Ali T, Draz U, Alqahtani A, Irfan M, Rehman A, Glowacz A, Alqhtani S, Proniewska K, Brumercik F, Wzorek L. Severity Grading and Early Retinopathy Lesion Detection through Hybrid Inception-ResNet Architecture. SENSORS 2021; 21:s21206933. [PMID: 34696146 PMCID: PMC8537739 DOI: 10.3390/s21206933] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 10/06/2021] [Accepted: 10/10/2021] [Indexed: 12/14/2022]
Abstract
Diabetic retinopathy (DR) is a diabetes disorder that disturbs human vision. It starts with damage to the light-sensitive tissue and blood vessels of the retina. In the beginning, DR may show no symptoms or only slight vision issues, but in the long run it can cause permanent vision impairment, i.e., blindness, in advanced as well as developing nations. This could be prevented if DR were identified early enough, but that is challenging because the disease frequently shows few signs until it is too late to deliver an effective cure. In our work, we propose a framework for severity grading and early DR detection using a hybrid deep learning Inception-ResNet architecture with smart data preprocessing. Our proposed method consists of three steps. First, the retinal images are preprocessed with augmentation and intensity normalization. Second, the preprocessed images are fed to the hybrid Inception-ResNet architecture to extract image feature vectors for the categorization of the different stages. Lastly, a classification step identifies DR and decides its stage (e.g., mild DR, moderate DR, severe DR, or proliferative DR). Our studies and trials revealed suitable outcomes when compared with previously deployed approaches. However, our study has specific constraints, which we discuss along with suggested ways to enhance further research in this field.
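The first step above (augmentation plus intensity normalization) can be sketched generically as follows; this is an illustrative stand-in rather than the paper's exact pipeline, and the tiny 4x4 "image" is an assumption for demonstration:

```python
import numpy as np

def normalize_intensity(img):
    """Min-max normalise a retinal image to the [0, 1] range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def augment(img):
    """Simple label-preserving augmentations: the four 90-degree
    rotations and their horizontal flips (8 views per image)."""
    views = [img, np.rot90(img), np.rot90(img, 2), np.rot90(img, 3)]
    views += [np.fliplr(v) for v in views]
    return views

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy stand-in for a fundus image
norm = normalize_intensity(img)
batch = augment(norm)   # 8 augmented views fed to the network
```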
Affiliation(s)
- Sana Yasin: Faculty of Computing, University of Okara, Okara 56141, Pakistan
- Nasrullah Iqbal: Faculty of Computing, University of Okara, Okara 56141, Pakistan
- Tariq Ali: Department of Computer Science, COMSATS University Islamabad (CUI), Sahiwal Campus, Sahiwal 57000, Pakistan
- Umar Draz (corresponding author): Department of Computer Science, University of Sahiwal, Sahiwal 57000, Pakistan; Computer Science Department, CUI, Lahore Campus, Lahore 54000, Pakistan
- Ali Alqahtani: College of Computer Science and Information Systems, Najran University, Najran 11001, Saudi Arabia
- Muhammad Irfan: Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Abdul Rehman: IT Department, Superior University, Lahore 120000, Pakistan
- Adam Glowacz: Department of Automatic Control and Robotics, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Krakow, Poland
- Samar Alqhtani: Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Klaudia Proniewska: Department of Bioinformatics and Telemedicine, Jagiellonian University Medical College, Anny 12, 31-008 Krakow, Poland
- Frantisek Brumercik: Department of Design and Machine Elements, Faculty of Mechanical Engineering, University of Zilina, Univerzitna 1, 010 26 Zilina, Slovakia
- Lukasz Wzorek: Wzorek. Systems, ul. Kapelanka 10/18, 30-347 Krakow, Poland
30
Martinez-Murcia FJ, Ortiz A, Ramírez J, Górriz JM, Cruz R. Deep residual transfer learning for automatic diagnosis and grading of diabetic retinopathy. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.148] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
31
Lakshminarayanan V, Kheradfallah H, Sarkar A, Jothi Balaji J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J Imaging 2021; 7:165. [PMID: 34460801 PMCID: PMC8468161 DOI: 10.3390/jimaging7090165] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/23/2021] [Accepted: 08/24/2021] [Indexed: 12/16/2022] Open
Abstract
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Next, Deep-learning (DL)-/machine-learning (ML)-based approaches make it possible to extract features from the images and to detect the presence of DR, grade its severity and segment associated lesions. This review covers the literature dealing with AI approaches to DR such as ML and DL in classification and segmentation that have been published in the open literature within six years (2016-2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles which conformed to the scope of the review. In addition, a list of 43 major datasets is presented.
Affiliation(s)
- Vasudevan Lakshminarayanan: Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Hoda Kheradfallah: Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Arya Sarkar: Department of Computer Engineering, University of Engineering and Management, Kolkata 700 156, India
32
Kurilová V, Goga J, Oravec M, Pavlovičová J, Kajan S. Support vector machine and deep-learning object detection for localisation of hard exudates. Sci Rep 2021; 11:16045. [PMID: 34362989 PMCID: PMC8346563 DOI: 10.1038/s41598-021-95519-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Accepted: 07/26/2021] [Indexed: 02/08/2023] Open
Abstract
Hard exudates are one of the main clinical findings in the retinal images of patients with diabetic retinopathy. Detecting them early significantly impacts the treatment of underlying diseases; therefore, there is a need for automated systems with high reliability. We propose a novel method for identifying and localising hard exudates in retinal images. To achieve fast image pre-scanning, a support vector machine (SVM) classifier was combined with a faster region-based convolutional neural network (faster R-CNN) object detector for the localisation of exudates. Rapid pre-scanning filtered out exudate-free samples using a feature vector extracted from the pre-trained ResNet-50 network. Subsequently, the remaining samples were processed using a faster R-CNN detector for detailed analysis. When evaluating all the exudates as individual objects, the SVM classifier reduced the false positive rate by 29.7% and marginally increased the false negative rate by 16.2%. When evaluating all the images, we recorded a 50% reduction in the false positive rate, without any decrease in the number of false negatives. The interim results suggested that pre-scanning the samples using the SVM prior to implementing the deep-network object detector could simultaneously improve and speed up the current hard exudates detection method, especially when there is a paucity of training data.
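The pipeline above is a classic two-stage cascade: a cheap classifier discards samples predicted exudate-free, and only the survivors reach the expensive detector. The sketch below shows only the control flow; the scoring and detection functions are placeholders, not the paper's SVM-on-ResNet-50 features or faster R-CNN, and the sample records are invented:

```python
def fast_svm_score(sample):
    # Placeholder for the SVM applied to ResNet-50 feature vectors.
    return sample["svm_score"]

def slow_detector(sample):
    # Placeholder for the faster R-CNN object detector.
    return sample["boxes"]

def cascade(samples, threshold=0.5):
    """Run the cheap pre-scan first; only samples scoring above the
    threshold are passed to the expensive detector."""
    detections, skipped = {}, 0
    for s in samples:
        if fast_svm_score(s) < threshold:   # pre-scan: likely exudate-free
            skipped += 1
            continue
        detections[s["id"]] = slow_detector(s)
    return detections, skipped

samples = [
    {"id": 1, "svm_score": 0.9, "boxes": [(10, 12, 30, 40)]},
    {"id": 2, "svm_score": 0.1, "boxes": []},          # filtered out cheaply
    {"id": 3, "svm_score": 0.7, "boxes": [(5, 5, 15, 20)]},
]
dets, skipped = cascade(samples)   # only samples 1 and 3 reach the detector
```

The trade-off the abstract quantifies (fewer false positives at the cost of some false negatives) is governed entirely by the pre-scan threshold.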
Affiliation(s)
- Veronika Kurilová: Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia
- Jozef Goga: Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia
- Miloš Oravec: Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia
- Jarmila Pavlovičová: Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia
- Slavomír Kajan: Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia
33
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307 PMCID: PMC8393354 DOI: 10.3390/diagnostics11081373] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/13/2022] Open
Abstract
The increased volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes demands more of a doctor's time and attention, which has encouraged the development of deep learning (DL) models as constructive and effective support. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification and quality of scientific data, the development of knowledge-construction methods and the improvement of DL models used in medical applications. Existing research papers focus on describing, highlighting, or classifying a single constituent element of DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on the performance of DL models. The novelty of our paper consists primarily in a unitary approach to the constituent elements of DL models, namely, the data, the tools used by DL architectures, and specifically constructed DL architecture combinations, highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of "key" characteristics specific to each constituent of DL models and the correct determination of their correlations may be the subject of future research, with the aim of increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania; Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
34
Cheng Y, Ma M, Li X, Zhou Y. Multi-label classification of fundus images based on graph convolutional network. BMC Med Inform Decis Mak 2021; 21:82. [PMID: 34330270 PMCID: PMC8323219 DOI: 10.1186/s12911-021-01424-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Accepted: 02/08/2021] [Indexed: 11/16/2022] Open
Abstract
Background Diabetic Retinopathy (DR) is the most common and serious microvascular complication in the diabetic population. Computer-aided diagnosis from fundus images has become a method of detecting retinal diseases, but the detection of multiple lesions is still a difficult point in current research. Methods This study proposed a multi-label classification method based on the graph convolutional network (GCN) to detect 8 types of fundus lesions in color fundus images. We collected 7459 fundus images (1887 left eyes, 1966 right eyes) from 2282 patients (1283 women, 999 men), and labeled 8 types of lesions: laser scars, drusen, cup disc ratio (C/D > 0.6), hemorrhages, retinal arteriosclerosis, microaneurysms, hard exudates and soft exudates. We constructed a specialized corpus of the related fundus lesions. A multi-label classification algorithm for fundus images was proposed based on the corpus, and the collected data were trained. Results The average overall F1 score (OF1) and the average per-class F1 score (CF1) of the model were 0.808 and 0.792, respectively. The area under the ROC curve (AUC) of our proposed model reached 0.986, 0.954, 0.946, 0.957, 0.952, 0.889, 0.937 and 0.926 for detecting laser scars, drusen, cup disc ratio, hemorrhages, retinal arteriosclerosis, microaneurysms, hard exudates and soft exudates, respectively. Conclusions Our results demonstrated that our proposed model can detect a variety of lesions in color fundus images, which lays a foundation for assisting doctors in diagnosis and makes rapid, efficient large-scale screening of fundus lesions possible.
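The two summary metrics quoted above differ only in where the averaging happens: OF1 (micro F1) pools true positives, false positives and false negatives over all classes before computing a single F1, while CF1 (macro F1) computes F1 per class and then averages. A small self-contained sketch, with invented toy label matrices of shape (samples, classes):

```python
import numpy as np

def multilabel_f1(y_true, y_pred):
    """Return (overall/micro F1, per-class/macro F1) for boolean
    multi-label arrays of shape (n_samples, n_classes)."""
    tp = (y_true & y_pred).sum(axis=0).astype(float)
    fp = (~y_true & y_pred).sum(axis=0).astype(float)
    fn = (y_true & ~y_pred).sum(axis=0).astype(float)
    # OF1 / micro: pool the counts over all classes, then compute F1 once
    TP, FP, FN = tp.sum(), fp.sum(), fn.sum()
    of1 = 2 * TP / (2 * TP + FP + FN)
    # CF1 / macro: compute F1 per class, then average the class scores
    per_class = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    return of1, per_class.mean()

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]], dtype=bool)
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]], dtype=bool)
of1, cf1 = multilabel_f1(y_true, y_pred)   # 0.75 and 5/9
```

Macro averaging weights rare lesions as heavily as common ones, which is why papers on imbalanced multi-label tasks usually report both.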
Affiliation(s)
- Yinlin Cheng: School of Biomedical Engineering, Sun Yat-sen University, No. 132 Waihuan East Road, Guangzhou 510006, China; Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, No. 74 Zhongshan 2nd Road, Guangzhou 510080, China
- Mengnan Ma: School of Biomedical Engineering, Sun Yat-sen University, No. 132 Waihuan East Road, Guangzhou 510006, China; Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, No. 74 Zhongshan 2nd Road, Guangzhou 510080, China
- Xingyu Li: Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, No. 74 Zhongshan 2nd Road, Guangzhou 510080, China; Zhongshan School of Medicine, Sun Yat-sen University, No. 74 Zhongshan 2nd Road, Guangzhou 510080, China
- Yi Zhou: Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, No. 74 Zhongshan 2nd Road, Guangzhou 510080, China; Key Laboratory of Tropical Disease Control (Sun Yat-sen University), Ministry of Education, No. 74 Zhongshan 2nd Road, Guangzhou 510080, China
35
Deep learning for diabetic retinopathy detection and classification based on fundus images: A review. Comput Biol Med 2021; 135:104599. [PMID: 34247130 DOI: 10.1016/j.compbiomed.2021.104599] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 06/12/2021] [Accepted: 06/18/2021] [Indexed: 02/02/2023]
Abstract
Diabetic Retinopathy is a retina disease caused by diabetes mellitus and it is the leading cause of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on fundus retina images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the diabetic retinopathy detection pipeline based on fundus images. We discuss several aspects of that pipeline, ranging from the datasets that are widely used by the research community, the preprocessing techniques employed and how these accelerate and improve the models' performance, to the development of such deep learning models for the diagnosis and grading of the disease as well as the localization of the disease's lesions. We also discuss certain models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions.
36
Chen M, Jin K, You K, Xu Y, Wang Y, Yip CC, Wu J, Ye J. Automatic detection of leakage point in central serous chorioretinopathy of fundus fluorescein angiography based on time sequence deep learning. Graefes Arch Clin Exp Ophthalmol 2021; 259:2401-2411. [PMID: 33846835 DOI: 10.1007/s00417-021-05151-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 02/16/2021] [Accepted: 03/02/2021] [Indexed: 01/23/2023] Open
Abstract
PURPOSE To detect the leakage points of central serous chorioretinopathy (CSC) automatically from dynamic images of fundus fluorescein angiography (FFA) using a deep learning algorithm (DLA). METHODS The study included 2104 FFA images from 291 FFA sequences of 291 eyes (137 right eyes and 154 left eyes) from 262 patients. The leakage points were segmented with an attention gated network (AGN). The optic disk (OD) and macula region were segmented simultaneously using a U-net. To reduce the number of false positives based on the time sequence, the leakage points were matched according to their positions relative to the OD and macula. RESULTS With the AGN alone, the detection results perfectly matched the ground truth in only 37 of 61 cases (60.7%) in the test set, with a lesion-level Dice coefficient of 0.811. Using an elimination procedure to remove false positives, the number of accurately detected cases increased to 57 (93.4%), and the lesion-level Dice coefficient improved to 0.949. CONCLUSIONS Using the DLA, the CSC leakage points in FFA can be identified reproducibly and accurately, with a good match to the ground truth. This novel finding may pave the way for potential application of artificial intelligence to guide laser therapy.
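The elimination step described above can be sketched as a time-sequence consistency check: candidate leakage points are expressed relative to the optic disk (OD) position in each frame, and only candidates that reappear near the same OD-relative position in most frames are kept. The coordinates, tolerance, and voting fraction below are illustrative assumptions, not the paper's values:

```python
import math

def stable_candidates(frames, tol=5.0, min_frac=0.6):
    """frames: list of {"od": (x, y), "points": [(x, y), ...]}.
    Keep candidates (seeded from the first frame, OD-relative) that
    recur within `tol` pixels in at least `min_frac` of the frames."""
    rel = [[(px - fx, py - fy) for (px, py) in f["points"]]
           for f in frames for (fx, fy) in [f["od"]]]
    kept = []
    for cand in rel[0]:                        # seed from the first frame
        hits = sum(
            any(math.dist(cand, p) <= tol for p in pts) for pts in rel
        )
        if hits / len(rel) >= min_frac:
            kept.append(cand)
    return kept

frames = [
    {"od": (100, 100), "points": [(140, 120), (30, 40)]},   # (30, 40) = spurious
    {"od": (102, 101), "points": [(143, 122)]},
    {"od": (99, 100),  "points": [(138, 119), (80, 10)]},
]
leaks = stable_candidates(frames)   # only the recurring OD-relative point survives
```

Anchoring to the OD makes the check robust to eye movement between FFA frames, which is why one-off AGN false positives fail the vote.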
Affiliation(s)
- Menglu Chen: Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Kai Jin: Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Kun You: Hangzhou Truth Medical Technology Ltd, Hangzhou 311215, China
- Yufeng Xu: Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Yao Wang: Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Chee-Chew Yip: Department of Ophthalmology, Khoo Teck Puat Hospital, Yishun Central, Singapore
- Jian Wu: College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Juan Ye: Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
37
Soulami KB, Kaabouch N, Saidi MN, Tamtaoui A. Breast cancer: One-stage automated detection, segmentation, and classification of digital mammograms using UNet model based-semantic segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102481] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
38
Morya AK, Gowdar J, Kaushal A, Makwana N, Biswas S, Raj P, Singh S, Hegde S, Vaishnav R, Shetty S, S P V, Shah V, Paul S, Muralidhar S, Velis G, Padua W, Waghule T, Nazm N, Jeganathan S, Reddy Mallidi A, Susan John D, Sen S, Choudhary S, Parashar N, Sharma B, Raghav P, Udawat R, Ram S, Salodia UP. Evaluating the Viability of a Smartphone-Based Annotation Tool for Faster and Accurate Image Labelling for Artificial Intelligence in Diabetic Retinopathy. Clin Ophthalmol 2021; 15:1023-1039. [PMID: 33727785 PMCID: PMC7953891 DOI: 10.2147/opth.s289425] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Accepted: 01/18/2021] [Indexed: 12/17/2022] Open
Abstract
INTRODUCTION Deep learning (DL) and artificial intelligence (AI) have become widespread due to advanced technologies and the availability of digital data. Supervised learning algorithms have shown human-level performance or better, and are better feature extractor-quantifiers than unsupervised learning algorithms. To obtain a huge dataset with good quality control, an annotation tool with a customizable feature set is needed. This paper evaluates the viability of an in-house annotation tool that works on a smartphone and can be used in a healthcare setting. METHODS We developed a smartphone-based grading system to help researchers grade multiple retinal fundi. The process consisted of designing the user interface (UI) flow, keeping expert feedback in view. Quantitative and qualitative analyses of the change in a grader's speed over time and of feature usage statistics were performed. The dataset comprised approximately 16,000 images with labels adjudicated by a minimum of 2 doctors. Results are reported for an AI model trained on the images graded using this tool, together with its validation on several public datasets. RESULTS We created a DL model and analysed its performance on a binary referrable DR classification task: whether a retinal image shows referrable DR or not. A total of 32 doctors used the tool for a minimum of 20 images each. Data analytics suggested significant portability and flexibility of the tool. Inter-grader variability favoured agreement on the annotated images; 550 images were used to assess agreement, with a mean agreement of 75.9%. CONCLUSION Our aim was to make the annotation of medical imaging easier and to minimize the time taken for annotations without quality degradation. The user feedback and feature-usage statistics confirm our hypotheses about the incorporation of brightness and contrast variations, green channels, and zooming add-ons in correlation with certain disease types. Simulating multiple review cycles and establishing quality control could boost the accuracy of AI models even further. Although our study aims at developing an annotation tool for diagnosing and classifying diabetic retinopathy fundus images, the same concept can be used for fundus images of other ocular diseases, as well as for other streams of medical science, such as radiology, where image-based diagnostic applications are utilised.
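The 75.9% figure above is a percent-agreement statistic: the fraction of images on which two graders assigned the same label. A minimal sketch of that calculation, with invented labels purely for illustration:

```python
def percent_agreement(labels_a, labels_b):
    """Percent of items on which two graders assigned the same label."""
    assert len(labels_a) == len(labels_b)
    same = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * same / len(labels_a)

grader1 = ["refer", "no-refer", "refer", "refer", "no-refer"]
grader2 = ["refer", "no-refer", "no-refer", "refer", "no-refer"]
agreement = percent_agreement(grader1, grader2)   # 4 of 5 images agree
```

Percent agreement is easy to read but does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are often reported alongside it.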
Affiliation(s)
- Arvind Kumar Morya: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan 342005, India
- Jaitra Gowdar: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka 560038, India
- Abhishek Kaushal: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka 560038, India
- Nachiket Makwana: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka 560038, India
- Saurav Biswas: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka 560038, India
- Puneeth Raj: Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka 560038, India
- Shabnam Singh: Sri Narayani Hospital & Research Centre, Vellore, Tamilnadu 632 055, India
- Sharat Hegde: Prasad Netralaya, Udupi, Karnataka 576101, India
- Raksha Vaishnav: Bhaktivedanta Hospital, Mira Bhayandar, Maharashtra 401107, India
- Sharan Shetty: Prime Retina Eye Care Centre, Hyderabad, Telangana 500029, India
- Vedang Shah: Shree Netra Eye Foundation, Kolkata, West Bengal 700020, India
- Winston Padua: St. John's Medical College & Hospital, Bengaluru 560034, India
- Tushar Waghule: Reti Vision Eye Clinic, KK Eye Institute, Pune, Maharashtra 411001, India
- Nazneen Nazm: ESI PGIMSR, ESI Medical College and Hospital, Kolkata, West Bengal 700104, India
- Sangeetha Jeganathan: Srinivas Institute of Medical Sciences and Research Centre, Mangalore, Karnataka 574146, India
- Dona Susan John: Diya Speciality Eye Care, Bengaluru, Karnataka 560061, India
- Sagnik Sen: Aravind Eye Hospital, Madurai, Tamil Nadu 625 020, India
- Sandeep Choudhary: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan 342005, India
- Nishant Parashar: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan 342005, India
- Bhavana Sharma: All India Institute of Medical Sciences, Bhopal, Madhya Pradesh 462020, India
- Pankaja Raghav: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan 342005, India
- Raghuveer Udawat: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan 342005, India
- Sampat Ram: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan 342005, India
- Umang P Salodia: Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan 342005, India
39
Yousaf W, Umar A, Shirazi SH, Khan Z, Razzak I, Zaka M. Patch-CNN: Deep learning for logo detection and brand recognition. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-190660] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Automatic logo detection and recognition is growing significantly due to the increasing requirements of intelligent document analysis and retrieval. The main problem in logo detection is intra-class variation, which is generated by variation in image quality and degradation. Misclassification also occurs when a tiny logo appears in a large image among other objects. To address this problem, Patch-CNN is proposed for logo recognition; it trains on small patches of logos to mitigate misclassification. Classification is accomplished by dividing the logo images into small patches, and a threshold is applied to drop no-logo areas according to the ground truth. The AlexNet and ResNet architectures are also used for logo detection. We propose a segmentation-free architecture for logo detection and recognition. In the literature, region proposal generation is used for logo detection, but such techniques suffer in the case of tiny logos. The proposed CNN is specifically designed to extract detailed features from logo patches. So far, the technique has attained an accuracy of 0.9901 with acceptable training and testing loss on the dataset used in this work.
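The patch-selection step described above can be sketched as follows: the image is cut into fixed-size tiles, and tiles whose overlap with the ground-truth logo mask falls below a threshold are dropped as no-logo area. The patch size, threshold, and toy mask below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def select_patches(mask, patch=4, thresh=0.25):
    """Return top-left (y, x) coordinates of patches whose fraction of
    ground-truth logo pixels is at least `thresh`."""
    kept = []
    h, w = mask.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = mask[y:y + patch, x:x + patch]
            if tile.mean() >= thresh:        # enough logo pixels to train on
                kept.append((y, x))
    return kept

mask = np.zeros((8, 8), dtype=float)
mask[0:4, 0:4] = 1.0                          # logo occupies the top-left quadrant
coords = select_patches(mask)                 # only the logo-bearing patch survives
```

Training on such patches keeps tiny logos at a usable scale for the network instead of letting them vanish inside a full-image input.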
Affiliation(s)
- Waqas Yousaf: Department of Information Technology, Hazara University Mansehra, Pakistan
- Arif Umar: Department of Information Technology, Hazara University Mansehra, Pakistan
- Syed Hamad Shirazi: Department of Information Technology, Hazara University Mansehra, Pakistan
- Zakir Khan: Department of Information Technology, Hazara University Mansehra, Pakistan
- Imran Razzak: Department of Information Technology, University of Technology Sydney, Sydney, Australia
- Mubina Zaka: Department of Information Technology, Hazara University Mansehra, Pakistan
40
Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268 DOI: 10.1016/j.jfo.2020.08.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 08/24/2020] [Indexed: 12/13/2022]
Abstract
Diabetic retinopathy (DR) is a disease facilitated by the rapid spread of diabetes worldwide. DR can blind diabetic individuals. Early detection of DR is essential to preserving vision and providing timely treatment. DR can be detected manually by an ophthalmologist, who examines the retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhage, exudates, and/or microaneurysms. This is a time-consuming, costly, and challenging task. An automated system can easily perform this function using artificial intelligence, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes the current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhage, and microaneurysms. In addition, the authors point out future directions for overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal: Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- G Sun: Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- S Mazhar: Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
|
41
|
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824 DOI: 10.1016/j.media.2021.101971] [Citation(s) in RCA: 99] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 01/12/2021] [Indexed: 02/06/2023]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Due to its powerful performance, deep learning is becoming increasingly popular in related applications, such as lesion segmentation, biomarker segmentation, disease diagnosis and image synthesis. Therefore, it is timely to summarize the recent developments in deep learning for fundus images with a review paper. In this review, we introduce 143 application papers with a carefully designed hierarchy. Moreover, 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are revealed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly-released datasets at https://github.com/nkicsl/Fundus_Review to adapt to the rapid development of this field.
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
| | - Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China.
| | - Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
| |
|
42
|
El-Hag NA, Sedik A, El-Shafai W, El-Hoseny HM, Khalaf AAM, El-Fishawy AS, Al-Nuaimy W, Abd El-Samie FE, El-Banby GM. Classification of retinal images based on convolutional neural network. Microsc Res Tech 2020; 84:394-414. [PMID: 33350559 DOI: 10.1002/jemt.23596] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 08/11/2020] [Accepted: 08/30/2020] [Indexed: 02/05/2023]
Abstract
Automatic detection of maculopathy is an important step toward high-accuracy early discovery of the disease, helping ophthalmologists treat patients. Manual detection of diabetic maculopathy requires considerable effort and time from ophthalmologists. Detection of exudates from retinal images is applied for maculopathy diagnosis. The first proposed framework in this paper for retinal image classification begins with fuzzy preprocessing to enhance the contrast between the objects and the background. After that, image segmentation is performed through binarization of the image to extract both blood vessels and the optic disc and then remove them from the original image. A gradient process is performed on the retinal image after this removal process for discrimination between normal and abnormal cases. A histogram of the gradients is estimated, and the cumulative histogram of gradients is obtained and compared with a threshold cumulative histogram at certain bins. To determine the threshold cumulative histogram, cumulative histograms of images with exudates and images without exudates are obtained and averaged for each type; the threshold is set as the average of the two. Certain histogram bins are selected and thresholded according to the estimated threshold cumulative histogram, and the results are used for retinal image classification. In the second framework in this paper, a Convolutional Neural Network (CNN) is utilized to classify normal and abnormal cases.
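The gradient-histogram pipeline in this abstract (take gradients, build a cumulative histogram, compare it against a threshold curve averaged from exudate and non-exudate images at selected bins) can be sketched in a few lines. This is a minimal toy reconstruction, not the authors' code: the bin count, magnitude range, and `check_bins` selection are assumptions, and the vessel/optic-disc removal step is omitted.

```python
import numpy as np

def gradient_cum_hist(img, bins=32):
    """Cumulative histogram of gradient magnitudes for a grayscale
    image with intensities assumed in [0, 1]."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, 1.0))
    hist = hist / hist.sum()      # normalize to a probability histogram
    return np.cumsum(hist)

def threshold_from_examples(normal_imgs, abnormal_imgs):
    """Threshold curve: average of the mean cumulative histograms of
    the two classes, as described in the abstract."""
    avg = lambda imgs: np.mean([gradient_cum_hist(i) for i in imgs], axis=0)
    return (avg(normal_imgs) + avg(abnormal_imgs)) / 2.0

def classify_abnormal(img, threshold_cum_hist, check_bins):
    """Flag an image as abnormal when its cumulative gradient histogram
    falls below the threshold curve at the selected bins (exudates add
    strong gradients, shifting mass to higher bins and lowering early
    cumulative values)."""
    ch = gradient_cum_hist(img)
    return bool(np.all(ch[check_bins] < threshold_cum_hist[check_bins]))
```

In use, a smooth (lesion-free) image keeps nearly all gradient mass in the lowest bin, while a high-contrast image spreads it upward, so the two fall on opposite sides of the averaged threshold curve.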
Affiliation(s)
- Noha A El-Hag
- Dept. of Electronics and Electrical Comm., Faculty of Engineering, Minia University, Minya, Egypt
| | - Ahmed Sedik
- Dept. of Robotics and Intelligent Machines, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr el-Sheikh, Egypt
| | - Walid El-Shafai
- Dept. of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
| | - Heba M El-Hoseny
- Dept. of Electronic and Electrical Communication Engineering, Al-Obour High Institute for Engineering and Technology, Egypt
| | - Ashraf A M Khalaf
- Dept. of Electronics and Electrical Comm., Faculty of Engineering, Minia University, Minya, Egypt
| | - Adel S El-Fishawy
- Dept. of Robotics and Intelligent Machines, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr el-Sheikh, Egypt
| | - Waleed Al-Nuaimy
- Dept. of Electrical and Electronic Engineering, University of Liverpool, Liverpool, UK
| | - Fathi E Abd El-Samie
- Dept. of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
| | - Ghada M El-Banby
- Dept. of Industrial Electronics and Control Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
| |
|
43
|
Sarhan MH, Nasseri MA, Zapp D, Maier M, Lohmann CP, Navab N, Eslami A. Machine Learning Techniques for Ophthalmic Data Processing: A Review. IEEE J Biomed Health Inform 2020; 24:3338-3350. [PMID: 32750971 DOI: 10.1109/jbhi.2020.3012134] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Machine learning and especially deep learning techniques are dominating medical image and data analysis. This article reviews machine learning approaches proposed for diagnosing ophthalmic diseases during the last four years. Three diseases are addressed in this survey, namely diabetic retinopathy, age-related macular degeneration, and glaucoma. The review covers over 60 publications and 25 public datasets and challenges related to the detection, grading, and lesion segmentation of the three considered diseases. Each section provides a summary of the public datasets and challenges related to each pathology and the current methods that have been applied to the problem. Furthermore, the recent machine learning approaches used for retinal vessel segmentation, and methods of retinal layer and fluid segmentation, are reviewed. Two main imaging modalities are considered in this survey, namely color fundus imaging and optical coherence tomography. Machine learning approaches that use eye measurements and visual field data for glaucoma detection are also included in the survey. Finally, the authors provide their views and expectations regarding the future of these techniques, and their limitations, in clinical practice.
|
44
|
Romero-Oraá R, García M, Oraá-Pérez J, López-Gálvez MI, Hornero R. Effective Fundus Image Decomposition for the Detection of Red Lesions and Hard Exudates to Aid in the Diagnosis of Diabetic Retinopathy. SENSORS (BASEL, SWITZERLAND) 2020; 20:E6549. [PMID: 33207825 PMCID: PMC7698181 DOI: 10.3390/s20226549] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 11/07/2020] [Accepted: 11/13/2020] [Indexed: 06/11/2023]
Abstract
Diabetic retinopathy (DR) is characterized by the presence of red lesions (RLs), such as microaneurysms and hemorrhages, and bright lesions, such as exudates (EXs). Early DR diagnosis is paramount to prevent serious sight damage. Computer-assisted diagnostic systems are based on the detection of those lesions through the analysis of fundus images. In this paper, a novel method is proposed for the automatic detection of RLs and EXs. As the main contribution, the fundus image was decomposed into various layers, including the lesion candidates, the reflective features of the retina, and the choroidal vasculature visible in tigroid retinas. We used a proprietary database containing 564 images, randomly divided into a training set and a test set, and the public database DiaretDB1 to verify the robustness of the algorithm. Lesion detection results were computed per pixel and per image. Using the proprietary database, 88.34% per-image accuracy (ACCi), 91.07% per-pixel positive predictive value (PPVp), and 85.25% per-pixel sensitivity (SEp) were reached for the detection of RLs. Using the public database, 90.16% ACCi, 96.26% PPVp, and 84.79% SEp were obtained. As for the detection of EXs, 95.41% ACCi, 96.01% PPVp, and 89.42% SEp were reached with the proprietary database. Using the public database, 91.80% ACCi, 98.59% PPVp, and 91.65% SEp were obtained. The proposed method could be useful to aid in the diagnosis of DR, reducing the workload of specialists and improving the attention to diabetic patients.
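The per-pixel figures quoted above reduce to standard confusion-matrix ratios over binary lesion masks. As a minimal sketch (not the authors' evaluation code; the `pixel_metrics` helper name is ours):

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Per-pixel positive predictive value (PPVp) and sensitivity (SEp)
    for binary lesion masks. Assumes at least one predicted and one
    true positive pixel exist."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # lesion pixels correctly detected
    fp = np.sum(pred & ~truth)   # background flagged as lesion
    fn = np.sum(~pred & truth)   # lesion pixels missed
    ppv = tp / (tp + fp)
    se = tp / (tp + fn)
    return ppv, se
```

Per-image accuracy (ACCi) is the analogous ratio computed over image-level normal/abnormal labels rather than pixels.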
Affiliation(s)
- Roberto Romero-Oraá
- Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; (M.G.); (J.O.-P.); (M.I.L.-G.); (R.H.)
- Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
| | - María García
- Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; (M.G.); (J.O.-P.); (M.I.L.-G.); (R.H.)
- Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
| | - Javier Oraá-Pérez
- Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; (M.G.); (J.O.-P.); (M.I.L.-G.); (R.H.)
| | - María I. López-Gálvez
- Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; (M.G.); (J.O.-P.); (M.I.L.-G.); (R.H.)
- Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
- Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, 47003 Valladolid, Spain
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Universidad de Valladolid, 47011 Valladolid, Spain
| | - Roberto Hornero
- Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain; (M.G.); (J.O.-P.); (M.I.L.-G.); (R.H.)
- Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
- Instituto de Investigación en Matemáticas (IMUVA), Universidad de Valladolid, 47011 Valladolid, Spain
| |
|
45
|
Automatic detection of non-perfusion areas in diabetic macular edema from fundus fluorescein angiography for decision making using deep learning. Sci Rep 2020; 10:15138. [PMID: 32934283 PMCID: PMC7492239 DOI: 10.1038/s41598-020-71622-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Accepted: 07/30/2020] [Indexed: 02/05/2023] Open
Abstract
Vision loss caused by diabetic macular edema (DME) can be prevented by early detection and laser photocoagulation. As there is no comprehensive detection technique to recognize non-perfusion areas (NPA), we proposed an automatic detection method of NPA on fundus fluorescein angiography (FFA) in DME. The study included 3,014 FFA images of 221 patients with DME. We used 3 convolutional neural networks (CNNs), including DenseNet, ResNet50, and VGG16, to identify non-perfusion regions (NP), microaneurysms, and leakages in FFA images. The NPA was segmented using attention U-net. To validate its performance, we applied our detection algorithm on 249 FFA images in which the NPA areas were manually delineated by 3 ophthalmologists. For DR lesion classification, the area under the curve is 0.8855 for NP regions, 0.9782 for microaneurysms, and 0.9765 for the leakage classifier. The average precision of the NP region overlap ratio is 0.643. NP regions of DME in FFA images are identified based on a new automated deep learning algorithm. This study is an in-depth study from computer-aided diagnosis to treatment, and will be the theoretical basis for the application of intelligent guided laser treatment.
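The reported NP-region overlap ratio can be read as an intersection-over-union style agreement between the segmented mask and the ophthalmologists' delineation. Assuming that interpretation (the paper may define the ratio differently), a minimal sketch:

```python
import numpy as np

def overlap_ratio(pred, truth):
    """Overlap ratio between a predicted non-perfusion mask and a
    manually delineated one, taken here as intersection over union."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.sum(pred | truth)
    if union == 0:            # both masks empty: perfect agreement
        return 1.0
    return np.sum(pred & truth) / union
```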
|
46
|
Tseng VS, Chen CL, Liang CM, Tai MC, Liu JT, Wu PY, Deng MS, Lee YW, Huang TY, Chen YH. Leveraging Multimodal Deep Learning Architecture with Retina Lesion Information to Detect Diabetic Retinopathy. Transl Vis Sci Technol 2020; 9:41. [PMID: 32855845 PMCID: PMC7424907 DOI: 10.1167/tvst.9.2.41] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Accepted: 05/28/2020] [Indexed: 01/27/2023] Open
Abstract
Purpose To improve disease severity classification from fundus images using a hybrid architecture with symptom awareness for diabetic retinopathy (DR). Methods We used 26,699 fundus images of 17,834 diabetic patients from three Taiwanese hospitals collected in 2007 to 2018 for DR severity classification. Thirty-seven ophthalmologists verified the images using lesion annotation and severity classification as the ground truth. Two deep learning fusion architectures were proposed: late fusion, which combines lesion and severity classification models in parallel using a postprocessing procedure, and two-stage early fusion, which combines lesion detection and classification models sequentially and mimics the decision-making process of ophthalmologists. Messidor-2 was used with 1748 images to evaluate and benchmark the performance of the architecture. The primary evaluation metrics were classification accuracy, weighted κ statistic, and area under the receiver operating characteristic curve (AUC). Results For hospital data, a hybrid architecture achieved a good detection rate, with accuracy and weighted κ of 84.29% and 84.01%, respectively, for five-class DR grading. It also classified the images of early stage DR more accurately than conventional algorithms. The Messidor-2 model achieved an AUC of 97.09% in referral DR detection compared to AUC of 85% to 99% for state-of-the-art algorithms that learned from a larger database. Conclusions Our hybrid architectures strengthened and extracted characteristics from DR images, while improving the performance of DR grading, thereby increasing the robustness and confidence of the architectures for general use. Translational Relevance The proposed fusion architectures can enable faster and more accurate diagnosis of various DR pathologies than that obtained in current manual clinical practice.
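The late-fusion idea above, combining a lesion model and a severity model through post-processing, can be illustrated with a deliberately simplified sketch. The `late_fusion` helper and its weighting rule are hypothetical, not the paper's procedure: here the strongest lesion score simply shifts probability mass away from the "no DR" grade.

```python
import numpy as np

def late_fusion(severity_probs, lesion_probs, w=0.5):
    """Hypothetical post-processing fusion: lesion evidence down-weights
    the 'no DR' class (index 0) of a 5-class severity softmax, then the
    distribution is renormalized."""
    fused = np.asarray(severity_probs, dtype=float).copy()
    evidence = float(np.max(lesion_probs))   # strongest lesion detection
    fused[0] *= (1.0 - w * evidence)         # suppress 'no DR'
    return fused / fused.sum()
```

The two-stage early-fusion variant would instead feed the lesion detector's outputs into the severity classifier as extra input features, mimicking how an ophthalmologist grades after first locating lesions.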
Affiliation(s)
- Vincent S Tseng
- Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan; Institute of Data Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan
| | - Ching-Long Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Chang-Min Liang
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Ming-Cheng Tai
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Jung-Tzu Liu
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Po-Yi Wu
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Ming-Shan Deng
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Ya-Wen Lee
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Teng-Yi Huang
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Yi-Hao Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| |
|
47
|
Ellahham S. Artificial Intelligence: The Future for Diabetes Care. Am J Med 2020; 133:895-900. [PMID: 32325045 DOI: 10.1016/j.amjmed.2020.03.033] [Citation(s) in RCA: 112] [Impact Index Per Article: 22.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Revised: 03/16/2020] [Accepted: 03/16/2020] [Indexed: 12/15/2022]
Abstract
Artificial intelligence (AI) is a fast-growing field and its applications to diabetes, a global pandemic, can reform the approach to diagnosis and management of this chronic condition. Principles of machine learning have been used to build algorithms to support predictive models for the risk of developing diabetes or its consequent complications. Digital therapeutics have proven to be an established intervention for lifestyle therapy in the management of diabetes. Patients are increasingly being empowered for self-management of diabetes, and both patients and health care professionals are benefitting from clinical decision support. AI allows a continuous and burden-free remote monitoring of the patient's symptoms and biomarkers. Further, social media and online communities enhance patient engagement in diabetes care. Technical advances have helped to optimize resource use in diabetes. Together, these intelligent technical reforms have produced better glycemic control with reductions in fasting and postprandial glucose levels, glucose excursions, and glycosylated hemoglobin. AI will introduce a paradigm shift in diabetes care from conventional management strategies to building targeted data-driven precision care.
Affiliation(s)
- Samer Ellahham
- Cleveland Clinic, Lyndhurst, Ohio; Cleveland Clinic Abu Dhabi, Abu Dhabi, United Arab Emirates.
| |
|
48
|
Chang J, Lee J, Ha A, Han YS, Bak E, Choi S, Yun JM, Kang U, Shin IH, Shin JY, Ko T, Bae YS, Oh BL, Park KH, Park SM. Explaining the Rationale of Deep Learning Glaucoma Decisions with Adversarial Examples. Ophthalmology 2020; 128:78-88. [PMID: 32598951 DOI: 10.1016/j.ophtha.2020.06.036] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Revised: 06/14/2020] [Accepted: 06/15/2020] [Indexed: 12/22/2022] Open
Abstract
PURPOSE To illustrate what is inside the so-called black box of deep learning models (DLMs), so that clinicians can have greater confidence in the conclusions of artificial intelligence, by evaluating the ability of adversarial explanation to explain the rationale of DLM decisions for glaucoma and glaucoma-related findings. Adversarial explanation generates adversarial examples (AEs), or images that have been changed to gain or lose pathologic characteristic-specific traits, to explain the DLM's rationale. DESIGN Evaluation of explanation methods for DLMs. PARTICIPANTS Health screening participants (n = 1653) at the Seoul National University Hospital Health Promotion Center, Seoul, Republic of Korea. METHODS We trained DLMs for referable glaucoma (RG), increased cup-to-disc ratio (ICDR), disc rim narrowing (DRN), and retinal nerve fiber layer defect (RNFLD) using 6430 retinal fundus images. Surveys consisting of explanations using AEs and gradient-weighted class activation mapping (GradCAM), a conventional heatmap-based explanation method, were generated for 400 pathologic and healthy patient eyes. For each method, board-trained glaucoma specialists rated location explainability, the ability to pinpoint decision-relevant areas in the image, and rationale explainability, the ability to inform the user of the model's reasoning for the decision based on pathologic features. Scores were compared by paired Wilcoxon signed-rank test. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve (AUC), sensitivities, and specificities of DLMs; visualization of clinical pathologic changes of AEs; and survey scores for locational and rationale explainability. RESULTS The AUCs were 0.90, 0.99, 0.95, and 0.79 and sensitivities were 0.79, 1.00, 0.82, and 0.55 at 0.90 specificity for the RG, ICDR, DRN, and RNFLD DLMs, respectively. Generated AEs showed valid clinical feature changes, and survey results for location explainability were 3.94 ± 1.33 and 2.55 ± 1.24 using AEs and GradCAMs, respectively, of a possible maximum score of 5 points. The scores for rationale explainability were 3.97 ± 1.31 and 2.10 ± 1.25 for AEs and GradCAM, respectively. Adversarial examples provided significantly better explainability than GradCAM. CONCLUSIONS Adversarial explanation increased explainability over GradCAM, a conventional heatmap-based explanation method. Adversarial explanation may help medical professionals understand more clearly the rationale of DLMs when using them for clinical decisions.
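Adversarial examples of the kind described above are produced by perturbing an input in the direction that changes the model's decision. As a toy stand-in for the paper's method (a one-step fast-gradient-sign perturbation of a plain logistic classifier in NumPy, not the authors' generative adversarial-explanation pipeline):

```python
import numpy as np

def fgsm(x, w, b, y, eps=0.1):
    """One-step FGSM adversarial example for a logistic classifier
    p = sigmoid(w.x + b): move x along the sign of the cross-entropy
    gradient to reduce the model's confidence in the true label y."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - y) * w            # d(cross-entropy)/dx for logistic model
    return x + eps * np.sign(grad)
```

Inspecting which input components the perturbation touches (here, which features of `x` are pushed) is the intuition behind using AEs as explanations: the changed traits reveal what the model relies on.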
Affiliation(s)
- Jooyoung Chang
- Department of Biomedical Sciences, Seoul National University Graduate School, Seoul, Republic of Korea
| | - Jinho Lee
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Hallym University Chuncheon Sacred Heart Hospital, Chuncheon, Republic of Korea
| | - Ahnul Ha
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Young Soo Han
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Eunoo Bak
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Seulggie Choi
- Department of Biomedical Sciences, Seoul National University Graduate School, Seoul, Republic of Korea
| | - Jae Moon Yun
- Department of Family Medicine, Seoul National University Hospital, Seoul, Republic of Korea
| | - Uk Kang
- InTheSmart Co., Ltd., Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
| | | | - Joo Young Shin
- Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
| | - Taehoon Ko
- Office of Hospital Information, Seoul National University Hospital, Seoul, Republic of Korea
| | - Ye Seul Bae
- Department of Family Medicine, Seoul National University Hospital, Seoul, Republic of Korea; Office of Hospital Information, Seoul National University Hospital, Seoul, Republic of Korea
| | - Baek-Lok Oh
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Ki Ho Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Ophthalmology, Seoul National University Hospital, Seoul, Republic of Korea.
| | - Sang Min Park
- Department of Biomedical Sciences, Seoul National University Graduate School, Seoul, Republic of Korea; Department of Family Medicine, Seoul National University Hospital, Seoul, Republic of Korea.
| |
|
49
|
Pao SI, Lin HZ, Chien KH, Tai MC, Chen JT, Lin GM. Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network. J Ophthalmol 2020; 2020:9139713. [PMID: 32655944 PMCID: PMC7322591 DOI: 10.1155/2020/9139713] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 05/18/2020] [Indexed: 01/14/2023] Open
Abstract
Deep learning on fundus photographs has emerged as a practical and cost-effective technique for automatic screening and diagnosis of severe diabetic retinopathy (DR). The entropy image of the luminance of a fundus photograph has been demonstrated to increase detection performance for referable DR using a convolutional neural network- (CNN-) based system. In this paper, the entropy image computed from the green component of the fundus photograph is proposed. In addition, image enhancement by unsharp masking (UM) is utilized for preprocessing before calculating the entropy images. A bichannel CNN incorporating the features of both the entropy image of the gray level and that of the green component preprocessed by UM is also proposed to improve the detection performance for referable DR by deep learning.
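The green-channel entropy image with unsharp-mask preprocessing can be approximated in plain NumPy. This is an illustrative sketch, not the paper's implementation: the window size, quantization levels, sharpening strength, and box-blur kernel are all assumptions.

```python
import numpy as np

def unsharp_mask(img, k=1.0):
    """Simple unsharp masking: sharpen by adding back the difference
    between the image and a 3x3 box-blurred copy."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + k * (img - blur), 0.0, 1.0)

def local_entropy(img, win=3, levels=8):
    """Shannon entropy of quantized gray levels in each win x win
    neighbourhood (valid positions only)."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    h, w = q.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = q[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=levels) / patch.size
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out

def entropy_image(rgb):
    """Entropy image of the unsharp-masked green channel, the input
    the paper proposes feeding to one branch of the bichannel CNN."""
    green = rgb[..., 1]
    return local_entropy(unsharp_mask(green))
```

Flat regions yield zero entropy while textured regions (vessels, lesions) yield high values, which is why the entropy image emphasizes the structures relevant to DR grading.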
Affiliation(s)
- Shu-I Pao
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
| | - Hong-Zin Lin
- Department of Ophthalmology, Buddhist Tzu Chi General Hospital, Hualien 970, Taiwan
- Institute of Medical Sciences, Tzu Chi University, Hualien 970, Taiwan
| | - Ke-Hung Chien
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
- Department of Medicine, Hualien Armed Forces General Hospital, Hualien 971, Taiwan
| | - Ming-Cheng Tai
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
- Department of Medicine, Hualien Armed Forces General Hospital, Hualien 971, Taiwan
| | - Jiann-Torng Chen
- Department of Ophthalmology, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
| | - Gen-Min Lin
- Department of Medicine, Hualien Armed Forces General Hospital, Hualien 971, Taiwan
- Department of Medicine, Tri-Service General Hospital and National Defense Medical Center, Taipei 114, Taiwan
- Department of Preventive Medicine, Northwestern University, Chicago, IL 60611, USA
| |
|
50
|
Monshi MMA, Poon J, Chung V. Deep learning in generating radiology reports: A survey. Artif Intell Med 2020; 106:101878. [PMID: 32425358 PMCID: PMC7227610 DOI: 10.1016/j.artmed.2020.101878] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 04/30/2020] [Accepted: 05/10/2020] [Indexed: 12/27/2022]
Abstract
Substantial progress has been made towards implementing automated radiology reporting models based on deep learning (DL). This is due to the introduction of large medical text/image datasets. Generating coherent radiology paragraphs that do more than traditional medical image annotation, or single sentence-based description, has been the subject of recent academic attention. This presents a more practical and challenging application and moves towards bridging visual medical features and radiologists' text. So far, the most common approach has been to utilize publicly available datasets and develop DL models that integrate convolutional neural networks (CNN) for image analysis alongside recurrent neural networks (RNN) for natural language processing (NLP) and natural language generation (NLG). This is an area of research that we anticipate will grow in the near future. We focus our investigation on the following critical challenges: understanding radiology text/image structures and datasets, applying DL algorithms (mainly CNN and RNN), generating radiology text, and improving existing DL-based models and evaluation metrics. Lastly, we include a critical discussion and future research recommendations. This survey will be useful for researchers interested in DL, particularly those interested in applying DL to radiology reporting.
Affiliation(s)
- Maram Mahmoud A Monshi
- School of Computer Science, University of Sydney, Sydney, Australia; Department of Information Technology, Taif University, Taif, Saudi Arabia.
| | - Josiah Poon
- School of Computer Science, University of Sydney, Sydney, Australia
| | - Vera Chung
- School of Computer Science, University of Sydney, Sydney, Australia
| |
|