1
Wei X, Liu Y, Zhang F, Geng L, Shan C, Cao X, Xiao Z. MSTNet: Multi-scale spatial-aware transformer with multi-instance learning for diabetic retinopathy classification. Med Image Anal 2025; 102:103511. [PMID: 40020421] [DOI: 10.1016/j.media.2025.103511]
Abstract
Diabetic retinopathy (DR), the leading cause of vision loss among diabetic adults worldwide, underscores the importance of early detection and timely treatment using fundus images to prevent vision loss. However, existing deep learning methods struggle to capture the correlation and contextual information of subtle lesion features at the current scale of available datasets. To this end, we propose a novel Multi-scale Spatial-aware Transformer Network (MSTNet) for DR classification. MSTNet encodes information from image patches at varying scales as input features, constructing a dual-pathway backbone network composed of two Transformer encoders of different sizes to extract both local details and global context from images. To fully leverage structural prior knowledge, we introduce a Spatial-aware Module (SAM) to capture spatial local information within the images. Furthermore, considering the differences between medical and natural images, specifically that regions of interest in medical images often lack distinct subjectivity and continuity, we employ a Multiple Instance Learning (MIL) strategy to aggregate features from diverse regions, thereby strengthening the association with subtle lesion areas. Finally, a cross-fusion classifier integrates the dual-pathway features to produce the classification result. We evaluate MSTNet on four public DR datasets: APTOS2019, RFMiD2020, Messidor, and IDRiD. Extensive experiments demonstrate that MSTNet exhibits superior diagnostic and grading accuracy, achieving improvements of up to 2.0% in ACC and 1.2% in F1 score, highlighting its effectiveness in accurately assessing fundus images.
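The MIL aggregation strategy mentioned in the abstract can be illustrated with a minimal, framework-free sketch of attention-based MIL pooling, a common formulation of that idea; the softmax weighting scheme, feature sizes, and toy data below are illustrative assumptions, not necessarily MSTNet's exact design:

```python
import math

def mil_attention_pool(instance_feats, attn_scores):
    """Aggregate per-region (instance) features into one bag-level
    feature vector via softmax attention weights (attention-based MIL)."""
    # Numerically stable softmax over the instance attention scores.
    m = max(attn_scores)
    exps = [math.exp(s - m) for s in attn_scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of the instance feature vectors.
    dim = len(instance_feats[0])
    bag = [sum(w * f[d] for w, f in zip(weights, instance_feats))
           for d in range(dim)]
    return bag, weights

# Three image regions with 2-D features; the third region scores highest,
# so it dominates the aggregated bag-level feature.
feats = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
bag, w = mil_attention_pool(feats, [0.1, 0.1, 3.0])
```

Because the weights sum to one, regions with low attention scores contribute little, which is how MIL can emphasize subtle lesion areas without pixel-level labels.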
Affiliation(s)
- Xin Wei: School of Control Science and Engineering, Tiangong University, Tianjin 300387, China
- Yanbei Liu: School of Life Sciences, Tiangong University, Tianjin 300387, China
- Fang Zhang: School of Life Sciences, Tiangong University, Tianjin 300387, China
- Lei Geng: School of Life Sciences, Tiangong University, Tianjin 300387, China
- Chunyan Shan: Chu Hsien-I Memorial Hospital, Tianjin Medical University, Tianjin 300134, China; NHC Key Laboratory of Hormones and Development, Tianjin, China
- Xiangyu Cao: Department of Neurology, Chinese PLA General Hospital, Beijing, China
- Zhitao Xiao: School of Life Sciences, Tiangong University, Tianjin 300387, China
2
Rif'atunnailah MI, Mei-Chan C, Wan Ling L, Tajunisah I, Mohd Iman SS, Thandar Soe Sumaiyah J, Nurul Afieda R. The outcome of diabetic retinopathy health education program in patients with type 2 diabetes mellitus: a quasi-experimental study. Health Educ Res 2025; 40:cyae045. [PMID: 39820426] [DOI: 10.1093/her/cyae045]
Abstract
Diabetic retinopathy (DR) may progress to sight-threatening DR and vision loss if early intervention is not carried out. This study aimed to assess the effectiveness of a DR health education program (DRHEP) for patients with type 2 diabetes mellitus (T2DM). A quasi-experimental research design was applied: the intervention group underwent a web-based DRHEP, while the control group received usual follow-up at an ophthalmology clinic for 1 year. Data were analysed using descriptive statistics, repeated-measures ANOVA, and a general linear model to evaluate the mean difference between groups. A total of 180 patients with T2DM were enrolled, divided equally between the control and intervention groups, with a 28% dropout rate. There was a significant mean difference in knowledge score [F(1,178) = 116.57, P = 0.001], diabetes self-care [F(1,178) = 116.57, P = 0.001], and vision-related quality of life [F(1,178) = 12.70, P = 0.001] between the control and intervention groups, with the intervention group scoring highest in all three categories. As shown in this study, the educational intervention positively affected adherence to self-care and vision-related quality of life in patients with T2DM. The DRHEP should be considered an added benefit in T2DM management, starting with enrollment in comprehensive care.
Affiliation(s)
- Mat Isa Rif'atunnailah: Department of Nursing Science, Faculty of Medicine, Universiti Malaya, Kuala Lumpur 50603, Malaysia; Department of Professional Nursing Studies, Kulliyyah of Nursing, International Islamic University Malaysia, Bandar Indera Mahkota Campus, Kuantan, Pahang 25200, Malaysia
- Chong Mei-Chan: Department of Nursing Science, Faculty of Medicine, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Lee Wan Ling: Department of Nursing Science, Faculty of Medicine, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Iqbal Tajunisah: Department of Ophthalmology, Faculty of Medicine, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Saiful Suhardi Mohd Iman: Department of Emergency Medicine, Kulliyyah of Medicine, International Islamic University Malaysia, Bandar Indera Mahkota Campus, Kuantan, Pahang 25200, Malaysia
- Jamaludin Thandar Soe Sumaiyah: Department of Medical Surgical Nursing, Kulliyyah of Nursing, International Islamic University Malaysia, Bandar Indera Mahkota Campus, Kuantan, Pahang 25200, Malaysia
- Roslim Nurul Afieda: Faculty of Health Sciences, Universiti Sultan Zainal Abidin (UniSZA), Kampus Gong Badak, Kuala Nerus, Terengganu 21300, Malaysia
3
Li Y, Yu B, Si M, Yang M, Cui W, Zhou Y, Fu S, Wang H, Liu X, Zhang H. Enhancing diabetic retinopathy diagnosis: automatic segmentation of hyperreflective foci in OCT via deep learning. Int Ophthalmol 2025; 45:79. [PMID: 39966317] [PMCID: PMC11909028] [DOI: 10.1007/s10792-025-03439-z]
Abstract
PURPOSE: Hyperreflective foci (HRF) are small, punctate lesions, 20-50 μm in size, with high reflective intensity in optical coherence tomography (OCT) images of patients with diabetic retinopathy (DR). This study aims to develop a model that precisely identifies and segments HRF in OCT images of DR patients. Accurate segmentation of HRF is essential for assisting ophthalmologists in early diagnosis and in assessing the effectiveness of treatment and prognosis. METHODS: We introduce an HRF segmentation algorithm based on the KiU-Net architecture. The model comprises two branches: a Kite-Net branch that uses up-sampling coding to capture detailed information, and a three-layer U-Net branch that extracts high-level semantic information. To enhance the capacity of the network, we designed a cross-attention block (CAB) that combines the information extracted from both branches, effectively integrating detail and semantic features. RESULTS: Experimental results demonstrate that our model significantly reduces the number of parameters while improving performance. The sensitivity (SE) and Dice similarity coefficient (DSC) of our model are improved to 72.90% and 66.84%, respectively. Considering the SE and precision (P) of the segmentation, as well as the recall ratio and precision of HRF detection, our model outperforms most advanced medical image segmentation algorithms. CONCLUSION: The proposed HRF segmentation algorithm effectively identifies and segments HRF in OCT images of DR patients, outperforming existing methods. This advancement can significantly alleviate the burden on ophthalmologists by aiding in early diagnosis and treatment evaluation, ultimately improving patient outcomes.
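The sensitivity and Dice similarity coefficient reported above are standard pixel-wise segmentation metrics; a minimal sketch of how they are computed from flat binary masks (the masks below are toy data, not the paper's):

```python
def confusion_counts(pred, truth):
    """Pixel-wise true positives, false positives, and false negatives
    for two flat binary masks of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, fp, fn

def dice(pred, truth):
    """Dice similarity coefficient: 2*TP / (2*TP + FP + FN)."""
    tp, fp, fn = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def sensitivity(pred, truth):
    """Sensitivity (recall): TP / (TP + FN)."""
    tp, _, fn = confusion_counts(pred, truth)
    return tp / (tp + fn)

pred  = [1, 1, 0, 0, 1]   # predicted HRF mask (flattened)
truth = [1, 0, 0, 1, 1]   # expert-annotated mask (flattened)
```

Here TP = 2, FP = 1, FN = 1, so both Dice and sensitivity come out to 2/3; in practice the same counts are accumulated over every pixel of every test image.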
Affiliation(s)
- Yixiao Li: Department of Ophthalmology, Shandong Provincial Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, Shandong Province, China
- Boyu Yu: Chang Guang Satellite Technology Co. Ltd, Changchun 130102, Jilin Province, China
- Mingwei Si: Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan 250012, Shandong Province, China
- Mengyao Yang: Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan 250012, Shandong Province, China
- Wenxuan Cui: Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan 250012, Shandong Province, China
- Yi Zhou: Department of Ophthalmology, Xuzhou Medical University, Affiliated Hospital of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China
- Shujun Fu: School of Mathematics, Shandong University, Jinan 250100, Shandong Province, China
- Hong Wang: Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan 250012, Shandong Province, China
- Xuya Liu: School of Computer Science and Technology, Shandong Jianzhu University, Jinan 250101, Shandong Province, China
- Han Zhang: Department of Ophthalmology, Shandong Provincial Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, Shandong Province, China
4
Matten P, Scherer J, Schlegl T, Nienhaus J, Stino H, Niederleithner M, Schmidt-Erfurth UM, Leitgeb RA, Drexler W, Pollreisz A, Schmoll T. Multiple instance learning based classification of diabetic retinopathy in weakly-labeled widefield OCTA en face images. Sci Rep 2023; 13:8713. [PMID: 37248309] [DOI: 10.1038/s41598-023-35713-4]
Abstract
Diabetic retinopathy (DR), a pathologic change of the human retinal vasculature, is the leading cause of blindness in working-age adults with diabetes mellitus. Optical coherence tomography angiography (OCTA), a functional extension of optical coherence tomography, has shown potential as a tool for early diagnosis of DR through its ability to visualize the retinal vasculature in all spatial dimensions. Previously introduced deep learning-based classifiers were able to support the detection of DR in OCTA images, but they require expert labeling at the pixel level, a labor-intensive and expensive process. We present a multiple instance learning-based network, MIL-ResNet14, that is capable of detecting biomarkers in an OCTA dataset with high accuracy, without the need for any annotation other than whether a scan is from a diabetic patient or not. The dataset used for this study was acquired with a diagnostic ultra-widefield swept-source OCT device with an A-scan rate in the MHz range. We show that our proposed method outperforms the previous state-of-the-art networks for this classification task, ResNet14 and VGG16. In addition, our network pays special attention to clinically relevant biomarkers and is robust against adversarial attacks. We therefore believe that it could serve as a powerful diagnostic decision support tool for clinical ophthalmic screening.
Affiliation(s)
- Philipp Matten: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Julius Scherer: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Thomas Schlegl: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Jonas Nienhaus: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Heiko Stino: Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Michael Niederleithner: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Ursula M Schmidt-Erfurth: Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Rainer A Leitgeb: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Wolfgang Drexler: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria
- Andreas Pollreisz: Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Tilman Schmoll: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090 Vienna, Austria; Carl Zeiss Meditec Inc, 5300 Central Pkwy, Dublin, CA 94568, USA
5
Abdelmotaal H, Hazarbasanov R, Taneri S, Al-Timemy A, Lavric A, Takahashi H, Yousefi S. Detecting dry eye from ocular surface videos based on deep learning. Ocul Surf 2023; 28:90-98. [PMID: 36708879] [DOI: 10.1016/j.jtos.2023.01.005]
Abstract
OBJECTIVE: To assess the performance of convolutional neural networks (CNNs) for automated diagnosis of dry eye (DE) in patients undergoing video keratoscopy, based on single ocular surface video frames. DESIGN: This retrospective cohort study included 244 ocular surface videos from 244 eyes of 244 subjects based on corneal topography. A total of 116 eyes were normal, while 128 eyes had DE based on clinical evaluations. METHODS: We developed a deep transfer learning model to identify DE directly from ocular surface videos. We evaluated the performance of the CNN model using objective accuracy metrics and assessed the clinical relevance of the findings by evaluating class activation maps. MAIN OUTCOME MEASURES: Area under the receiver operating characteristic curve (AUC), accuracy, specificity, and sensitivity. RESULTS: The AUC of the model for discriminating normal eyes from eyes with DE was 0.98. Network activation maps suggested that the lower paracentral cornea was the most important region for detection of DE by the CNN model. CONCLUSIONS: Deep transfer learning achieved high diagnostic accuracy in detecting DE from non-invasive ocular surface videos, at levels that may prove useful in clinical practice.
Affiliation(s)
- Rossen Hazarbasanov: Hospital de Olhos-CRO, Guarulhos, SP, Brazil; Department of Ophthalmology and Visual Sciences, Paulista Medical School, Federal University of São Paulo, São Paulo, Brazil
- Suphi Taneri: Ruhr University, Bochum, Germany; Zentrum für Refraktive Chirurgie, Muenster, Germany
- Ali Al-Timemy: Biomedical Engineering Department, Al-Khwarizmi College of Engineering, University of Baghdad, Iraq; Centre for Robotics and Neural Systems (CRNS), Cognitive Institute, School of Engineering, Computing and Mathematics, Plymouth University, UK
- Alexandru Lavric: Computers, Electronics and Automation Department, Stefan cel Mare University of Suceava, Romania
- Siamak Yousefi: Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA; Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
6
Harikiran J, Chandana BS, Rao BS, Raviteja B. Ocular disease examination of fundus images by hybriding SFCNN and rule mining algorithms. Imaging Sci J 2023. [DOI: 10.1080/13682199.2023.2183456]
Affiliation(s)
- J. Harikiran: School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Sai Chandana: School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Srinivasa Rao: School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Raviteja: Department of Computer Science and Engineering, GITAM Deemed to be University, Visakhapatnam, India
7
Deep Learning and Medical Image Processing Techniques for Diabetic Retinopathy: A Survey of Applications, Challenges, and Future Trends. J Healthc Eng 2023; 2023:2728719. [PMID: 36776951] [PMCID: PMC9911247] [DOI: 10.1155/2023/2728719]
Abstract
Diabetic retinopathy (DR) is a common retinal disease that is widespread worldwide and can lead to complete vision loss, depending on severity. It damages both the retinal blood vessels and the eye's microscopic interior layers. To avoid such outcomes, early detection of DR through routine screening is essential for identifying mild cases, but manual diagnostic procedures are extremely difficult and expensive. The unique contributions of this study are as follows: first, a detailed background of DR and the traditional detection techniques is provided. Second, the various imaging techniques and deep learning applications in DR are presented. Third, different use cases and real-life scenarios relevant to DR detection in which deep learning techniques have been implemented are explored. The study finally highlights potential research opportunities for researchers to explore and deliver effective performance in diabetic retinopathy detection.
8
Elgafi M, Sharafeldeen A, Elnakib A, Elgarayhi A, Alghamdi NS, Sallah M, El-Baz A. Detection of Diabetic Retinopathy Using Extracted 3D Features from OCT Images. Sensors (Basel) 2022; 22:7833. [PMID: 36298186] [PMCID: PMC9610651] [DOI: 10.3390/s22207833]
Abstract
Diabetic retinopathy (DR) is a major health problem that can lead to vision loss if not treated early. In this study, a three-step system for DR detection utilizing optical coherence tomography (OCT) is presented. First, the proposed system segments the retinal layers from the input OCT images. Second, 3D features are extracted from each retinal layer, including the first-order reflectivity and the 3D thickness of the individual OCT layers. Finally, backpropagation neural networks are used to classify the OCT images. Experimental studies on 188 cases confirm the advantages of the proposed system over related methods, achieving an accuracy of 96.81% using leave-one-subject-out (LOSO) cross-validation. These outcomes show the potential of the suggested method for DR detection using OCT images.
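The per-layer features described above (reflectivity and thickness) can be sketched for a single A-scan as follows; the axial-resolution value, function names, and toy data are illustrative assumptions, not the paper's actual parameters or pipeline:

```python
def layer_features(ascan_labels, ascan_intensity, layer_id, axial_res_um=3.9):
    """Thickness (in micrometers) and mean reflectivity of one retinal
    layer along a single A-scan, given per-pixel layer labels from a
    prior segmentation step."""
    idx = [i for i, lab in enumerate(ascan_labels) if lab == layer_id]
    if not idx:
        return 0.0, 0.0
    # Thickness = number of pixels in the layer times the axial pixel size.
    thickness = len(idx) * axial_res_um
    # First-order reflectivity = mean intensity over the layer's pixels.
    mean_refl = sum(ascan_intensity[i] for i in idx) / len(idx)
    return thickness, mean_refl

labels = [0, 1, 1, 1, 2, 2]     # toy per-pixel layer labels for one A-scan
intens = [5, 10, 20, 30, 8, 8]  # toy reflectivity values
thick, refl = layer_features(labels, intens, layer_id=1)
```

Repeating this over all A-scans of a volume yields the 3D thickness and reflectivity maps that feed the classifier.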
Affiliation(s)
- Mahmoud Elgafi: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Ahmed Sharafeldeen: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Elnakib: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Elgarayhi: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Norah S. Alghamdi: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Mohammed Sallah: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt; Higher Institute of Engineering and Technology, New Damietta 34517, Egypt
- Ayman El-Baz: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
9
Classification of diabetic retinopathy with feature selection over deep features using nature-inspired wrapper methods. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109462]
10
Nadeem MW, Goh HG, Hussain M, Liew SY, Andonovic I, Khan MA. Deep Learning for Diabetic Retinopathy Analysis: A Review, Research Challenges, and Future Directions. Sensors (Basel) 2022; 22:6780. [PMID: 36146130] [PMCID: PMC9505428] [DOI: 10.3390/s22186780]
Abstract
Deep learning (DL) enables the creation of computational models comprising multiple processing layers that learn data representations at multiple levels of abstraction. In the recent past, the use of deep learning has proliferated, yielding promising results in applications across a growing number of fields, most notably in image processing, medical image analysis, data analysis, and bioinformatics. DL algorithms have also had a significant positive impact through improvements in screening, recognition, segmentation, prediction, and classification applications across different domains of healthcare, such as those concerning the abdomen, cardiac imaging, pathology, and the retina. Given the extensive body of recent scientific contributions in this discipline, a comprehensive review of deep learning developments in the domain of diabetic retinopathy (DR) analysis, viz., screening, segmentation, prediction, classification, and validation, is presented here. A critical analysis of the relevant reported techniques is carried out and the associated advantages and limitations are highlighted, culminating in the identification of research gaps and future challenges that inform the research community's development of more efficient, robust, and accurate DL models for the monitoring and diagnosis of DR.
Affiliation(s)
- Muhammad Waqas Nadeem: Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Hock Guan Goh: Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Muzammil Hussain: Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Soung-Yue Liew: Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Ivan Andonovic: Department of Electronic and Electrical Engineering, Royal College Building, University of Strathclyde, 204 George St., Glasgow G1 1XW, UK
- Muhammad Adnan Khan: Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam 13557, Korea; Faculty of Computing, Riphah School of Computing and Innovation, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
11
The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey. Bioengineering (Basel) 2022; 9:366. [PMID: 36004891] [PMCID: PMC9405367] [DOI: 10.3390/bioengineering9080366]
Abstract
Traditional dilated ophthalmoscopy can reveal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, and DR is recognized as a worldwide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the imaging modalities available for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases is surveyed. Furthermore, current works are summarized and discussed, and projected future trends are outlined. This survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions that hold promise for clinical application are expected.
12
Fang L, Qiao H. Diabetic retinopathy classification using a novel DAG network based on multi-feature of fundus images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103810]
13
Miao J, Yu J, Zou W, Su N, Peng Z, Wu X, Huang J, Fang Y, Yuan S, Xie P, Huang K, Chen Q, Hu Z, Liu Q. Deep Learning Models for Segmenting Non-perfusion Area of Color Fundus Photographs in Patients With Branch Retinal Vein Occlusion. Front Med (Lausanne) 2022; 9:794045. [PMID: 35847781] [PMCID: PMC9279621] [DOI: 10.3389/fmed.2022.794045]
Abstract
Purpose To develop artificial intelligence (AI)-based deep learning (DL) models for automatically detecting the ischemia type and the non-perfusion area (NPA) from color fundus photographs (CFPs) of patients with branch retinal vein occlusion (BRVO). Methods This was a retrospective analysis of 274 CFPs from patients diagnosed with BRVO. All DL models were trained using a deep convolutional neural network (CNN) based on 45-degree CFPs covering the fovea and the optic disk. We first trained a DL algorithm to identify BRVO patients with or without the need for retinal photocoagulation on 219 CFPs and validated it on 55 CFPs. Next, we trained another DL algorithm to segment the NPA on 104 CFPs and validated it on 29 CFPs, in which the NPA was manually delineated by 3 experienced ophthalmologists according to fundus fluorescein angiography. Both DL models were cross-validated five-fold. Recall, precision, accuracy, and area under the curve (AUC) were used to evaluate the DL models in comparison with independent ophthalmologists of three levels of seniority. Results For the first DL model, the recall, precision, accuracy, and AUC were 0.75 ± 0.08, 0.80 ± 0.07, 0.79 ± 0.02, and 0.82 ± 0.03, respectively, for predicting the necessity of laser photocoagulation from BRVO CFPs. The second DL model segmented the NPA in CFPs of BRVO with an AUC of 0.96 ± 0.02; the recall, precision, and accuracy for segmenting the NPA were 0.74 ± 0.05, 0.87 ± 0.02, and 0.89 ± 0.02, respectively. The performance of the second DL model was nearly comparable with that of the senior doctors and significantly better than that of the residents. Conclusion These results indicate that the DL models can directly identify and segment retinal NPA from the CFPs of patients with BRVO, which can further guide laser photocoagulation. Further research is needed to identify NPA of the peripheral retina in BRVO or in other diseases, such as diabetic retinopathy.
Affiliation(s)
- Jinxin Miao: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Jiale Yu: School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Wenjun Zou: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; Department of Ophthalmology, The Affiliated Wuxi No.2 People's Hospital of Nanjing Medical University, Wuxi, China
- Na Su: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Zongyi Peng: The First School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Xinjing Wu: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Junlong Huang: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yuan Fang: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Songtao Yuan: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Ping Xie: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Kun Huang: School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Zizhong Hu: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qinghuai Liu: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Correspondence: Zizhong Hu; Qinghuai Liu
Collapse
|
14
|
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel, Switzerland) 2022; 12:life12070973. [PMID: 35888063 PMCID: PMC9321111 DOI: 10.3390/life12070973] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 11/22/2022]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is most commonly used for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural-network-based systems, the green channel is most commonly used. However, no conclusion can be drawn from the previous works regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
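The channel-splitting step such a study relies on can be sketched minimally with NumPy, assuming images arrive as H × W × 3 arrays; this is an illustration only (function names invented), not the survey's experimental code.

```python
import numpy as np

def split_channels(rgb):
    """Split an H x W x 3 fundus image array into its R, G, B channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return r, g, b

def green_as_input(rgb):
    """Return the green channel, scaled to [0, 1], as an H x W x 1 array
    suitable as single-channel input to a segmentation network."""
    g = rgb[..., 1].astype(np.float32) / 255.0
    return g[..., None]

if __name__ == "__main__":
    img = np.zeros((4, 4, 3), dtype=np.uint8)
    img[..., 1] = 255  # synthetic pure-green test image
    x = green_as_input(img)
    print(x.shape, float(x.max()))  # (4, 4, 1) 1.0
```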
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Correspondence:
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic

15
Shi C, Lee J, Wang G, Dou X, Yuan F, Zee B. Assessment of image quality on color fundus retinal images using the automatic retinal image analysis. Sci Rep 2022; 12:10455. [PMID: 35729197 PMCID: PMC9213403 DOI: 10.1038/s41598-022-13919-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 05/30/2022] [Indexed: 01/03/2023] Open
Abstract
Image quality assessment is essential for retinopathy detection on color fundus retinal images. However, most studies have focused on classifying good versus poor quality without considering the different types of poor quality. This study developed an automatic retinal image analysis (ARIA) method, incorporating a ResNet50 transfer-learning deep network with an automatic feature-generation approach, to automatically assess image quality and distinguish eye-abnormality-associated poor quality from artefact-associated poor quality on color fundus retinal images. A total of 2434 retinal images, including 1439 good quality and 995 poor quality (483 eye-abnormality-associated and 512 artefact-associated), were used for training, testing, and 10-fold cross-validation. We also performed external validation, with the clinical diagnosis of eye abnormality as the reference standard, to evaluate the performance of the method. The sensitivity, specificity, and accuracy for testing good quality against poor quality were 98.0%, 99.1%, and 98.6%, and for differentiating between eye-abnormality-associated and artefact-associated poor quality were 92.2%, 93.8%, and 93.0%, respectively. In external validation, our method achieved an area under the ROC curve of 0.997 for the overall quality classification and 0.915 for the classification of the two types of poor quality. The proposed approach, ARIA, showed good performance in testing, 10-fold cross-validation, and external validation. This study provides a novel angle for image quality screening based on the different poor-quality types and the corresponding handling methods. It suggests that ARIA can be used as a screening tool in the preliminary stage of retinopathy grading by telemedicine or artificial intelligence analysis.
Affiliation(s)
- Chuying Shi
- Division of Biostatistics, Centre for Clinical Research and Biostatistics, Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong, China
- Jack Lee
- Division of Biostatistics, Centre for Clinical Research and Biostatistics, Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong, China
- Gechun Wang
- Department of Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai, China
- Xinyan Dou
- Department of Ophthalmology, Wusong Hospital, Shanghai, China
- Fei Yuan
- Department of Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai, China
- Benny Zee
- Division of Biostatistics, Centre for Clinical Research and Biostatistics, Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong, China

16
Oltu B, Karaca BK, Erdem H, Özgür A. A systematic review of transfer learning-based approaches for diabetic retinopathy detection. Gazi University Journal of Science 2022. [DOI: 10.35378/gujs.1081546] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Cases of diabetes and related diabetic retinopathy (DR) have been increasing at an alarming rate in modern times. Early detection of DR is an important problem, since it may cause permanent blindness in its late stages. In the last two decades, many different approaches have been applied to DR detection. A review of the academic literature shows that deep neural networks (DNNs) have become the most preferred approach for DR detection. Among these DNN approaches, convolutional neural network (CNN) models are the most used in the field of medical image classification. Designing a new CNN architecture is a tedious and time-consuming task, and training its enormous number of parameters is also difficult. For this reason, instead of training CNNs from scratch, using pre-trained models has been suggested in recent years as a transfer learning approach. Accordingly, the present review focuses on DNN- and transfer-learning-based applications for DR detection, considering 43 publications between 2015 and 2021. The published papers are summarized using 3 figures and 10 tables, giving information about 29 pre-trained CNN models, 13 DR datasets, and standard performance metrics.
Affiliation(s)
- Burcu Oltu
- Başkent University, Faculty of Engineering

17
Gunasekaran K, Pitchai R, Chaitanya GK, Selvaraj D, Annie Sheryl S, Almoallim HS, Alharbi SA, Raghavan SS, Tesemma BG. A Deep Learning Framework for Earlier Prediction of Diabetic Retinopathy from Fundus Photographs. Biomed Res Int 2022; 2022:3163496. [PMID: 35711528 PMCID: PMC9197616 DOI: 10.1155/2022/3163496] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 04/27/2022] [Accepted: 05/11/2022] [Indexed: 11/17/2022]
Abstract
Diabetic patients can be identified directly from retinopathy photographs, but this is a challenging task. The blood vessels visible in fundus photographs are used in several disease-diagnosis approaches. We sought to replicate the findings published on the implementation and verification of a deep learning approach for diabetic retinopathy identification in retinal fundus pictures. To address this issue, the investigative study suggested here uses recurrent neural networks (RNNs) to retrieve characteristics from deep networks. Using computational approaches to identify certain disorders automatically might therefore be a valuable solution. We developed and tested several iterations of a deep learning framework to forecast the progression of diabetic retinopathy in diabetic individuals who had undergone teleretinal diabetic retinopathy assessment in a primary healthcare environment. A collection of one-field or three-field colour fundus pictures served as the input for both iterations. Utilizing the proposed DRNN methodology, advanced identification of the diabetic state was performed utilizing HE detected in an eye's blood vessels. This research demonstrates the difficulties in duplicating deep learning findings, as well as the necessity for further reproduction and replication research to verify deep learning techniques, particularly in the field of healthcare picture processing. The work also investigates the utilization of several other deep neural network frameworks on photographs from the dataset after they have been treated with suitable image-computation methods, such as local average colour subtraction, to help highlight the germane characteristics of a fundoscopy, thus enhancing the identification and assessment of diabetic retinopathy and serving as a guideline framework for practitioners around the globe.
Affiliation(s)
- K. Gunasekaran
- Department of Computer Science and Engineering, Sri Indu College of Engineering and Technology, Hyderabad, Telangana 501510, India
- R. Pitchai
- Department of Computer Science and Engineering, B V Raju Institute of Technology, Narsapur, Telangana 502313, India
- Gogineni Krishna Chaitanya
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh 522502, India
- D. Selvaraj
- Department of Electronics and Communication Engineering, Panimalar Engineering College, Chennai, Tamil Nadu 600123, India
- S. Annie Sheryl
- Department of Computer Science and Engineering, Panimalar Institute of Technology, Chennai, Tamil Nadu 600123, India
- Hesham S. Almoallim
- Department of Oral and Maxillofacial Surgery, College of Dentistry, King Saud University, PO Box-60169, Riyadh-11545, Saudi Arabia
- Sulaiman Ali Alharbi
- Department of Botany and Microbiology, College of Science, King Saud University, PO Box-2455, Riyadh-11451, Saudi Arabia
- S. S. Raghavan
- Department of Microbiology, University of Texas Health and Science Center at Tyler, Tyler-75703, TX, USA

18
Gürcan ÖF, Atici U, Beyca ÖF. A Hybrid Deep Learning-Metaheuristic Model for Diagnosis of Diabetic Retinopathy. Gazi University Journal of Science 2022. [DOI: 10.35378/gujs.919572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
The International Diabetes Federation (IDF) reports that diabetes is one of the most rapidly growing illnesses: about 463 million adults between 20 and 79 years of age have diabetes, and there are also millions of undiagnosed patients. It is estimated that there will be about 578 million diabetics by 2030 [1]. Diabetes causes different eye diseases. Diabetic retinopathy (DR) is one of them and is among the most common causes of vision loss or blindness worldwide. DR progresses slowly and has few indicators in the early stages, which makes its diagnosis a difficult task. Automated systems promise to support the diagnosis of DR, and many deep learning-based models have been developed for DR classification. This study aims to support ophthalmologists in the diagnosis process and to increase the diagnostic performance for DR through a hybrid model. The publicly available Messidor-2 dataset of retinal images was used. In the proposed model, images were first pre-processed, and a deep learning model, InceptionV3, was used for feature extraction with a transfer learning approach. Next, the number of features in the obtained feature vectors was reduced by feature selection with Simulated Annealing (SA). Lastly, the best representative features were used in an XGBoost model. The XGBoost algorithm gives an accuracy of 92.26% in a binary classification task. This study shows that a pre-trained ConvNet combined with a metaheuristic algorithm for feature selection gives a satisfactory result in the diagnosis of DR.
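The SA feature-selection step can be illustrated with a toy simulated-annealing subset search. This is a generic sketch under invented assumptions (the `score` interface, the single-feature flip move, and the 1/k cooling schedule are all choices made here), not the paper's implementation.

```python
import math
import random

def simulated_annealing_select(score, n_features, iters=200, t0=1.0, seed=0):
    """Toy simulated-annealing feature selection: flips one feature in or
    out per step, always accepts improvements, and accepts worse subsets
    with a temperature-dependent probability. `score` maps a frozenset of
    feature indices to a fitness value to maximise."""
    rng = random.Random(seed)
    current = frozenset(i for i in range(n_features) if rng.random() < 0.5)
    cur_s = score(current)
    best, best_s = current, cur_s
    for k in range(1, iters + 1):
        t = t0 / k                      # simple cooling schedule
        i = rng.randrange(n_features)
        cand = current ^ {i}            # flip feature i in/out
        s = score(cand)
        if s >= cur_s or rng.random() < math.exp((s - cur_s) / t):
            current, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
    return best, best_s

if __name__ == "__main__":
    target = {0, 2}
    def score(s):
        # reward target features, lightly penalise extras
        return len(s & target) - 0.1 * len(s - target)
    sel, val = simulated_annealing_select(score, n_features=8)
    print(sorted(sel), val)
```

In the paper's pipeline, `score` would presumably wrap a classifier evaluated on the selected InceptionV3 features; here it is a synthetic fitness for demonstration.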
Affiliation(s)
- Ömer Faruk Beyca
- İstanbul Technical University, Department of Industrial Engineering

19
Elsharkawy M, Elrazzaz M, Sharafeldeen A, Alhalabi M, Khalifa F, Soliman A, Elnakib A, Mahmoud A, Ghazal M, El-Daydamony E, Atwan A, Sandhu HS, El-Baz A. The Role of Different Retinal Imaging Modalities in Predicting Progression of Diabetic Retinopathy: A Survey. Sensors (Basel, Switzerland) 2022; 22:3490. [PMID: 35591182 PMCID: PMC9101725 DOI: 10.3390/s22093490] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/28/2022] [Accepted: 04/29/2022] [Indexed: 06/15/2023]
Abstract
Diabetic retinopathy (DR) is a devastating condition caused by progressive changes in the retinal microvasculature. It is a leading cause of retinal blindness in people with diabetes. Long periods of uncontrolled blood sugar levels result in endothelial damage, leading to macular edema, altered retinal permeability, retinal ischemia, and neovascularization. In order to facilitate rapid screening, diagnosis, and grading of DR, different retinal modalities are utilized. Typically, a computer-aided diagnostic (CAD) system uses retinal images to aid ophthalmologists in the diagnosis process. These CAD systems use a combination of machine learning (ML) models (e.g., deep learning (DL) approaches) to speed up the diagnosis and grading of DR. This survey provides a comprehensive overview of the different imaging modalities used with ML/DL approaches in the DR diagnosis process. The four imaging modalities we focus on are fluorescein angiography, fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We also discuss the limitations of the literature that utilizes these modalities for DR diagnosis, introduce research gaps, and provide suggested solutions for researchers to resolve. Lastly, we provide a thorough discussion of the challenges and future directions of the current state-of-the-art DL/ML approaches, and elaborate on how integrating different imaging modalities with clinical information and demographic data can lead to promising results when diagnosing and grading DR. As a result of this article's comparative analysis and discussion, it remains necessary to use DL methods over existing ML models to detect DR in multiple modalities.
Affiliation(s)
- Mohamed Elsharkawy
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mostafa Elrazzaz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Marah Alhalabi
- Electrical, Computer and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Fahmi Khalifa
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Soliman
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Elnakib
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Eman El-Daydamony
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Atwan
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Harpal Singh Sandhu
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA

20
Matta S, Lamard M, Conze PH, Le Guilcher A, Ricquebourg V, Benyoussef AA, Massin P, Rottier JB, Cochener B, Quellec G. Automatic Screening for Ocular Anomalies Using Fundus Photographs. Optom Vis Sci 2022; 99:281-291. [PMID: 34897234 DOI: 10.1097/opx.0000000000001845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
SIGNIFICANCE Screening for ocular anomalies using fundus photography is key to preventing vision impairment and blindness. With the growing and aging population, automated algorithms that can triage fundus photographs and provide instant referral decisions are relevant to scaling up screening and facing the shortage of ophthalmic expertise. PURPOSE This study aimed to develop a deep learning algorithm that detects any ocular anomaly in fundus photographs and to evaluate this algorithm for "normal versus anomalous" eye examination classification in the diabetic and general populations. METHODS The deep learning algorithm was developed and evaluated in two populations: the diabetic and general populations. Our patient cohorts consist of 37,129 diabetic patients from the OPHDIAT diabetic retinopathy screening network in Paris, France, and 7356 general patients from the OphtaMaine private screening network in Le Mans, France. Each dataset was divided into a development subset and a test subset of more than 4000 examinations each. For ophthalmologist/algorithm comparison, a subset of 2014 examinations from the OphtaMaine test subset was labeled by a second ophthalmologist. First, the algorithm was trained on the OPHDIAT development subset. Then, it was fine-tuned on the OphtaMaine development subset. RESULTS On the OPHDIAT test subset, the area under the receiver operating characteristic curve for normal versus anomalous classification was 0.9592. On the OphtaMaine test subset, the area under the receiver operating characteristic curve was 0.8347 before fine-tuning and 0.9108 after fine-tuning. On the ophthalmologist/algorithm comparison subset, the second ophthalmologist achieved a specificity of 0.8648 and a sensitivity of 0.6682. For the same specificity, the fine-tuned algorithm achieved a sensitivity of 0.8248. CONCLUSIONS The proposed algorithm compares favorably with human performance for normal versus anomalous eye examination classification using fundus photography.
Artificial intelligence, which previously targeted a few retinal pathologies, can be used to screen for ocular anomalies comprehensively.
Affiliation(s)
- Pascale Massin
- Ophthalmology Department, Lariboisière Hospital, APHP, Paris, France

21

22
Jiewei Y, Jingjing Z, Jingjing X, Guilan Z. Downregulation of circ-UBAP2 ameliorates oxidative stress and dysfunctions of human retinal microvascular endothelial cells (hRMECs) via miR-589-5p/EGR1 axis. Bioengineered 2021; 12:7508-7518. [PMID: 34608841 PMCID: PMC8806621 DOI: 10.1080/21655979.2021.1979440] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022] Open
Abstract
Hsa_circ_0001850 (circ-UBAP2) is reported to be upregulated in diabetic retinopathy (DR). However, its role in high glucose (HG)-triggered oxidative stress and vascular dysfunction in DR is unclear. This study aimed to investigate the potential of circ-UBAP2 in DR. The content of malondialdehyde (MDA) and the activities of superoxide dismutase (SOD) and glutathione peroxidase (GSH-PX) were analyzed using the corresponding kits. Western blotting was performed to detect the protein expression of Nrf2, HO-1, and SOD-1. An MTT assay was conducted to assess cell viability. A transwell migration assay was used to determine the migration ability of human retinal microvascular endothelial cells (hRMECs), and a Matrigel tube formation assay was performed to analyze tube formation. The targeting relationships were verified using a luciferase reporter assay. We found that circ-UBAP2 expression was increased in DR patients and HG-treated hRMECs. Downregulation of circ-UBAP2 ameliorated HG-induced oxidative stress and dysfunction of hRMECs. Mechanistically, circ-UBAP2 sponges miR-589-5p, which is downregulated under hyperglycemic conditions. EGR1 was confirmed to be a target gene of miR-589-5p and was overexpressed in HG-treated hRMECs; moreover, EGR1 reversed the effects of miR-589-5p and induced oxidative stress and dysfunction in hRMECs. Taken together, knockdown of circ-UBAP2 relieved HG-induced oxidative stress and dysfunction of hRMECs through the miR-589-5p/EGR1 axis, which may offer a promising therapeutic target for DR.
Affiliation(s)
- Yu Jiewei
- Ophthalmology Department, Jiujiang Hospital of Traditional Chinese Medicine, Jiujiang City, Jiangxi Province, China
- Zhou Jingjing
- Ophthalmology Department, Jiujiang Hospital of Traditional Chinese Medicine, Jiujiang City, Jiangxi Province, China
- Xue Jingjing
- Ophthalmology Department, Affiliated Hospital of Jiangxi University of Traditional Chinese Medicine, Nanchang City, Jiangxi Province, China
- Zhang Guilan
- Ophthalmology Department, The Third Clinical Medical College of China Three Gorges University, Gezhouba Central Hospital of Sinopharm, Yichang City, Hubei Province, China

23
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 08/12/2021] [Indexed: 12/23/2022] Open
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has shown significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, developed using two DL-based algorithms (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities.
Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessments of image quality, field of view, and laterality of the eye of retinal photographs, which could be further integrated into AI-based models to improve operational flow for enhancing disease screening and diagnosis.
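AUROC values like those reported above can be computed directly from continuous scores and binary labels via the Mann-Whitney formulation, without tracing the ROC curve. The sketch below is a generic illustration (function name and data invented), not the study's code.

```python
def auroc(labels, scores):
    """Area under the ROC curve as the Mann-Whitney statistic: the
    probability that a randomly chosen positive receives a higher score
    than a randomly chosen negative (ties count 1/2)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    labels = [1, 1, 0, 0]
    scores = [0.9, 0.4, 0.6, 0.2]
    print(auroc(labels, scores))  # 0.75
```

The O(|pos| × |neg|) double loop is fine for illustration; production code typically uses a rank-based O(n log n) formulation instead.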
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong

24
Lakshminarayanan V, Kheradfallah H, Sarkar A, Jothi Balaji J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J Imaging 2021; 7:165. [PMID: 34460801 PMCID: PMC8468161 DOI: 10.3390/jimaging7090165] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/23/2021] [Accepted: 08/24/2021] [Indexed: 12/16/2022] Open
Abstract
Diabetic retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI)-based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Deep learning (DL)- and machine learning (ML)-based approaches then make it possible to extract features from the images, detect the presence of DR, grade its severity, and segment associated lesions. This review covers the literature dealing with AI approaches to DR, such as ML and DL for classification and segmentation, published in the open literature within six years (2016-2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles that conformed to the scope of the review, and present a list of 43 major datasets.
Collapse
Affiliation(s)
- Vasudevan Lakshminarayanan
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Hoda Kheradfallah
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Arya Sarkar
- Department of Computer Engineering, University of Engineering and Management, Kolkata 700 156, India
25
Wu JH, Liu TYA, Hsu WT, Ho JHC, Lee CC. Performance and Limitation of Machine Learning Algorithms for Diabetic Retinopathy Screening: Meta-analysis. J Med Internet Res 2021; 23:e23863. [PMID: 34407500 PMCID: PMC8406115 DOI: 10.2196/23863] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 11/19/2020] [Accepted: 04/30/2021] [Indexed: 12/23/2022] Open
Abstract
Background Diabetic retinopathy (DR), whose standard diagnosis is performed by human experts, has high prevalence and requires a more efficient screening method. Although machine learning (ML)–based automated DR diagnosis has gained attention due to recent approval of IDx-DR, performance of this tool has not been examined systematically, and the best ML technique for use in a real-world setting has not been discussed. Objective The aim of this study was to systematically examine the overall diagnostic accuracy of ML in diagnosing DR of different categories based on color fundus photographs and to determine the state-of-the-art ML approach. Methods Published studies in PubMed and EMBASE were searched from inception to June 2020. Studies were screened for relevant outcomes, publication types, and data sufficiency, and a total of 60 out of 2128 (2.82%) studies were retrieved after study selection. Extraction of data was performed by 2 authors according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), and the quality assessment was performed according to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). Meta-analysis of diagnostic accuracy was pooled using a bivariate random effects model. The main outcomes included diagnostic accuracy, sensitivity, and specificity of ML in diagnosing DR based on color fundus photographs, as well as the performances of different major types of ML algorithms. Results The primary meta-analysis included 60 color fundus photograph studies (445,175 interpretations). Overall, ML demonstrated high accuracy in diagnosing DR of various categories, with a pooled area under the receiver operating characteristic (AUROC) ranging from 0.97 (95% CI 0.96-0.99) to 0.99 (95% CI 0.98-1.00). 
The performance of ML in detecting more-than-mild DR was robust (sensitivity 0.95; AUROC 0.97), and by subgroup analyses, we observed that robust performance of ML was not limited to benchmark data sets (sensitivity 0.92; AUROC 0.96) but could be generalized to images collected in clinical practice (sensitivity 0.97; AUROC 0.97). Neural network was the most widely used method, and the subgroup analysis revealed a pooled AUROC of 0.98 (95% CI 0.96-0.99) for studies that used neural networks to diagnose more-than-mild DR. Conclusions This meta-analysis demonstrated high diagnostic accuracy of ML algorithms in detecting DR on color fundus photographs, suggesting that state-of-the-art, ML-based DR screening algorithms are likely ready for clinical applications. However, a significant portion of the earlier published studies had methodology flaws, such as the lack of external validation and presence of spectrum bias. The results of these studies should be interpreted with caution.
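The pooled estimates above come from a bivariate random-effects meta-analysis. As a rough illustration of the pooling step only, the sketch below does simplified univariate fixed-effect pooling of sensitivities on the logit scale, with hypothetical per-study values; it is not the authors' bivariate model.

```python
import math

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale. Illustrative only: the paper's bivariate random-effects model
    jointly pools sensitivity and specificity, which this sketch does not."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        x = p * n + 0.5          # events, with continuity correction
        y = (1 - p) * n + 0.5    # non-events
        var = 1.0 / x + 1.0 / y  # approximate variance of the logit
        logits.append(math.log(x / y))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# hypothetical per-study sensitivities and sample sizes
pooled_sens = pool_logit([0.92, 0.95, 0.97], [500, 800, 300])
```

Larger studies get more weight, so the pooled sensitivity lands near the bigger studies' values.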
Affiliation(s)
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
- T Y Alvin Liu
- Retina Division, Wilmer Eye Institute, The Johns Hopkins Medicine, Baltimore, MD, United States
- Wan-Ting Hsu
- Harvard TH Chan School of Public Health, Boston, MA, United States
- Chien-Chang Lee
- Health Data Science Research Group, National Taiwan University Hospital, Taipei, Taiwan
- The Centre for Intelligent Healthcare, National Taiwan University Hospital, Taipei, Taiwan
- Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
26
Devignetting fundus images via Bayesian estimation of illumination component and gamma correction. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.06.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
27
Wang Y, Yu M, Hu B, Jin X, Li Y, Zhang X, Zhang Y, Gong D, Wu C, Zhang B, Yang J, Li B, Yuan M, Mo B, Wei Q, Zhao J, Ding D, Yang J, Li X, Yu W, Chen Y. Deep learning-based detection and stage grading for optimising diagnosis of diabetic retinopathy. Diabetes Metab Res Rev 2021; 37:e3445. [PMID: 33713564 DOI: 10.1002/dmrr.3445] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 02/19/2021] [Accepted: 02/23/2021] [Indexed: 11/07/2022]
Abstract
AIMS To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning-based lesion detection and stage grading. MATERIALS AND METHODS A set of 12,252 eligible fundus images of diabetic patients were manually annotated by 45 licenced ophthalmologists and were randomly split into training, validation, and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images was established as an external test set. For automated referable DR identification, four deep learning models were programmed based on whether two factors were included: DR-related lesions and DR stages. Sensitivity, specificity and the area under the receiver operating characteristic curve (AUC) were reported for referable DR identification, while precision and recall were reported for lesion detection. RESULTS Adding lesion information to the five-stage grading model improved the AUC (0.943 vs. 0.938), sensitivity (90.6% vs. 90.5%) and specificity (80.7% vs. 78.5%) of the model for identifying referable DR in the internal test set. Adding stage information to the lesion-based model increased the AUC (0.943 vs. 0.936) and sensitivity (90.6% vs. 76.7%) of the model for identifying referable DR in the internal test set. Similar trends were also seen in the external test set. DR lesion types with high precision results were preretinal haemorrhage, hard exudate, vitreous haemorrhage, neovascularisation, cotton wool spots and fibrous proliferation. CONCLUSIONS The herein described automated model employed DR lesions and stage information to identify referable DR and displayed better diagnostic value than models built without this information.
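The sensitivity, specificity, and AUC figures quoted in this abstract are standard metrics and straightforward to reproduce; a minimal sketch with hypothetical labels and scores (not the authors' evaluation code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity from binary labels and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney formulation); ties count one half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]              # 1 = referable DR (hypothetical)
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]  # model probabilities (hypothetical)
sens, spec = sensitivity_specificity(labels, [int(s >= 0.5) for s in scores])
area = auc(labels, scores)
```

Thresholding the scores gives one (sensitivity, specificity) operating point; the AUC summarises ranking quality across all thresholds.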
Affiliation(s)
- Yuelin Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Miao Yu
- Department of Endocrinology, Key Laboratory of Endocrinology, National Health Commission, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Bojie Hu
- Department of Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin, China
- Xuemin Jin
- Department of Ophthalmology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yibin Li
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xiao Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Yongpeng Zhang
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Di Gong
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bilei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Mingzhen Yuan
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bin Mo
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Qijie Wei
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Jianchun Zhao
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Dayong Ding
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Jingyun Yang
- Department of Neurological Sciences, Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA
- Xirong Li
- Key Lab of Data Engineering and Knowledge Engineering, Renmin University of China, Beijing, China
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
28
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217 PMCID: PMC8027892 DOI: 10.1038/s41746-021-00438-z] [Citation(s) in RCA: 333] [Impact Index Per Article: 83.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 12/19/2022] Open
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies, and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, UK
- Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, UK
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Dominic King
- Institute of Global Health Innovation, Imperial College London, London, UK
- Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK
- Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, UK
29
Jiang Y, Pan J, Yuan M, Shen Y, Zhu J, Wang Y, Li Y, Zhang K, Yu Q, Xie H, Li H, Wang X, Luo Y. Segmentation of Laser Marks of Diabetic Retinopathy in the Fundus Photographs Using Lightweight U-Net. J Diabetes Res 2021; 2021:8766517. [PMID: 34712739 PMCID: PMC8548126 DOI: 10.1155/2021/8766517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/29/2021] [Revised: 09/03/2021] [Accepted: 09/24/2021] [Indexed: 11/17/2022] Open
Abstract
Diabetic retinopathy (DR) is a prevalent vision-threatening disease worldwide. Laser marks are the scars left after panretinal photocoagulation, a treatment to prevent patients with severe DR from losing vision. In this study, we develop a deep learning algorithm based on the lightweight U-Net to segment laser marks from color fundus photos, which could help indicate a disease stage or provide valuable auxiliary information for the care of DR patients. We prepared our training and testing data, manually annotated by trained and experienced graders from the Image Reading Center, Zhongshan Ophthalmic Center, and made them publicly available to fill the vacancy of public image datasets dedicated to the segmentation of laser marks. The lightweight U-Net, along with two postprocessing procedures, achieved an AUC of 0.9824, an optimal sensitivity of 94.16%, and an optimal specificity of 92.82% on the segmentation of laser marks in fundus photographs. With accurate segmentation and high numeric metrics, the lightweight U-Net method showed reliable performance in automatically segmenting laser marks in fundus photographs, which could support AI-assisted diagnosis of severe-stage DR.
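Pixel-wise segmentation metrics of the kind reported here (sensitivity, specificity, and the closely related Dice overlap) can be computed directly from binary masks; an illustrative sketch on toy masks, not the authors' evaluation code:

```python
import numpy as np

def seg_metrics(pred, truth):
    """Pixel-wise Dice, sensitivity and specificity for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    return float(dice), float(tp / (tp + fn)), float(tn / (tn + fp))

# toy 2x2 masks: one true-positive pixel, one false-positive pixel
dice, sens, spec = seg_metrics(np.array([[1, 1], [0, 0]]),
                               np.array([[1, 0], [0, 0]]))
```

In practice these counts are accumulated over the whole test set before the ratios are taken.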
Affiliation(s)
- Yukang Jiang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jianying Pan
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Ming Yuan
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yanhe Shen
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jin Zhu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yishen Wang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Yewei Li
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Ke Zhang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Qingyun Yu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Huirui Xie
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Huiting Li
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Xueqin Wang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Xinhua College, Sun Yat-Sen University, Guangzhou 510520, China
- Yan Luo
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
30
Morid MA, Borjali A, Del Fiol G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 2021; 128:104115. [PMID: 33227578 DOI: 10.1016/j.compbiomed.2020.104115] [Citation(s) in RCA: 169] [Impact Index Per Article: 42.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Revised: 10/19/2020] [Accepted: 11/09/2020] [Indexed: 02/06/2023]
Abstract
OBJECTIVE Employing transfer learning (TL) with convolutional neural networks (CNNs), well trained on the non-medical ImageNet dataset, has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of the problem description, input, methodology, and outcome. MATERIALS AND METHODS To identify relevant studies, MEDLINE, IEEE, and the ACM digital library were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. RESULTS After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of the feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%) and tooth (57%) studies. AlexNet for brain (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used for studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). 35% of the studies compared their model with other well-trained CNN models, and 33% of them provided visualization for interpretation. DISCUSSION This study identified the most prevalent tracks of implementation in the literature for data preparation, methodology selection and output evaluation for various medical image analysis tasks. Also, we identified several critical research gaps in TL studies on medical image analysis.
The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as identify research gaps and opportunities for innovation.
Affiliation(s)
- Mohammad Amin Morid
- Department of Information Systems and Analytics, Leavey School of Business, Santa Clara University, Santa Clara, CA, USA
- Alireza Borjali
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA
31
Convolutional Neural Networks with Transfer Learning for Recognition of COVID-19: A Comparative Study of Different Approaches. AI 2020. [DOI: 10.3390/ai1040034] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
In this work, to judge the ability of convolutional neural networks (CNNs) to effectively and efficiently transfer image representations learned on the ImageNet dataset to the task of recognizing COVID-19, we propose and analyze four approaches. For this purpose, we use VGG16, ResNetV2, InceptionResNetV2, DenseNet121, and MobileNetV2 CNN models pre-trained on the ImageNet dataset to extract features from X-ray images of COVID and non-COVID patients. Our simulation study reveals that these pre-trained models differ in their ability to transfer image representations. We find that, among the approaches we propose, those that use either ResNetV2 or DenseNet121 to extract features detect COVID-19 better. One of the important findings of our study is that the use of principal component analysis for feature selection improves efficiency. The approach using the fusion of features outperforms all the other approaches, and with this approach we could achieve an accuracy of 0.94 for a three-class classification problem. This work will not only be useful for COVID-19 detection but also for any domain with small datasets.
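The fusion-plus-PCA pipeline described in this abstract can be sketched with stand-in features: `feats_a` and `feats_b` below are random placeholders for features extracted by two pre-trained CNNs, and the PCA is a plain SVD projection, not the authors' implementation.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto its top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)          # centre the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T             # rows of Vt are sorted by variance

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(20, 16))  # stand-in: features from one CNN
feats_b = rng.normal(size=(20, 24))  # stand-in: features from a second CNN
fused = np.concatenate([feats_a, feats_b], axis=1)  # feature fusion
reduced = pca_reduce(fused, 8)       # keep the dominant-variance directions
```

The reduced matrix would then feed a conventional classifier in place of the raw concatenated features.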
32
Li F, Shi JX, Yan L, Wang YG, Zhang XD, Jiang MS, Wu ZZ, Zhou KQ. Lesion-aware convolutional neural network for chest radiograph classification. Clin Radiol 2020; 76:155.e1-155.e14. [PMID: 33077154 DOI: 10.1016/j.crad.2020.08.027] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Accepted: 08/18/2020] [Indexed: 01/18/2023]
Abstract
AIM To investigate the performance of a deep-learning approach termed lesion-aware convolutional neural network (LACNN) to identify 14 different thoracic diseases on chest X-rays (CXRs). MATERIALS AND METHODS In total, 10,738 CXRs of 3,526 patients were collected retrospectively. Of these, 1,937 CXRs of 598 patients were selected for training and optimising the lesion-detection network (LDN) of LACNN. The remaining 8,801 CXRs from 2,928 patients were used to train and test the classification network of LACNN. The discriminative performance of the deep-learning approach was compared with that obtained by the radiologists. In addition, its generalisation was validated on the independent public dataset, ChestX-ray14. The decision-making process of the model was visualised by occlusion testing, and the effect of the integration of CXRs and non-image data on model performance was also investigated. In a systematic evaluation, F1 score, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) metrics were calculated. RESULTS The model generated statistically significantly higher AUC performance compared with radiologists on atelectasis, mass, and nodule, with AUC values of 0.831 (95% confidence interval [CI]: 0.807-0.855), 0.959 (95% CI: 0.944-0.974), and 0.928 (95% CI: 0.906-0.950), respectively. For the other 11 pathologies, there were no statistically significant differences. The average time to complete each CXR classification in the testing dataset was substantially longer for the radiologists (∼35 seconds) than for the LACNN (∼0.197 seconds). In the ChestX-ray14 dataset, the present model also showed competitive performance in comparison with other state-of-the-art deep-learning approaches. Model performance was slightly improved when introducing non-image data. 
CONCLUSION The proposed LACNN achieved radiologist-level performance in identifying thoracic diseases on CXRs, and could potentially expand patient access to CXR diagnostics.
Affiliation(s)
- F Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- J-X Shi
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- L Yan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Y-G Wang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- X-D Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- M-S Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Z-Z Wu
- Department of Precision Mechanical Engineering, Shanghai University, Shanghai, China
- K-Q Zhou
- Liver Cancer Institute, Zhongshan Hospital, Shanghai, China
33
Hao Z, Cui S, Zhu Y, Shao H, Huang X, Jiang X, Xu R, Chang B, Li H. Application of non-mydriatic fundus examination and artificial intelligence to promote the screening of diabetic retinopathy in the endocrine clinic: an observational study of T2DM patients in Tianjin, China. Ther Adv Chronic Dis 2020; 11:2040622320942415. [PMID: 32973990 PMCID: PMC7491217 DOI: 10.1177/2040622320942415] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2020] [Accepted: 06/19/2020] [Indexed: 01/19/2023] Open
Abstract
Background We aimed to determine the role of non-mydriatic fundus examination and artificial intelligence (AI) in screening for diabetic retinopathy (DR) in patients with diabetes in the Metabolic Disease Management Center (MMC) in Tianjin, China. Methods Adult patients with type 2 diabetes mellitus who were first treated by the MMC in Tianjin First Central Hospital and Tianjin 4th Center Hospital were divided into two groups according to whether they presented before or after the MMC was equipped with the non-mydriatic ophthalmoscope and AI system and could complete fundus examination independently (before: control group; after: observation group). The observation indices were as follows: the incidence of DR, the fundus screening rate of the two groups, and fundus screening of diabetic patients with different disease durations. Results A total of 5039 patients were enrolled in this study. The incidence rate of DR was 18.6%, 29.8%, and 49.6% in patients with diabetes duration of ⩽1 year, 1-5 years, and >5 years, respectively. The fundus screening rate in the observation group was significantly higher compared with the control group (81.3% versus 28.4%, χ2 = 1430.918, p < 0.001). The DR screening rate of the observation group was also significantly higher compared with the control group in patients with diabetes duration of ⩽1 year (77.3% versus 20.6%; χ2 = 797.534, p < 0.001), 1-5 years (82.5% versus 31.0%; χ2 = 197.124, p < 0.001) and >5 years (86.9% versus 37.1%; χ2 = 475.609, p < 0.001). Conclusions In the case of limited medical resources, the MMC can carry out one-stop examination, treatment, and management of DR through non-mydriatic fundus examination and AI assistance, thus incorporating the DR screening process into the endocrine clinic so as to facilitate early diagnosis.
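The group comparisons in this abstract rest on Pearson's chi-square test for 2x2 contingency tables; a minimal sketch with hypothetical cell counts (the study's per-group counts are not reproduced here):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. screened/not-screened counts in the
    observation and control groups. Cell counts here are hypothetical."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

stat = chi2_2x2(80, 20, 20, 80)  # strongly unequal screening rates
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the p-value.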
Affiliation(s)
- Zhaohu Hao
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Shanshan Cui
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Yanjuan Zhu
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Hailin Shao
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, The 4th Central Hospital Affiliated to Nankai University, The 4th Center Clinical College of Tianjin Medical University, Tianjin, China
- Xiao Huang
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Xia Jiang
- Department of Endocrinology, Tianjin First Central Hospital, The First Center Clinical College of Tianjin Medical University, Tianjin, China
- Rong Xu
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, The 4th Central Hospital Affiliated to Nankai University, The 4th Center Clinical College of Tianjin Medical University, Tianjin, China
- Baocheng Chang
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin 300134, China
- Huanming Li
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, The 4th Central Hospital Affiliated to Nankai University, The 4th Center Clinical College of Tianjin Medical University, No. 1 Zhongshan Road, Tianjin 300140, China
34
Bibi I, Mir J, Raja G. Automated detection of diabetic retinopathy in fundus images using fused features. Phys Eng Sci Med 2020; 43:1253-1264. [PMID: 32955686 DOI: 10.1007/s13246-020-00929-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2020] [Accepted: 09/14/2020] [Indexed: 10/23/2022]
Abstract
Diabetic retinopathy (DR) is a severe eye condition arising as a complication of diabetes, which can lead to vision loss if left untreated. In this paper, a computationally simple, yet very effective, DR detection method is proposed. First, a segmentation-independent, two-stage preprocessing-based technique is proposed which can effectively extract DR pathognomonic signs, both bright and red lesions, and blood vessels from the eye fundus image. Then, the performance of Local Binary Patterns (LBP), Local Ternary Patterns (LTP), Dense Scale-Invariant Feature Transform (DSIFT) and Histogram of Oriented Gradients (HOG) as feature descriptors for fundus images is thoroughly analyzed. SVM kernel-based classifiers are trained and tested, using a 5-fold cross-validation scheme, on both a newly acquired fundus image database from the local hospital and a combined database created from the openly available databases. A classification accuracy of 96.6% with 0.964 sensitivity and 0.969 specificity is achieved using a Cubic SVM classifier with LBP and LTP fused features for the local database. More importantly, in out-of-sample testing on the combined database, the model gives an accuracy of 95.21% with a sensitivity of 0.970 and a specificity of 0.932. This indicates that the proposed model is well fitted and generalizes well, which is further corroborated by the presented train-test curves.
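One of the descriptors compared in this paper, the Local Binary Pattern, can be written in a few lines of NumPy. This is an illustrative basic 8-neighbour variant (no rotation invariance, no uniform-pattern mapping), not the authors' implementation:

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a 2-D grayscale image."""
    c = img[1:-1, 1:-1]  # centre pixels
    # neighbours clockwise from the top-left, as shifted views of the image
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
             img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
             img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neigh):
        code |= (n >= c).astype(np.uint8) << bit  # set bit where neighbour >= centre
    return code

img = np.array([[5, 5, 5],
                [5, 1, 5],
                [5, 5, 5]], dtype=np.int32)
codes = lbp_8(img)  # one interior pixel, all neighbours above the centre
```

The histogram of these codes over an image (or image region) is what the SVM classifiers are trained on.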
Affiliation(s)
- Iqra Bibi
- Electrical Engineering Department, University of Engineering and Technology, Taxila, Pakistan
- Junaid Mir
- Electrical Engineering Department, University of Engineering and Technology, Taxila, Pakistan
- Gulistan Raja
- Electrical Engineering Department, University of Engineering and Technology, Taxila, Pakistan
35
Tseng VS, Chen CL, Liang CM, Tai MC, Liu JT, Wu PY, Deng MS, Lee YW, Huang TY, Chen YH. Leveraging Multimodal Deep Learning Architecture with Retina Lesion Information to Detect Diabetic Retinopathy. Transl Vis Sci Technol 2020; 9:41. [PMID: 32855845 PMCID: PMC7424907 DOI: 10.1167/tvst.9.2.41] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Accepted: 05/28/2020] [Indexed: 01/27/2023] Open
Abstract
Purpose To improve disease severity classification from fundus images using a hybrid architecture with symptom awareness for diabetic retinopathy (DR). Methods We used 26,699 fundus images of 17,834 diabetic patients from three Taiwanese hospitals collected from 2007 to 2018 for DR severity classification. Thirty-seven ophthalmologists verified the images using lesion annotation and severity classification as the ground truth. Two deep learning fusion architectures were proposed: late fusion, which combines lesion and severity classification models in parallel using a postprocessing procedure, and two-stage early fusion, which combines lesion detection and classification models sequentially and mimics the decision-making process of ophthalmologists. Messidor-2, with 1748 images, was used to evaluate and benchmark the performance of the architecture. The primary evaluation metrics were classification accuracy, weighted κ statistic, and area under the receiver operating characteristic curve (AUC). Results For hospital data, a hybrid architecture achieved a good detection rate, with accuracy and weighted κ of 84.29% and 84.01%, respectively, for five-class DR grading. It also classified images of early-stage DR more accurately than conventional algorithms. The Messidor-2 model achieved an AUC of 97.09% in referral DR detection, compared to AUCs of 85% to 99% for state-of-the-art algorithms that learned from a larger database. Conclusions Our hybrid architectures strengthened and extracted characteristics from DR images, while improving the performance of DR grading, thereby increasing the robustness and confidence of the architectures for general use. Translational Relevance The proposed fusion architectures can enable faster and more accurate diagnosis of various DR pathologies than that obtained in current manual clinical practice.
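The late-fusion variant above combines the lesion and severity models in a postprocessing step. A hedged sketch of one plausible fusion rule, a convex combination of the two models' class-probability vectors (the weight `w` and the rule itself are illustrative assumptions, not the paper's actual procedure):

```python
def late_fusion(severity_probs, lesion_probs, w=0.7):
    """Fuse two class-probability vectors by convex combination,
    renormalize, and return the fused vector plus the argmax class."""
    if len(severity_probs) != len(lesion_probs):
        raise ValueError("probability vectors must have the same length")
    fused = [w * s + (1.0 - w) * l
             for s, l in zip(severity_probs, lesion_probs)]
    total = sum(fused)
    fused = [f / total for f in fused]
    pred = max(range(len(fused)), key=fused.__getitem__)
    return fused, pred
```

With equal weights, a severity model leaning toward class 0 can be overruled by a lesion model that strongly favors class 1.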
Affiliation(s)
- Vincent S Tseng
- Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan; Institute of Data Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan
- Ching-Long Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Chang-Min Liang
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Ming-Cheng Tai
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Jung-Tzu Liu
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Po-Yi Wu
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Ming-Shan Deng
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Ya-Wen Lee
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Teng-Yi Huang
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Yi-Hao Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
36
Cano J, O'Neill WD, Penn RD, Blair NP, Kashani AH, Ameri H, Kaloostian CL, Shahidi M. Classification of advanced and early stages of diabetic retinopathy from non-diabetic subjects by an ordinary least squares modeling method applied to OCTA images. Biomed Opt Express 2020; 11:4666-4678. [PMID: 32923070 PMCID: PMC7449717 DOI: 10.1364/boe.394472] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 07/04/2020] [Accepted: 07/12/2020] [Indexed: 05/02/2023]
Abstract
As the prevalence of diabetic retinopathy (DR) continues to rise, there is a need to develop computer-aided screening methods. The current study reports and validates an ordinary least squares (OLS) method to model optical coherence tomography angiography (OCTA) images and derive OLS parameters for classifying proliferative DR (PDR) and no/mild non-proliferative DR (NPDR) from non-diabetic subjects. OLS parameters were correlated with vessel metrics quantified from OCTA images and were used to determine predicted probabilities of PDR, no/mild NPDR, and non-diabetics. The classification rates of PDR and no/mild NPDR from non-diabetic subjects were 94% and 91%, respectively. The method had excellent predictive ability and was validated. With further development, the method may have potential clinical utility and contribute to image-based computer-aided screening and classification of stages of DR and other ocular and systemic diseases.
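The OLS modeling step can be illustrated with a minimal least-squares fit; the fitted parameters would then feed a downstream classifier. This is a generic sketch under simplifying assumptions (a single predictor with intercept), not a reproduction of the paper's actual OCTA image model:

```python
import numpy as np

def ols_fit(x, y):
    """Fit y = b0 + b1*x by ordinary least squares; the fitted
    parameters (b0, b1) can serve as inputs to a classifier."""
    X = np.column_stack([np.ones(len(x)), np.asarray(x, dtype=float)])
    b, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return b
```

For perfectly linear data the fit recovers the exact intercept and slope.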
Affiliation(s)
- Jennifer Cano
- Department of Ophthalmology, University of Southern California, Los Angeles, CA 90007, USA
- William D. O'Neill
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Richard D. Penn
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Neurosurgery, Rush University and Hospital, Chicago, IL 60612, USA
- Norman P. Blair
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60607, USA
- Amir H. Kashani
- Department of Ophthalmology, University of Southern California, Los Angeles, CA 90007, USA
- Hossein Ameri
- Department of Ophthalmology, University of Southern California, Los Angeles, CA 90007, USA
- Carolyn L. Kaloostian
- Department of Family Medicine, University of Southern California, Los Angeles, CA 90007, USA
- Mahnaz Shahidi
- Department of Ophthalmology, University of Southern California, Los Angeles, CA 90007, USA
37
Le D, Alam M, Yao CK, Lim JI, Hsieh YT, Chan RVP, Toslak D, Yao X. Transfer Learning for Automated OCTA Detection of Diabetic Retinopathy. Transl Vis Sci Technol 2020; 9:35. [PMID: 32855839 PMCID: PMC7424949 DOI: 10.1167/tvst.9.2.35] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 04/05/2020] [Indexed: 01/10/2023] Open
Abstract
Purpose To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy. Methods A deep-learning convolutional neural network (CNN) architecture, VGG16, was employed for this study. A transfer learning process was implemented to retrain the CNN for robust OCTA classification. One dataset, consisting of images of 32 healthy eyes, 75 eyes with diabetic retinopathy (DR), and 24 eyes with diabetes but no DR (NoDR), was used for training and cross-validation. A second dataset consisting of 20 NoDR and 26 DR eyes was used for external validation. To demonstrate the feasibility of using artificial intelligence (AI) screening of DR in clinical environments, the CNN was incorporated into a graphical user interface (GUI) platform. Results With the last nine layers retrained, the CNN architecture achieved the best performance for automated OCTA classification. The cross-validation accuracy of the retrained classifier for differentiating among healthy, NoDR, and DR eyes was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR eyes were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment. Conclusions With a transfer learning process for retraining, a CNN can be used for robust OCTA classification of healthy, NoDR, and DR eyes. The AI-based OCTA classification platform may provide a practical solution to reducing the burden of experienced ophthalmologists with regard to mass screening of DR patients. Translational Relevance Deep-learning-based OCTA classification can alleviate the need for manual graders and improve DR screening efficiency.
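Retraining only the last nine layers, as described above, is a standard transfer-learning recipe. A framework-agnostic sketch (each "layer" here is just a dict with a hypothetical `trainable` flag, not a real VGG16 module):

```python
def mark_retrain_layers(layers, n_retrain=9):
    """Freeze all but the last n_retrain layers: earlier layers keep
    their pretrained weights, later ones are fine-tuned on OCTA data."""
    cut = max(len(layers) - n_retrain, 0)
    for i, layer in enumerate(layers):
        layer["trainable"] = i >= cut
    return layers
```

For a 16-layer network with `n_retrain=9`, the first 7 layers stay frozen and the last 9 are marked trainable.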
Affiliation(s)
- David Le
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Minhaj Alam
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Cham K Yao
- Hinsdale Central High School, Hinsdale, IL, USA
- Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University, Taipei, Taiwan
- Robison V P Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Devrim Toslak
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA; Department of Ophthalmology, Antalya Training and Research Hospital, Antalya, Turkey
- Xincheng Yao
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
38
Hacisoftaoglu RE, Karakaya M, Sallam AB. Deep Learning Frameworks for Diabetic Retinopathy Detection with Smartphone-based Retinal Imaging Systems. Pattern Recognit Lett 2020; 135:409-417. [PMID: 32704196 DOI: 10.1016/j.patrec.2020.04.009] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Diabetic Retinopathy (DR) may result in various degrees of vision loss and even blindness if not diagnosed in a timely manner. Therefore, having an annual eye exam helps early detection to prevent vision loss in earlier stages, especially for diabetic patients. Recent technological advances have made smartphone-based retinal imaging systems available on the market to perform small-sized, low-powered, and affordable DR screening in diverse environments. However, the accuracy of DR detection depends on the field of view and image quality. Since smartphone-based retinal imaging systems have much more compact designs than a traditional fundus camera, captured images are likely to be of lower quality with a smaller field of view. Our motivation in this paper is to develop an automatic DR detection model for smartphone-based retinal images using a deep learning approach with the ResNet50 network. This study first utilized the well-known AlexNet, GoogLeNet, and ResNet50 architectures with a transfer learning approach. Second, these frameworks were retrained with retina images from several datasets, including EyePACS, Messidor, IDRiD, and Messidor-2, to investigate the effect of using images from single, cross, and multiple datasets. Third, the proposed ResNet50 model was applied to smartphone-based synthetic images to explore the DR detection accuracy of smartphone-based retinal imaging systems. Based on the vision-threatening diabetic retinopathy detection results, the proposed approach achieved a high classification accuracy of 98.6%, with 98.2% sensitivity and 99.1% specificity, while its AUC was 0.9978 on the independent test dataset. As the main contributions, DR detection accuracy was improved using a deep transfer learning approach for the ResNet50 network with publicly available datasets, and the effect of the field of view in smartphone-based retinal imaging was studied. Although a smaller number of images were used in the training set compared with existing studies, considerably high accuracies for validation and testing data were obtained.
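The sensitivity and specificity reported above follow the standard binary definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP). A small self-contained sketch for computing them from predictions:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary labels
    (1 = vision-threatening DR present, 0 = absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```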
Affiliation(s)
- Mahmut Karakaya
- Dept. of Computer Science, University of Central Arkansas, Conway, AR, 72035, USA
- Ahmed B Sallam
- Jones Eye Institute, University of Arkansas for Medical Sciences, Little Rock, AR 72205, USA