1

Yang C, Zhou C. Observation on the changes of visual field and optic nerve fiber layer thickness in patients with early diabetic retinopathy. Photodiagnosis Photodyn Ther 2024; 47:104197. [PMID: 38723758] [DOI: 10.1016/j.pdpdt.2024.104197]
Abstract
BACKGROUND Diabetic retinopathy (DR) is a common complication of diabetes mellitus (DM) and a leading cause of vision loss. Early detection of DR-related neurodegenerative changes is crucial for effective management and prevention of vision loss in diabetic patients. METHODS In this study, we employed spectral-domain polarization-sensitive optical coherence tomography (SD PS-OCT) to assess retinal nerve fiber layer (RNFL) changes in 120 eyes from 60 type 1 DM patients without clinical DR and 60 age-matched healthy controls. Visual field testing was performed to evaluate mean sensitivity (MS) and mean defect (MD) as indicators of visual function. RESULTS SD PS-OCT measurements revealed significant reductions in RNFL birefringence, retardation, and thickness in type 1 DM patients compared to healthy controls. Visual field testing showed decreased MS and increased MD in DM patients, indicating functional impairment correlated with the RNFL alterations. CONCLUSION Our findings demonstrate early neurodegenerative changes in the RNFL of type 1 DM patients without clinical DR, highlighting the potential of SD PS-OCT as a sensitive tool for early detection of subclinical DR-related neurodegeneration. These results underscore the importance of regular ophthalmic screening in diabetic patients to enable timely intervention and prevent vision-threatening complications.
Affiliation(s)
- Chen Yang
- In Eye Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu 610000, China
- Chunyang Zhou
- In Eye Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu 610000, China
2

Parmar UPS, Surico PL, Singh RB, Romano F, Salati C, Spadea L, Musa M, Gagliano C, Mori T, Zeppieri M. Artificial Intelligence (AI) for Early Diagnosis of Retinal Diseases. Medicina (Kaunas) 2024; 60:527. [PMID: 38674173] [PMCID: PMC11052176] [DOI: 10.3390/medicina60040527]
Abstract
Artificial intelligence (AI) has emerged as a transformative tool in the field of ophthalmology, revolutionizing disease diagnosis and management. This paper provides a comprehensive overview of AI applications in various retinal diseases, highlighting its potential to enhance screening efficiency, facilitate early diagnosis, and improve patient outcomes. Herein, we elucidate the fundamental concepts of AI, including machine learning (ML) and deep learning (DL), and their application in ophthalmology, underscoring the significance of AI-driven solutions in addressing the complexity and variability of retinal diseases. Furthermore, we delve into the specific applications of AI in retinal diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), macular neovascularization, retinopathy of prematurity (ROP), retinal vein occlusion (RVO), hypertensive retinopathy (HR), retinitis pigmentosa, Stargardt disease, Best vitelliform macular dystrophy, and sickle cell retinopathy. We focus on the current landscape of AI technologies, including various AI models, their performance metrics, and clinical implications. We also address challenges and pitfalls associated with the integration of AI in clinical practice, including the "black box phenomenon", biases in data representation, and limitations in comprehensive patient assessment. In conclusion, this review emphasizes the collaborative role of AI alongside healthcare professionals, advocating for a synergistic approach to healthcare delivery. It highlights the importance of leveraging AI to augment, rather than replace, human expertise, thereby maximizing its potential to revolutionize healthcare delivery, mitigate healthcare disparities, and improve patient outcomes in the evolving landscape of medicine.
Affiliation(s)
- Pier Luigi Surico
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Rohan Bir Singh
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Francesco Romano
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Carlo Salati
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
- Leopoldo Spadea
- Eye Clinic, Policlinico Umberto I, “Sapienza” University of Rome, 00142 Rome, Italy
- Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
- Caterina Gagliano
- Faculty of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
- Tommaso Mori
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Department of Ophthalmology, University of California San Diego, La Jolla, CA 92122, USA
- Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
3

Yuan H, Dai M, Shi C, Li M, Li H. A generative adversarial neural network with multi-attention feature extraction for fundus lesion segmentation. Int Ophthalmol 2023; 43:5079-5090. [PMID: 37851139] [DOI: 10.1007/s10792-023-02911-y]
Abstract
PURPOSE Fundus lesion segmentation determines the location and size of diabetic retinopathy lesions in fundus images, which helps doctors develop the best eye treatment plan. However, owing to the scattered distribution and mutual similarity of lesions, it is extremely difficult to extract representative lesion features and accurately segment lesion areas. METHODS To address this problem, a generative adversarial network with multi-attention feature extraction is developed to segment diabetic retinopathy regions. The main contributions are as follows: (1) An improved residual U-Net combined with a self-attention mechanism is designed as the generator to fully extract local and global lesion features while reducing the loss of key feature information. Considering the correlation between the same lesion features across different samples, an external attention mechanism is introduced into the residual U-Net to focus on the relevant features of the same lesions in different samples throughout the entire dataset. (2) A discriminator based on the PatchGAN structure is designed to further enhance the segmentation ability of the generator by discriminating between true and false samples. RESULTS The proposed network was evaluated on the public IDRiD dataset, achieving Dice coefficients of 75.7%, 76.53%, 50.06%, and 45.89% for EX, SE, MA, and HE, respectively. CONCLUSION The experimental results show that the generative adversarial network is well suited to accurate segmentation of diabetic retinopathy lesions in fundus images.
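The Dice coefficients reported above are the standard overlap measure between a predicted lesion mask and the ground-truth annotation. A minimal sketch of how such a score is computed on binary masks (the toy masks below are illustrative, not IDRiD data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: the prediction covers 2 of the 3 true foreground pixels.
pred = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 2*2/(2+3) → 0.8
```

Per-lesion-class scores like those above would simply apply this to each class's mask separately.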
Affiliation(s)
- Haiying Yuan
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
- Mengfan Dai
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
- Cheng Shi
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
- Minghao Li
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
- Haihang Li
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
4

Onikanni SA, Lawal B, Munyembaraga V, Bakare OS, Taher M, Khotib J, Susanti D, Oyinloye BE, Noriega L, Famuti A, Fadaka AO, Ajiboye BO. Profiling the Antidiabetic Potential of Compounds Identified from Fractionated Extracts of Entada africana toward Glucokinase Stimulation: Computational Insight. Molecules 2023; 28:5752. [PMID: 37570723] [PMCID: PMC10420681] [DOI: 10.3390/molecules28155752]
Abstract
Glucokinase plays an important role in regulating the blood glucose level and serves as an essential therapeutic target in type 2 diabetes management. Entada africana is a medicinal plant and a rich source of bioactive ligands with the potential to yield new glucokinase-targeting drugs for conditions such as diabetes and obesity. The study therefore used a computational approach to predict how compounds identified from Entada africana interact with the allosteric binding site of the enzyme. We retrieved the three-dimensional (3D) crystal structure of glucokinase (PDB ID: 4L3Q) from the online Protein Data Bank and prepared it using Maestro 13.5, Schrödinger Suite 2022-3. The identified compounds were subjected to ADME analysis, docking, pharmacophore modeling, and molecular simulation. The results show the binding potential of the identified ligands to the amino acid residues at the binding site of the glucokinase activator through conventional chemical interactions such as hydrogen bonds and hydrophobic contacts. The molecules compared favourably with the standard ligand, inducing structural and functional changes. The bioactive components from Entada africana could therefore be effective activators of glucokinase, paving the way for the discovery of therapeutic drugs for the treatment of diabetes and its related complications.
Affiliation(s)
- Sunday Amos Onikanni
- College of Medicine, Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
- Department of Chemical Sciences, Biochemistry Unit, Afe-Babalola University, Ado-Ekiti 360101, Ekiti State, Nigeria
- Bashir Lawal
- Department of Pathology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Valens Munyembaraga
- Institute of Translational Medicine and New Drug Development, College of Medicine, China Medical University, Taichung 40402, Taiwan
- University Teaching Hospital of Butare, Huye 15232, Rwanda
- Oluwafemi Shittu Bakare
- Department of Biochemistry, Faculty of Science, Adekunle Ajasin University, Akungba Akoko 342111, Ondo State, Nigeria
- Muhammad Taher
- Department of Pharmaceutical Technology, Kulliyyah of Pharmacy, International Islamic University Malaysia, Kuantan 25200, Pahang, Malaysia
- Pharmaceutics and Translational Research Group, Kulliyyah of Pharmacy, International Islamic University Malaysia, Kuantan 25200, Pahang, Malaysia
- Junaidi Khotib
- Department of Pharmacy Practice, Faculty of Pharmacy, Airlangga University, Surabaya 60115, Indonesia
- Deny Susanti
- Department of Chemistry, Kulliyyah of Science, International Islamic University Malaysia, Kuantan 25200, Pahang, Malaysia
- Babatunji Emmanuel Oyinloye
- Department of Chemical Sciences, Biochemistry Unit, Afe-Babalola University, Ado-Ekiti 360101, Ekiti State, Nigeria
- Biotechnology and Structural Biology (BSB) Group, Department of Biochemistry and Microbiology, University of Zululand, KwaDlangezwa 3886, South Africa
- Institute of Drug Research and Development, SE Bogoro Center, Afe Babalola University, PMB 5454, Ado-Ekiti 360001, Ekiti State, Nigeria
- Lloyd Noriega
- College of Medicine, Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
- Ayodeji Famuti
- Honey T Scientific Company, Ibadan 234002, Oyo State, Nigeria
- Basiru Olaitan Ajiboye
- Institute of Drug Research and Development, SE Bogoro Center, Afe Babalola University, PMB 5454, Ado-Ekiti 360001, Ekiti State, Nigeria
- Phytomedicine and Molecular Toxicology Research Laboratory, Department of Biochemistry, Federal University, Oye-Ekiti 371104, Ekiti State, Nigeria
5

Cheng X, Wang H. A generic model-free feature screening procedure for ultra-high dimensional data with categorical response. Comput Methods Programs Biomed 2023; 229:107269. [PMID: 36463676] [DOI: 10.1016/j.cmpb.2022.107269]
Abstract
BACKGROUND AND OBJECTIVE Identifying active features in ultra-high dimensional data is one of the primary and vital tasks in statistical learning and biological discovery. METHODS In this paper, we develop a generic concordance index screening (CI-SIS) procedure for ultra-high dimensional data with categorical response. The proposed procedure is model-free and nonparametric, being based on the concordance index measure. It enjoys both the sure screening and ranking consistency properties under relatively weak assumptions. We investigate the flexibility of the procedure in some commonly encountered challenging settings in biomedical studies, such as category-adaptive data and extremely unbalanced response distributions. A data-driven threshold selection procedure via knockoff features is also presented. RESULTS On a real lung dataset, our method achieves lower prediction error, with mean errors of 0.107 using linear discriminant analysis (LDA) and 0.117 using random forest (RF). In addition, we obtain accuracy improvements of 3% with LDA and 5% with RF over the runner-up method. On the more challenging SRBCT (small round blue cell tumours) dataset, CI-SIS delivers a striking performance improvement, with accuracy at least 8% higher than all competing methods. CONCLUSION Experimental results show that the proposed method can efficiently identify genes associated with certain types of disease. The features that survive our screening (after irrelevant ones are filtered out) can therefore help doctors make precise diagnoses and provide refined treatment for patients.
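The core of such a procedure is a marginal concordance measure between each feature and the categorical response, used to rank the features. A hedged sketch of what that screening step could look like for a binary response (this illustrates the general sure-screening idea, not the authors' exact CI-SIS statistic):

```python
import numpy as np

def concordance_index(x, y):
    """Concordance between feature x and binary response y: the share of
    (case, control) pairs in which the case has the larger x value, ties
    counted as 1/2. For a binary response this equals the AUC."""
    x, y = np.asarray(x, float), np.asarray(y)
    cases, controls = x[y == 1], x[y == 0]
    diff = cases[:, None] - controls[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

def ci_screen(X, y, d):
    """Model-free marginal screening: score each feature by how far its
    concordance with the response is from the uninformative value 0.5,
    then keep the d top-ranked features."""
    scores = [abs(concordance_index(X[:, j], y) - 0.5) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:d]

# Synthetic high-dimensional example: only features 3 and 7 drive the response.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = (1.5 * X[:, 3] + 1.2 * X[:, 7] + 0.3 * rng.normal(size=n) > 0).astype(int)
print(sorted(int(j) for j in ci_screen(X, y, 2)))  # expected to recover {3, 7}
```

In practice `d` would be chosen by a rule such as n/log(n) or, as in the paper, via the knockoff-based threshold.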
Affiliation(s)
- Xuewei Cheng
- School of Mathematics and Statistics, Central South University, Changsha, China; Department of Statistics and Data Science, National University of Singapore, Singapore
- Hong Wang
- School of Mathematics and Statistics, Central South University, Changsha, China
6

Nage P, Shitole S, Kokare M. An intelligent approach for detection and grading of diabetic retinopathy and diabetic macular edema using retinal images. Comput Methods Biomech Biomed Eng Imaging Vis 2023. [DOI: 10.1080/21681163.2022.2164358]
Affiliation(s)
- Pranoti Nage
- Computer Science & Technology, Usha Mittal Institute of Technology for Women, S.N.D.T. Women’s University, Mumbai, India
- Sanjay Shitole
- Information Technology, Usha Mittal Institute of Technology for Women, S.N.D.T. Women’s University, Mumbai, India
- Manesh Kokare
- Centre of Excellence in Signal & Image Processing, Shri Guru Gobind Singhji Institute of Technology, Nanded, India
7

Xiao Y, Hu Y, Quan W, Yang Y, Lai W, Wang X, Zhang X, Zhang B, Wu Y, Wu Q, Liu B, Zeng X, Lin Z, Fang Y, Hu Y, Feng S, Yuan L, Cai H, Li T, Lin H, Yu H. Development and validation of a deep learning system to classify aetiology and predict anatomical outcomes of macular hole. Br J Ophthalmol 2023; 107:109-115. [PMID: 34348922] [PMCID: PMC9763201] [DOI: 10.1136/bjophthalmol-2021-318844]
Abstract
AIMS To develop a deep learning (DL) model for automatic classification of macular hole (MH) aetiology (idiopathic or secondary), and a multimodal deep fusion network (MDFN) model for reliable prediction of MH status (closed or open) at 1 month after vitrectomy and internal limiting membrane peeling (VILMP). METHODS In this multicentre retrospective cohort study, a total of 330 MH eyes with 1082 optical coherence tomography (OCT) images and 3300 clinical data points enrolled from four ophthalmic centres were used to train, validate and externally test the DL and MDFN models. 266 eyes from three centres were randomly split at the eye level into a training set (80%) and a validation set (20%). In the external testing dataset, 64 eyes were included from the remaining centre. All eyes underwent macular OCT scanning at baseline and 1 month after VILMP. The area under the receiver operating characteristic curve (AUC), accuracy, specificity and sensitivity were used to evaluate the performance of the models. RESULTS In the external testing set, the AUC, accuracy, specificity and sensitivity of the MH aetiology classification model were 0.965, 0.950, 0.870 and 0.938, respectively; those of the postoperative MH status prediction model were 0.904, 0.825, 0.977 and 0.766, respectively; and those of the postoperative idiopathic MH status prediction model were 0.947, 0.875, 0.815 and 0.979, respectively. CONCLUSION Our DL-based models can accurately classify MH aetiology and predict MH status after VILMP. These models could help ophthalmologists in the diagnosis and surgical planning of MH.
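The accuracy, sensitivity and specificity figures quoted above all derive from a 2x2 confusion matrix of predicted versus true labels. A minimal illustration (the toy labels below are made up, not study data):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from a 2x2 confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy example: 10 eyes, 6 truly closed (1) and 4 truly open (0).
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1]
print(binary_metrics(y_true, y_pred))
# accuracy 8/10 = 0.8, sensitivity 5/6 ≈ 0.833, specificity 3/4 = 0.75
```

The AUC additionally requires the model's raw probabilities rather than hard labels, since it sweeps the decision threshold.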
Affiliation(s)
- Yu Xiao
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Yijun Hu
- Aier Institute of Refractive Surgery, Refractive Surgery Center, Guangzhou Aier Eye Hospital, Guangzhou, China; Aier School of Ophthalmology, Central South University, Changsha, China
- Wuxiu Quan
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Weiyi Lai
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xun Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Bin Zhang
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Yuqing Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Qiaowei Wu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Baoyi Liu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Xiaomin Zeng
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Zhanjie Lin
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Ying Fang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yu Hu
- Department of Ophthalmology, the First Affiliated Hospital of Kunming Medical University, Kunming, China
- Songfu Feng
- Department of Ophthalmology, Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Ling Yuan
- Department of Ophthalmology, the First Affiliated Hospital of Kunming Medical University, Kunming, China
- Hongmin Cai
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Tao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center of Precision Medicine, Sun Yat-sen University, Guangzhou, China
- Honghua Yu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
8

Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579] [DOI: 10.1016/j.compbiomed.2022.106277]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error in comparison with computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for retinal image analysis are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
9

Li F, Tang S, Chen Y, Zou H. Deep attentive convolutional neural network for automatic grading of imbalanced diabetic retinopathy in retinal fundus images. Biomed Opt Express 2022; 13:5813-5835. [PMID: 36733744] [PMCID: PMC9872872] [DOI: 10.1364/boe.472176]
Abstract
Automated fine-grained diabetic retinopathy (DR) grading is of great significance for assisting ophthalmologists in monitoring DR and designing tailored treatments for patients. Nevertheless, it is a challenging task as a result of high intra-class variation, high inter-class similarity, small lesions, and imbalanced data distributions. The pivotal factor for success in fine-grained DR grading is to discern the subtle associated lesion features, such as microaneurysms (MA), hemorrhages (HM), soft exudates (SE), and hard exudates (HE). In this paper, we constructed a simple yet effective deep attentive convolutional neural network (DACNN) for DR grading and lesion discovery with only image-wise supervision. Designed as a top-down architecture, our model incorporates stochastic atrous spatial pyramid pooling (sASPP), a global attention mechanism (GAM), a category attention mechanism (CAM), and a learnable connected module (LCM) to better extract lesion-related features and maximize DR grading performance. Concretely, we devised sASPP, which combines randomness with atrous spatial pyramid pooling (ASPP), to accommodate the various scales of the lesions and counteract the co-adaptation of multiple atrous convolutions. GAM was introduced to extract class-agnostic global attention features, whilst CAM seeks class-specific, distinctive region-level lesion features and treats each DR severity grade equally, which tackles the problem of imbalanced DR data distributions. Further, the LCM was designed to automatically and adaptively search for the optimal connections among layers to better extract detailed small-lesion feature representations. The proposed approach obtained an accuracy of 88.0% and a kappa score of 88.6% on the multi-class DR grading task on the EyePACS dataset, and 98.5% AUC, 93.8% accuracy, 87.9% kappa, 90.7% recall, 94.6% precision, and 92.6% F1-score for referral versus non-referral classification on the Messidor dataset. Extensive experimental results on three challenging benchmarks demonstrate that the proposed approach achieves competitive performance in DR grading and lesion discovery on retinal fundus images compared with existing cutting-edge methods, and generalizes well to unseen DR datasets. These promising results highlight its potential as an efficient and reliable tool to assist ophthalmologists in large-scale DR screening.
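For ordinal DR grades (0-4), the kappa score is commonly the quadratically weighted variant, which penalises a grade-4 eye predicted as grade 0 far more than one predicted as grade 3. A sketch of that metric (the quadratic weighting is an assumption here; the paper may report unweighted kappa):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights: 1 - sum(w*O) / sum(w*E),
    where O is the observed confusion matrix, E the expected matrix under
    rater independence, and w[i][j] = (i - j)^2 / (n_classes - 1)^2."""
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], float) / (n_classes - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()

# Toy DR grades: one eye off by one grade, the rest predicted exactly.
y_true = [0, 0, 1, 1, 2, 2, 3, 4]
y_pred = [0, 0, 1, 2, 2, 2, 3, 4]
print(round(quadratic_weighted_kappa(y_true, y_pred, 5), 3))  # → 0.964
```

A perfect prediction gives kappa 1.0, and chance-level agreement gives roughly 0.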
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Shiqing Tang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Yuyang Chen
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Haidong Zou
- Shanghai Eye Disease Prevention & Treatment Center, Shanghai 200040, China
- Ophthalmology Center, Shanghai General Hospital, Shanghai 200080, China
10

Dubey S, Dixit M. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review. Multimed Tools Appl 2022; 82:14471-14525. [PMID: 36185322] [PMCID: PMC9510498] [DOI: 10.1007/s11042-022-13841-9]
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body's insulin is not utilised properly. One of its complications is diabetic retinopathy (DR), the most prevalent ocular complication of diabetes; if it remains unaddressed, it can affect any diabetic patient and become very serious, raising the chances of blindness. It is a chronic systemic condition that affects up to 80% of patients with diabetes of more than ten years' duration. Many researchers believe that if diabetic individuals are diagnosed early enough, they can be saved from the condition in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels of the retina, and this vessel damage is usually visible on fundus images. Therefore, in this study, several traditional as well as deep learning-based approaches for the classification and detection of diabetic retinopathy are reviewed, along with the advantages of one approach over another. The datasets and the evaluation metrics useful for DR detection and classification are also discussed. The main aim of this study is to make researchers aware of the challenges that arise when detecting diabetic retinopathy using computer vision and deep learning techniques. This review therefore sums up the major aspects of DR detection, such as lesion identification, classification and segmentation, security attacks on deep learning models, proper categorization of datasets, and evaluation metrics. As deep learning models are quite expensive and prone to security attacks, it is advisable to develop refined, reliable, and robust models that address these commonly encountered design issues.
Affiliation(s)
- Shradha Dubey
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
- Manish Dixit
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
11

Deep CNN with Hybrid Binary Local Search and Particle Swarm Optimizer for Exudates Classification from Fundus Images. J Digit Imaging 2022; 35:56-67. [PMID: 34997375] [PMCID: PMC8854611] [DOI: 10.1007/s10278-021-00534-2]
Abstract
Diabetic retinopathy is a chronic condition that causes vision loss if not detected early. In its early stage, it can be diagnosed with the aid of lesions called exudates. However, detecting exudate lesions is arduous because blood vessels and other distracting structures are present in the image. To tackle these issues, we propose a novel exudate classification method for fundus images: a hybrid convolutional neural network (CNN) with a binary local search optimizer-based particle swarm optimization algorithm. The proposed method exploits image augmentation to resize the fundus image to the required dimensions without losing any features. The features of the resized fundus images are extracted as a feature vector and fed into a feed-forward CNN, which then classifies the exudates. Further, the hyperparameters are optimized with a binary local search optimizer (BLSO) and particle swarm optimization (PSO) to reduce computational complexity. The experimental analysis is conducted on the public ROC and real-time ARA400 datasets, and performance is compared with state-of-the-art approaches such as support vector machine classifiers, multi-modal/multi-scale methods, random forest, and CNNs. The classification accuracy of the proposed work is high, and it thus outperforms all the other approaches.
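The paper's hybrid BLSO-PSO optimizer is not reproduced here, but the particle-swarm half of the idea can be sketched generically: particles explore a box-constrained hyperparameter space, pulled towards their personal best and the swarm's global best. All names, constants, and the toy "validation loss" below are assumptions, not the authors' code:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimizer over box-constrained hyperparameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))            # positions
    v = np.zeros_like(x)                                        # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])    # personal bests
    g = pbest[np.argmin(pbest_val)].copy()                      # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + pulls
        x = np.clip(x + v, lo, hi)                              # stay in bounds
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Stand-in "validation loss" over (learning rate, dropout); minimum near (0.01, 0.5)
loss = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.5) ** 2
best, best_val = pso_minimize(loss, [(1e-4, 0.1), (0.0, 0.9)])
```

In the paper's setting, `f` would be the CNN's validation error for a given hyperparameter vector, which is far costlier to evaluate than this toy quadratic.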
12
Local Structure Awareness-Based Retinal Microaneurysm Detection with Multi-Feature Combination. Biomedicines 2022; 10:biomedicines10010124. [PMID: 35052803 PMCID: PMC8773350 DOI: 10.3390/biomedicines10010124] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2021] [Revised: 12/31/2021] [Accepted: 01/03/2022] [Indexed: 01/02/2023] Open
Abstract
Retinal microaneurysm (MA) is the initial symptom of diabetic retinopathy (DR). The automatic detection of MA is helpful to assist doctors in diagnosis and treatment. Previous algorithms focused on the features of the target itself; however, the local structural features of the target and background are also worth exploring. To achieve MA detection, an efficient local structure awareness-based retinal MA detection with the multi-feature combination (LSAMFC) is proposed in this paper. We propose a novel local structure feature called a ring gradient descriptor (RGD) to describe the structural differences between an object and its surrounding area. Then, a combination of RGD with the salience and texture features is used by a Gradient Boosting Decision Tree (GBDT) for candidate classification. We evaluate our algorithm on two public datasets, i.e., the e-ophtha MA dataset and retinopathy online challenge (ROC) dataset. The experimental results show that the performance of the trained model significantly improved after combining traditional features with RGD, and the area under the receiver operating characteristic curve (AUC) values in the test results of the datasets e-ophtha MA and ROC increased from 0.9615 to 0.9751 and from 0.9066 to 0.9409, respectively.
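The exact ring gradient descriptor is defined in the paper itself; as a rough illustration of the underlying idea of comparing a candidate's interior with its surrounding annulus, one might compute a disk-versus-ring contrast (hypothetical function and synthetic image, not the authors' RGD):

```python
import numpy as np

def ring_contrast(img, cy, cx, r_in, r_out):
    """Contrast between a candidate's inner disk and its surrounding ring,
    a simplified stand-in for a local object-vs-background structure feature."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    inner = img[d2 <= r_in ** 2]                       # candidate disk
    ring = img[(d2 > r_in ** 2) & (d2 <= r_out ** 2)]  # surrounding annulus
    return inner.mean() - ring.mean()

# Synthetic dark microaneurysm-like blob on a brighter retinal background
img = np.full((64, 64), 0.8)
img[30:34, 30:34] = 0.2
score = ring_contrast(img, 31.5, 31.5, 3, 8)  # negative: darker than surroundings
```

A true MA candidate (dark relative to its neighbourhood) yields a clearly negative score, whereas flat background gives a score near zero; the paper combines such structural cues with salience and texture features before the GBDT classifier.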
13
Review of Machine Learning Applications Using Retinal Fundus Images. Diagnostics (Basel) 2022; 12:diagnostics12010134. [PMID: 35054301 PMCID: PMC8774893 DOI: 10.3390/diagnostics12010134] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 02/04/2023] Open
Abstract
Automating screening and diagnosis in the medical field saves time and reduces the chances of misdiagnosis while saving on labor and cost for physicians. With the feasibility and development of deep learning methods, machines are now able to interpret complex features in medical data, which leads to rapid advancements in automation. Such efforts have been made in ophthalmology to analyze retinal images and build frameworks based on analysis for the identification of retinopathy and the assessment of its severity. This paper reviews recent state-of-the-art works utilizing the color fundus image taken from one of the imaging modalities used in ophthalmology. Specifically, the deep learning methods of automated screening and diagnosis for diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are investigated. In addition, the machine learning techniques applied to the retinal vasculature extraction from the fundus image are covered. The challenges in developing these systems are also discussed.
14
Tariq H, Rashid M, Javed A, Zafar E, Alotaibi SS, Zia MYI. Performance Analysis of Deep-Neural-Network-Based Automatic Diagnosis of Diabetic Retinopathy. SENSORS (BASEL, SWITZERLAND) 2021; 22:205. [PMID: 35009747 PMCID: PMC8749542 DOI: 10.3390/s22010205] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Revised: 12/13/2021] [Accepted: 12/22/2021] [Indexed: 06/14/2023]
Abstract
Diabetic retinopathy (DR) is an eye disease that affects people with diabetes, damaging the retina and potentially causing vision loss. It is treatable; however, diagnosis takes a long time and may require many eye exams. Early detection of DR may prevent or delay vision loss, so a robust, automatic, computer-based diagnosis of DR is essential. Deep neural networks are currently utilized in numerous medical areas to diagnose various diseases; consequently, deep transfer learning is utilized in this article. We employ five convolutional-neural-network-based designs (AlexNet, GoogleNet, Inception V4, Inception ResNet V2 and ResNeXt-50). A collection of DR pictures is created, and each collection is labeled with an appropriate treatment approach; this automates the diagnosis and assists patients through subsequent therapies. Furthermore, in order to identify the severity of DR in retina pictures, we train deep convolutional neural networks (CNNs) on our own dataset. Experimental results reveal that, of all the pre-trained models, Se-ResNeXt-50 obtains the best classification accuracy of 97.53% on our dataset. Moreover, we perform five different experiments on each CNN architecture; as a result, a minimum accuracy of 84.01% is achieved for classification into five severity grades.
Affiliation(s)
- Hassan Tariq
- Department of Electrical Engineering, School of Engineering, University of Management and Technology (UMT), Lahore 54770, Pakistan; (H.T.); (A.J.); (E.Z.)
- Muhammad Rashid
- Department of Computer Engineering, Umm Al-Qura University, Makkah 21955, Saudi Arabia;
- Asfa Javed
- Department of Electrical Engineering, School of Engineering, University of Management and Technology (UMT), Lahore 54770, Pakistan; (H.T.); (A.J.); (E.Z.)
- Eeman Zafar
- Department of Electrical Engineering, School of Engineering, University of Management and Technology (UMT), Lahore 54770, Pakistan; (H.T.); (A.J.); (E.Z.)
- Saud S. Alotaibi
- Department of Information Systems, Umm Al-Qura University, Makkah 21955, Saudi Arabia;
15
Al-Timemy AH, Mosa ZM, Alyasseri Z, Lavric A, Lui MM, Hazarbassanov RM, Yousefi S. A Hybrid Deep Learning Construct for Detecting Keratoconus From Corneal Maps. Transl Vis Sci Technol 2021; 10:16. [PMID: 34913952 PMCID: PMC8684312 DOI: 10.1167/tvst.10.14.16] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Purpose To develop and assess the accuracy of a hybrid deep learning construct for detecting keratoconus (KCN) based on corneal topographic maps. Methods We collected 3794 corneal images from 542 eyes of 280 subjects and developed seven deep learning models based on anterior and posterior eccentricity, anterior and posterior elevation, anterior and posterior sagittal curvature, and corneal thickness maps to extract deep corneal features. An independent subset with 1050 images collected from 150 eyes of 85 subjects from a separate center was used to validate models. We developed a hybrid deep learning model to detect KCN. We visualized deep features of corneal parameters to assess the quality of learning subjectively and computed area under the receiver operating characteristic curve (AUC), confusion matrices, accuracy, and F1 score to evaluate models objectively. Results In the development dataset, 204 eyes were normal, 123 eyes were suspected KCN, and 215 eyes had KCN. In the independent validation dataset, 50 eyes were normal, 50 eyes were suspected KCN, and 50 eyes were KCN. Images were annotated by three corneal specialists. The AUC of the models for the two-class and three-class problems based on the development set were 0.99 and 0.93, respectively. Conclusions The hybrid deep learning model achieved high accuracy in identifying KCN based on corneal maps and provided a time-efficient framework with low computational complexity. Translational Relevance Deep learning can detect KCN from non-invasive corneal images with high accuracy, suggesting potential application in research and clinical practice to identify KCN.
Affiliation(s)
- Ali H Al-Timemy
- Biomedical Engineering Department, Al-Khwarizmi College of Engineering, University of Baghdad, Baghdad, Iraq; Centre for Robotics and Neural Systems, Cognitive Institute, School of Engineering, Computing and Mathematics, Plymouth University, Plymouth, UK
- Zaid Alyasseri
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, Malaysia; ECE Department-Faculty of Engineering, University of Kufa, Najaf, Iraq
- Alexandru Lavric
- Computers, Electronics and Automation Department, Stefan cel Mare University of Suceava, Suceava, Bukovina, Romania
- Marcelo M Lui
- Hospital de Olhos-CRO, Guarulhos, São Paulo, São Paulo, Brazil
- Rossen M Hazarbassanov
- Hospital de Olhos-CRO, Guarulhos, São Paulo, São Paulo, Brazil; Department of Ophthalmology and Visual Sciences, Paulista Medical School, Federal University of São Paulo, São Paulo, Brazil
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA; Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
16
Santos C, de Aguiar MS, Welfer D, Belloni BM. Detection of Fundus Lesions through a Convolutional Neural Network in Patients with Diabetic Retinopathy. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2692-2695. [PMID: 34891806 DOI: 10.1109/embc46164.2021.9630075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Diabetic Retinopathy is a major cause of vision loss caused by retina lesions, including hard and soft exudates, microaneurysms, and hemorrhages. The development of a computational tool capable of detecting these lesions can assist in the early diagnosis of the most severe forms of the lesions and assist in the screening process and definition of the best treatment form. This paper proposes a computational model based on pre-trained convolutional neural networks capable of detecting fundus lesions to promote medical diagnosis support. The model was trained, adjusted, and evaluated using the DDR Diabetic Retinopathy dataset and implemented based on a YOLOv4 architecture and Darknet framework, reaching an mAP of 11.13% and a mIoU of 13.98%. The experimental results show that the proposed model presented results superior to those obtained in related works found in the literature.
17
Huang C, Zong Y, Ding Y, Luo X, Clawson K, Peng Y. A new deep learning approach for the retinal hard exudates detection based on superpixel multi-feature extraction and patch-based CNN. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
18
Liu Q, Liu H, Zhao Y, Liang Y. Dual-Branch Network with Dual-Sampling Modulated Dice Loss for Hard Exudate Segmentation in Colour Fundus Images. IEEE J Biomed Health Inform 2021; 26:1091-1102. [PMID: 34460407 DOI: 10.1109/jbhi.2021.3108169] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Automated segmentation of hard exudates in colour fundus images is a challenging task due to extreme class imbalance and enormous size variation. This paper aims to tackle these issues and proposes a dual-branch network with a dual-sampling modulated Dice loss. It consists of two branches: a segmentation branch biased towards large hard exudates and one biased towards small hard exudates, each responsible for its own duty. Furthermore, we propose a dual-sampling modulated Dice loss for training, so that the dual-branch network is able to segment hard exudates of different sizes. In detail, for the first branch we use a uniform sampler to sample pixels from the predicted segmentation mask for the Dice loss calculation, which naturally biases this branch in favour of large hard exudates, as the Dice loss incurs a larger cost on misidentification of large hard exudates than of small ones. For the second branch, we use a re-balanced sampler to oversample hard exudate pixels and undersample background pixels for the loss calculation. In this way, the cost of misidentifying small hard exudates is enlarged, which forces the parameters of the second branch to fit small hard exudates well. Considering that large hard exudates are much easier to identify correctly than small ones, we propose an easy-to-difficult learning strategy that adaptively modulates the losses of the two branches. We evaluate the proposed method on two public datasets, and the results demonstrate that it achieves state-of-the-art performance.
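The dual-sampling idea can be illustrated with a simplified NumPy sketch: the Dice loss is computed once on a uniform pixel sample (favouring large lesions) and once on a class-rebalanced sample that oversamples rare foreground pixels (favouring small lesions). This is an interpretation of the abstract, not the authors' implementation:

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    """Soft Dice score on flattened probability/label vectors."""
    inter = np.sum(pred * target)
    return (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def sampled_dice_loss(pred, target, mode="uniform", n=1000, seed=0):
    """Dice loss on a pixel sample; 'rebalanced' oversamples foreground pixels."""
    rng = np.random.default_rng(seed)
    pred, target = pred.ravel(), target.ravel()
    if mode == "uniform":
        idx = rng.integers(0, pred.size, n)
    else:  # rebalanced: half the sample drawn from the (rare) foreground
        fg, bg = np.flatnonzero(target == 1), np.flatnonzero(target == 0)
        idx = np.concatenate([rng.choice(fg, n // 2), rng.choice(bg, n // 2)])
    return 1.0 - dice(pred[idx], target[idx])

# Imbalanced toy mask: 1% foreground; half the lesion pixels predicted weakly
target = np.zeros(10000)
target[:100] = 1
pred = target.copy()
pred[:50] = 0.3
loss_u = sampled_dice_loss(pred, target, "uniform")
loss_r = sampled_dice_loss(pred, target, "rebalanced")
```

Because the rebalanced sample is dominated by foreground pixels, errors on small, rare lesions contribute far more to `loss_r`, which is the effect the paper exploits in its second branch.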
19
Wu JH, Liu TYA, Hsu WT, Ho JHC, Lee CC. Performance and Limitation of Machine Learning Algorithms for Diabetic Retinopathy Screening: Meta-analysis. J Med Internet Res 2021; 23:e23863. [PMID: 34407500 PMCID: PMC8406115 DOI: 10.2196/23863] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 11/19/2020] [Accepted: 04/30/2021] [Indexed: 12/23/2022] Open
Abstract
Background Diabetic retinopathy (DR), whose standard diagnosis is performed by human experts, has high prevalence and requires a more efficient screening method. Although machine learning (ML)–based automated DR diagnosis has gained attention due to recent approval of IDx-DR, performance of this tool has not been examined systematically, and the best ML technique for use in a real-world setting has not been discussed. Objective The aim of this study was to systematically examine the overall diagnostic accuracy of ML in diagnosing DR of different categories based on color fundus photographs and to determine the state-of-the-art ML approach. Methods Published studies in PubMed and EMBASE were searched from inception to June 2020. Studies were screened for relevant outcomes, publication types, and data sufficiency, and a total of 60 out of 2128 (2.82%) studies were retrieved after study selection. Extraction of data was performed by 2 authors according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), and the quality assessment was performed according to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). Meta-analysis of diagnostic accuracy was pooled using a bivariate random effects model. The main outcomes included diagnostic accuracy, sensitivity, and specificity of ML in diagnosing DR based on color fundus photographs, as well as the performances of different major types of ML algorithms. Results The primary meta-analysis included 60 color fundus photograph studies (445,175 interpretations). Overall, ML demonstrated high accuracy in diagnosing DR of various categories, with a pooled area under the receiver operating characteristic (AUROC) ranging from 0.97 (95% CI 0.96-0.99) to 0.99 (95% CI 0.98-1.00). 
The performance of ML in detecting more-than-mild DR was robust (sensitivity 0.95; AUROC 0.97), and by subgroup analyses, we observed that robust performance of ML was not limited to benchmark data sets (sensitivity 0.92; AUROC 0.96) but could be generalized to images collected in clinical practice (sensitivity 0.97; AUROC 0.97). Neural network was the most widely used method, and the subgroup analysis revealed a pooled AUROC of 0.98 (95% CI 0.96-0.99) for studies that used neural networks to diagnose more-than-mild DR. Conclusions This meta-analysis demonstrated high diagnostic accuracy of ML algorithms in detecting DR on color fundus photographs, suggesting that state-of-the-art, ML-based DR screening algorithms are likely ready for clinical applications. However, a significant portion of the earlier published studies had methodology flaws, such as the lack of external validation and presence of spectrum bias. The results of these studies should be interpreted with caution.
Affiliation(s)
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
- T Y Alvin Liu
- Retina Division, Wilmer Eye Institute, The Johns Hopkins Medicine, Baltimore, MD, United States
- Wan-Ting Hsu
- Harvard TH Chan School of Public Health, Boston, MA, United States
- Chien-Chang Lee
- Health Data Science Research Group, National Taiwan University Hospital, Taipei, Taiwan; The Centre for Intelligent Healthcare, National Taiwan University Hospital, Taipei, Taiwan; Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
20
Long H, Chen B, Li W, Xian Y, Peng Z. Blood glucose detection based on Teager-Kaiser main energy of photoacoustic signal. Comput Biol Med 2021; 134:104552. [PMID: 34144363 DOI: 10.1016/j.compbiomed.2021.104552] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 06/02/2021] [Accepted: 06/02/2021] [Indexed: 11/27/2022]
Abstract
Real-time blood glucose detection is an essential tool for diabetes monitoring. Non-invasive blood glucose detection technology is one of the current research hotspots in this field. Previous research mainly focused on improving the system's detection capability to obtain signals with low signal-to-noise ratio and high quality, and simple methods are often used in signal processing. Moreover, photoacoustic signal simulation also simplifies the influence of the transmission medium on the signal. In the present study, we built a new simulation model which considers human skin, blood, and the detector's limitations, to obtain a more practical photoacoustic signal. We then proposed a blood glucose detection algorithm based on Teager-Kaiser main energy (TKME) to overcome noise and medium interference and achieve a high detection accuracy at low SNR. Finally, the simulation and actual data were utilised during the experiment, and the detection error was 15 mg/dL (SNR = 10 dB).
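The Teager-Kaiser energy operator at the core of the proposed TKME algorithm has a standard discrete form, psi[n] = x[n]^2 − x[n−1]·x[n+1]; for a pure tone A·cos(Ωn) it evaluates exactly to A²·sin²(Ω), which is why it tracks both amplitude and frequency of an oscillation. A minimal sketch (the photoacoustic pipeline around it is the paper's own):

```python
import numpy as np

def teager_kaiser_energy(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    Output is two samples shorter than the input (no boundary values)."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*cos(w*n), the TKEO is exactly constant: A^2 * sin^2(w)
n = np.arange(200)
A, w = 2.0, 0.3
psi = teager_kaiser_energy(A * np.cos(w * n))
```

On a noisy photoacoustic trace, the operator's instantaneous-energy output is what the paper thresholds and tracks rather than the raw signal amplitude.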
Affiliation(s)
- Hongfeng Long
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China; Laboratory of Imaging Detection and Intelligent Perception University of Electronic Science and Technology of China, 610054, Chengdu, China
- Bingzhang Chen
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China; Laboratory of Imaging Detection and Intelligent Perception University of Electronic Science and Technology of China, 610054, Chengdu, China.
- Wei Li
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China; Laboratory of Imaging Detection and Intelligent Perception University of Electronic Science and Technology of China, 610054, Chengdu, China
- Yongli Xian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Electronic Engineering and Electronic Information, Xihua University, Chengdu, 610039, China
- Zhenming Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China; Laboratory of Imaging Detection and Intelligent Perception University of Electronic Science and Technology of China, 610054, Chengdu, China.
21
A review of diabetic retinopathy: Datasets, approaches, evaluation metrics and future trends. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2021. [DOI: 10.1016/j.jksuci.2021.06.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
22
Sun L, Wang Z, Pu H, Yuan G, Guo L, Pu T, Peng Z. Attention-embedded complementary-stream CNN for false positive reduction in pulmonary nodule detection. Comput Biol Med 2021; 133:104357. [PMID: 33836449 DOI: 10.1016/j.compbiomed.2021.104357] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 03/22/2021] [Accepted: 03/22/2021] [Indexed: 01/18/2023]
Abstract
False positive reduction plays a key role in computer-aided detection systems for pulmonary nodule detection in computed tomography (CT) scans. However, this remains a challenge owing to the heterogeneity and similarity of anisotropic pulmonary nodules. In this study, a novel attention-embedded complementary-stream convolutional neural network (AECS-CNN) is proposed to obtain more representative features of nodules for false positive reduction. The proposed network comprises three function blocks: 1) attention-guided multi-scale feature extraction, 2) complementary-stream block with an attention module for feature integration, and 3) classification block. The inputs of the network are multi-scale 3D CT volumes due to variations in nodule sizes. Subsequently, a gradual multi-scale feature extraction block with an attention module was applied to acquire more contextual information regarding the nodules. A subsequent complementary-stream integration block with an attention module was utilized to learn the significantly complementary features. Finally, the candidates were classified using a fully connected layer block. An exhaustive experiment on the LUNA16 challenge dataset was conducted to verify the effectiveness and performance of the proposed network. The AECS-CNN achieved a sensitivity of 0.92 with 4 false positives per scan. The results indicate that the attention mechanism can improve the network performance in false positive reduction, the proposed AECS-CNN can learn more representative features, and the attention module can guide the network to learn the discriminated feature channels and the crucial information embedded in the data, thereby effectively enhancing the performance of the detection system.
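The abstract does not specify the attention module's internals; a common design it resembles is squeeze-and-excitation-style channel attention, which the following NumPy sketch illustrates on a 3D (C, D, H, W) feature map. The weights here are random stand-ins for learned parameters:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, D, H, W) map:
    global-average 'squeeze', two-layer 'excitation' gate, channel-wise rescale."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    s = feat.mean(axis=(1, 2, 3))              # squeeze to per-channel stats: (C,)
    g = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))  # excitation: (C,) gate in (0, 1)
    return feat * g[:, None, None, None]       # rescale each channel

rng = np.random.default_rng(0)
C, r = 8, 2                                    # channels, reduction ratio
feat = rng.standard_normal((C, 4, 8, 8))       # toy 3D CT feature volume
w1 = rng.standard_normal((C // r, C)) * 0.1    # hypothetical learned weights
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), the module can only suppress or pass channels, which is how such blocks steer the network towards discriminative feature channels.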
Affiliation(s)
- Lingma Sun
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhuoran Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Hong Pu
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
- Guohui Yuan
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lu Guo
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
- Tian Pu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhenming Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
23
NAIR ARUNT, MUTHUVEL K. AUTOMATED SCREENING OF DIABETIC RETINOPATHY WITH OPTIMIZED DEEP CONVOLUTIONAL NEURAL NETWORK: ENHANCED MOTH FLAME MODEL. J MECH MED BIOL 2021. [DOI: 10.1142/s0219519421500056] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Retinal image analysis remains a challenging area of study: numerous retinal diseases can be recognized by analyzing the changes taking place in the retina, but the main shortcoming of earlier studies is their limited recognition accuracy. The proposed framework includes four phases: (i) blood vessel segmentation, (ii) feature extraction, (iii) optimal feature selection and (iv) classification. Initially, the input fundus image is subjected to blood vessel segmentation, from which two binary thresholded images (one from a High Pass Filter (HPF) and the other from top-hat reconstruction) are acquired. These two images are differenced; the areas common to both are taken as the major vessels, and the leftover regions are fused to form a vessel sub-image. The vessel sub-images are classified with a Gaussian Mixture Model (GMM) classifier, and the result is merged with the major vessels to form the segmented blood vessels. The segmented images then undergo feature extraction, where features such as the proposed Local Binary Pattern (LBP), Gray-Level Co-occurrence Matrix (GLCM) and Gray-Level Run Length Matrix (GLRM) are extracted. As the curse of dimensionality is the greatest issue, it is important to select the appropriate features from those extracted before classification. In this paper, a new improved optimization algorithm, Moth Flame with New Distance Formulation (MF-NDF), is introduced for selecting the optimal features. Finally, the selected optimal features are fed to a Deep Convolutional Neural Network (DCNN) model for classification, and, to make the diagnosis precise, the weights of the DCNN are optimally tuned by the same optimization algorithm. The performance of the proposed algorithm is compared against conventional algorithms in terms of positive and negative measures.
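The LBP feature mentioned above can be sketched in its basic 3×3 form (the paper proposes its own variant): each interior pixel's eight neighbours are thresholded against the centre and packed into a bit code, and the histogram of codes is the texture feature vector:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours of each
    interior pixel against its centre and pack the bits into a code (0..255)."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                         # interior centres
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# The histogram of codes serves as the texture feature vector
img = np.arange(25).reshape(5, 5)
codes = lbp_image(img)
hist = np.bincount(codes.ravel(), minlength=256)
```

On a raster-ordered gradient like this toy image, every interior pixel exceeds exactly its four "later" neighbours, so all codes are identical; real fundus texture produces a varied histogram.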
Affiliation(s)
- ARUN T NAIR
- Department of Electrical and Electronics Engineering, Noorul Islam Centre for Higher Education, Kumaracoil 629180, Tamil Nadu, India
- K. MUTHUVEL
- Department of Electrical and Electronics Engineering, Noorul Islam Centre for Higher Education, Kumaracoil 629180, Tamil Nadu, India
24
Exudates as Landmarks Identified through FCM Clustering in Retinal Images. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app11010142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The aim of this work was to develop a method for the automatic identification of exudates using an unsupervised clustering approach. The ability to classify each pixel as belonging to a possible exudate, as a warning sign of disease, allows a patient's status to be tracked noninvasively. In the field of diabetic retinopathy detection, we considered four public-domain datasets (DIARETDB0/1, IDRiD, and e-ophtha) as benchmarks. To refine the final results, a specialist ophthalmologist manually segmented a random selection of DIARETDB0/1 fundus images presenting exudates. An innovative pipeline of morphological procedures and fuzzy C-means clustering was integrated to extract exudates with a pixel-wise approach. Our methodology was optimized and verified, and its parameters were fine-tuned to define suitable values and produce a more accurate segmentation. The method was tested on 100 images, yielding average sensitivity, specificity, and accuracy of 83.3%, 99.2%, and 99.1%, respectively.
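The fuzzy C-means step can be sketched in plain NumPy: alternate the membership and centroid updates with fuzzifier m, so each pixel gets a soft degree of belonging to every cluster rather than a hard label. This is the textbook algorithm, not the paper's tuned pipeline:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iters=100, seed=0):
    """Plain fuzzy C-means on data X of shape (N, d): alternate fuzzy-membership
    and centroid updates; rows of the membership matrix u sum to 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)              # random fuzzy memberships
    for _ in range(n_iters):
        um = u ** m                                # fuzzified weights
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return u, centers

# Two well-separated 1-D intensity clusters (e.g. bright exudate vs background)
X = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])[:, None]
u, centers = fuzzy_c_means(X)
```

In the exudate pipeline, X would hold per-pixel intensity (or colour) features, and a pixel is flagged when its membership in the bright cluster exceeds a chosen threshold.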
25
Diabetic retinopathy detection through deep learning techniques: A review. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100377] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open