1
Pannu R, Zubair M, Owais M, Hassan S, Umair M, Usman SM, Albashrawi MA, Hussain I. Enhanced glaucoma classification through advanced segmentation by integrating cup-to-disc ratio and neuro-retinal rim features. Comput Med Imaging Graph 2025; 123:102559. [PMID: 40315660] [DOI: 10.1016/j.compmedimag.2025.102559] [Received: 01/26/2025; Revised: 04/09/2025; Accepted: 04/16/2025]
Abstract
Glaucoma is a progressive eye condition in which high intraocular pressure damages the optic nerve, leading to gradual, irreversible vision loss, often without noticeable symptoms. Subtle signs like mild eye redness, slightly blurred vision, and eye pain may go unnoticed, earning it the nickname "silent thief of sight." Its prevalence is rising with an aging population, driven by increased life expectancy. Most computer-aided diagnosis (CAD) systems rely on the cup-to-disc ratio (CDR) for glaucoma diagnosis. This study introduces a novel approach by integrating CDR with the neuro-retinal rim ratio (NRR), which quantifies rim thickness within the optic disc (OD). NRR enhances diagnostic accuracy by capturing additional optic nerve head changes, such as rim thinning and tissue loss, which are overlooked when CDR is used alone. A modified ResUNet architecture, which combines residual learning with U-Net to capture spatial context for semantic segmentation, was used for OD and optic cup (OC) segmentation. For OC segmentation, the model achieved Dice coefficient (DC) scores of 0.942 and 0.872 and Intersection over Union (IoU) values of 0.891 and 0.773 on DRISHTI-GS and RIM-ONE, respectively. For OD segmentation, it achieved DC scores of 0.972 and 0.950 and IoU values of 0.945 and 0.940 on DRISHTI-GS and RIM-ONE, respectively. External evaluation on ORIGA and REFUGE confirmed the model's robustness and generalizability. CDR and NRR were calculated from the segmentation masks and used to train an SVM with a radial basis function kernel, classifying eyes as healthy or glaucomatous. The classifier achieved accuracies of 0.969 on DRISHTI-GS and 0.977 on RIM-ONE.
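The evaluation metrics and the CDR feature in this abstract are easy to make concrete. The NumPy sketch below computes Dice, IoU, and a vertical cup-to-disc ratio from binary masks; the vertical-height definition of CDR is my assumption for illustration, not necessarily the paper's exact implementation.

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: cup height over disc height,
    measured from the row extents of each segmentation mask."""
    def height(mask):
        rows = np.where(mask.any(axis=1))[0]
        return rows.max() - rows.min() + 1
    return height(cup_mask) / height(disc_mask)
```

For example, a disc spanning 10 mask rows with a cup spanning 6 gives a vertical CDR of 0.6; an NRR-style feature could be derived analogously from the rim (disc-minus-cup) pixels.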
Affiliation(s)
- Rabia Pannu
- Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Muhammad Zubair
- Interdisciplinary Research Center for Finance and Digital Economy, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
- Muhammad Owais
- Khalifa University Center for Autonomous Robotic Systems (KUCARS) and Department of Mechanical & Nuclear Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
- Shoaib Hassan
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Muhammad Umair
- Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Syed Muhammad Usman
- Department of Computer Science, Bahria School of Engineering and Applied Sciences, Bahria University Islamabad, Pakistan
- Mousa Ahmed Albashrawi
- Interdisciplinary Research Center for Finance and Digital Economy, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia; Department of Information Systems and Operations Management, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
- Irfan Hussain
- Khalifa University Center for Autonomous Robotic Systems (KUCARS) and Department of Mechanical & Nuclear Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
2
Martucci A, Gallo Afflitto G, Pocobelli G, Aiello F, Mancino R, Nucci C. Lights and Shadows on Artificial Intelligence in Glaucoma: Transforming Screening, Monitoring, and Prognosis. J Clin Med 2025; 14:2139. [PMID: 40217589] [PMCID: PMC11989555] [DOI: 10.3390/jcm14072139] [Received: 02/22/2025; Revised: 03/18/2025; Accepted: 03/19/2025]
Abstract
Background/Objectives: Artificial intelligence (AI) is increasingly being integrated into medicine, including ophthalmology, owing to its strong capabilities in image recognition. Methods: This review focuses on the most recent key applications of AI in the diagnosis, management, and study of glaucoma, based on a systematic review of the recent literature. Results: In glaucoma, AI can help analyze large amounts of data from diagnostic tools such as fundus images, optical coherence tomography scans, and visual field tests. Conclusions: AI technologies can enhance the accuracy of glaucoma diagnoses and could provide significant economic benefits by automating routine tasks, improving diagnostic accuracy, and enhancing access to care, especially in underserved areas. However, despite these promising results, challenges persist, including limited dataset size and diversity, class imbalance, the need to optimize models for early detection, and the integration of multimodal data into clinical practice. Currently, ophthalmologists are expected to continue playing a leading role in managing glaucomatous eyes and overseeing the development and validation of AI tools.
Affiliation(s)
- Alessio Martucci
- Ophthalmology Unit, Department of Experimental Medicine, University of Rome “Tor Vergata”, 00133 Rome, Italy
3
Stuermer L, Braga S, Martin R, Wolffsohn JS. Artificial intelligence virtual assistants in primary eye care practice. Ophthalmic Physiol Opt 2025; 45:437-449. [PMID: 39723633] [PMCID: PMC11823310] [DOI: 10.1111/opo.13435] [Received: 09/21/2024; Revised: 12/15/2024; Accepted: 12/16/2024]
Abstract
PURPOSE To propose a novel artificial intelligence (AI)-based virtual assistant trained on tabular clinical data that can provide decision-making support in primary eye care practice and optometry education programmes. METHOD Anonymised clinical data from 1125 complete optometric examinations (2250 eyes; 63% women, 37% men) were used to train different machine learning algorithm models to predict eye examination classification (refractive, binocular vision dysfunction, ocular disorder or any combination of these three options). After modelling, adjustment, mining and preprocessing (one-hot encoding and SMOTE techniques), 75 input (preliminary data, history, oculomotor test and ocular examinations) and three output (refractive, binocular vision status and eye disease) features were defined. The data were split into training (80%) and test (20%) sets. Five machine learning algorithms were trained, and the best algorithms were subjected to fivefold cross-validation. Model performance was evaluated for accuracy, precision, sensitivity, F1 score and specificity. RESULTS The random forest algorithm was the best for classifying eye examination results with a performance >95.2% (based on 35 input features from preliminary data and history), to propose a subclassification of ocular disorders with a performance >98.1% (based on 65 features from preliminary data, history and ocular examinations) and to differentiate binocular vision dysfunctions with a performance >99.7% (based on 30 features from preliminary data and oculomotor tests). These models were integrated into a responsive web application, available in three languages, allowing intuitive access to the AI models via conventional clinical terms. CONCLUSIONS An AI-based virtual assistant that performed well in predicting patient classification, eye disorders or binocular vision dysfunction has been developed with potential use in primary eye care practice and education programmes.
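A minimal sketch of the SMOTE oversampling step mentioned in the methods: minority-class samples are synthesized by interpolating between a sample and one of its nearest minority neighbours. This is pure NumPy for illustration; the function name and `k` parameter are mine, and the study presumably used a library implementation.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Synthesize n_new minority-class samples (SMOTE-style sketch):
    pick a minority sample, pick one of its k nearest minority
    neighbours, and interpolate a random fraction of the way between
    them. New points therefore stay inside the minority class's
    convex hull."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]  # skip the point itself at index 0
        j = rng.choice(nbrs)
        lam = rng.random()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)
```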
Affiliation(s)
- Leandro Stuermer
- Department of Optometry, University of Contestado, Canoinhas, Brazil
- Optometry Research Group, School of Optometry, IOBA Eye Institute, University of Valladolid, Valladolid, Spain
- Sabrina Braga
- Department of Optometry, University of Contestado, Canoinhas, Brazil
- Optometry Research Group, School of Optometry, IOBA Eye Institute, University of Valladolid, Valladolid, Spain
- Raul Martin
- Optometry Research Group, School of Optometry, IOBA Eye Institute, University of Valladolid, Valladolid, Spain
- Departamento de Física Teórica, Atómica y Óptica, Universidad de Valladolid, Valladolid, Spain
- James S. Wolffsohn
- Optometry and Vision Sciences Research Group, Aston University, Birmingham, UK
4
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968] [DOI: 10.1016/j.preteyeres.2024.101291] [Received: 04/29/2024; Revised: 08/19/2024; Accepted: 08/19/2024]
Abstract
Recent advancements in artificial intelligence (AI) hold transformative potential for reshaping glaucoma clinical management: improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. During development, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may make doctors wary or skeptical. During deployment, challenges include dealing with lower-quality images in real-world settings and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. Smartphone integration appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, introducing large language models (LLMs) as interactive tools in medicine may signify a significant change in how healthcare will be delivered. By navigating these challenges and treating them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigmatic shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
5
Akpinar MH, Sengur A, Faust O, Tong L, Molinari F, Acharya UR. Artificial intelligence in retinal screening using OCT images: A review of the last decade (2013-2023). Comput Methods Programs Biomed 2024; 254:108253. [PMID: 38861878] [DOI: 10.1016/j.cmpb.2024.108253] [Received: 12/22/2023; Revised: 04/22/2024; Accepted: 05/25/2024]
Abstract
BACKGROUND AND OBJECTIVES Optical coherence tomography (OCT) has ushered in a transformative era in ophthalmology, offering non-invasive, high-resolution imaging for ocular disease detection. OCT is frequently used in diagnosing fundamental ocular pathologies such as glaucoma and age-related macular degeneration (AMD), which has driven the technology's widespread adoption. Apart from glaucoma and AMD, we also investigate pertinent pathologies such as epiretinal membrane (ERM), macular hole (MH), macular dystrophy (MD), vitreomacular traction (VMT), diabetic maculopathy (DMP), cystoid macular edema (CME), central serous chorioretinopathy (CSC), diabetic macular edema (DME), diabetic retinopathy (DR), drusen, glaucomatous optic neuropathy (GON), neovascular AMD (nAMD), myopic macular degeneration (MMD) and choroidal neovascularization (CNV). This comprehensive review examines the role that OCT-derived images play in detecting, characterizing, and monitoring eye diseases. METHOD The 2020 PRISMA guideline was used to structure a systematic review of research on various eye conditions using machine learning (ML) or deep learning (DL) techniques. A thorough search across the IEEE, PubMed, Web of Science, and Scopus databases yielded 1787 publications, of which 1136 remained after removing duplicates. Subsequent exclusion of conference papers, review papers, and non-open-access articles reduced the selection to 511 articles. Further scrutiny led to the exclusion of 435 more articles due to lower-quality indexing or irrelevance, resulting in 76 journal articles for the review. RESULTS During our investigation, we found that a major challenge for ML-based decision support is the abundance of features and the determination of their significance. In contrast, DL-based decision support is characterized by a plug-and-play nature rather than relying on a trial-and-error approach. Furthermore, we observed that pre-trained networks are practical and especially useful when working on complex images such as OCT; consequently, pre-trained deep networks were frequently utilized for classification tasks. Currently, medical decision support aims to reduce the workload of ophthalmologists and retina specialists during routine tasks. In the future, it might be possible to create continuous-learning systems that can predict ocular pathologies by identifying subtle changes in OCT images.
Affiliation(s)
- Muhammed Halil Akpinar
- Department of Electronics and Automation, Vocational School of Technical Sciences, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Abdulkadir Sengur
- Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Oliver Faust
- School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Louis Tong
- Singapore Eye Research Institute, Singapore, Singapore
- Filippo Molinari
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
6
Christopher M, Hallaj S, Jiravarnsirikul A, Baxter SL, Zangwill LM. Novel Technologies in Artificial Intelligence and Telemedicine for Glaucoma Screening. J Glaucoma 2024; 33:S26-S32. [PMID: 38506792] [DOI: 10.1097/ijg.0000000000002367] [Received: 01/11/2024; Accepted: 01/22/2024]
Abstract
PURPOSE To provide an overview of novel technologies in telemedicine and artificial intelligence (AI) approaches for cost-effective glaucoma screening. METHODS/RESULTS A narrative review was performed by summarizing research results, recent developments in glaucoma detection and care, and considerations related to telemedicine and AI in glaucoma screening. Telemedicine and AI approaches provide the opportunity for novel glaucoma screening programs in primary care, optometry, portable, and home-based settings. These approaches offer several advantages for glaucoma screening, including increasing access to care, lowering costs, identifying patients in need of urgent treatment, and enabling timely diagnosis and early intervention. However, challenges remain in implementing these systems, including integration into existing clinical workflows, ensuring equity for patients, and meeting ethical and regulatory requirements. Leveraging recent work towards standardized data acquisition, as well as tools and techniques developed for automated diabetic retinopathy screening programs, may provide a model for a cost-effective approach to glaucoma screening. CONCLUSION Novel technologies and advances in telemedicine and AI-based approaches to glaucoma detection show promise for improving our ability to detect moderate and advanced glaucoma in primary care settings and to target individuals at high risk for the disease.
Affiliation(s)
- Mark Christopher
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Shahin Hallaj
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Anuwat Jiravarnsirikul
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
- Sally L Baxter
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
- Linda M Zangwill
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
7
Wang Z, Wang J, Zhang H, Yan C, Wang X, Wen X. MSTNet: method for glaucoma grading based on multimodal feature fusion of spatial relations. Phys Med Biol 2023; 68:245002. [PMID: 37857309] [DOI: 10.1088/1361-6560/ad0520] [Received: 06/22/2023; Accepted: 10/19/2023]
Abstract
Objective. The objective of this study is to develop an efficient multimodal learning framework for the classification of glaucoma. Glaucoma is a group of eye diseases that can result in vision loss and blindness, often due to delayed detection and treatment. Fundus images and optical coherence tomography (OCT) images have proven valuable for the diagnosis and management of glaucoma. However, current models that combine features from both modalities often lack efficient spatial relationship modeling. Approach. In this study, we propose an innovative approach to the classification of glaucoma. We focus on leveraging the features of OCT volumes and harness the capabilities of transformer models to capture long-range spatial relationships. To achieve this, we introduce a 3D transformer model to extract features from OCT volumes, enhancing the model's effectiveness. Additionally, we employ downsampling techniques to improve model efficiency. We then utilize the spatial feature relationships between OCT volumes and fundus images to fuse the features extracted from both sources. Main results. Our proposed framework has yielded remarkable results, particularly in glaucoma grading performance. We conducted our experiments using the GAMMA dataset, and our approach outperformed traditional feature fusion methods. By effectively modeling spatial relationships and combining OCT volume and fundus image features, our framework achieved outstanding classification results. Significance. This research is of significant importance for glaucoma diagnosis and management. Efficient and accurate glaucoma classification is essential for timely intervention and prevention of vision loss. Our proposed approach, which integrates 3D transformer models, offers a novel way to extract and fuse features from OCT volumes and fundus images, ultimately enhancing the effectiveness of glaucoma classification. This work has the potential to contribute to improved patient care, particularly in the early detection and treatment of glaucoma, thereby reducing the risk of vision impairment and blindness.
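The paper's spatial-relation fusion is transformer-based and more involved, but the underlying late-fusion idea (embed each modality, then combine the embeddings before a classifier) can be sketched generically. Everything below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def late_fuse(oct_embedding, fundus_embedding):
    """Toy late fusion: L2-normalize each modality's feature vector so
    neither modality dominates by scale, then concatenate the two
    vectors for a downstream classification head."""
    def l2(v):
        return v / (np.linalg.norm(v) + 1e-8)
    return np.concatenate([l2(oct_embedding), l2(fundus_embedding)])
```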
Affiliation(s)
- Zhizhou Wang
- No. 209, University Street, Yuci District, Jinzhong City, Shanxi Province, People's Republic of China
- Jun Wang
- No. 209, University Street, Yuci District, Jinzhong City, Shanxi Province, People's Republic of China
- Hongru Zhang
- No. 209, University Street, Yuci District, Jinzhong City, Shanxi Province, People's Republic of China
- Chen Yan
- No. 209, University Street, Yuci District, Jinzhong City, Shanxi Province, People's Republic of China
- Xingkui Wang
- No. 209, University Street, Yuci District, Jinzhong City, Shanxi Province, People's Republic of China
- Xin Wen
- No. 209, University Street, Yuci District, Jinzhong City, Shanxi Province, People's Republic of China
8
Hussain S, Chua J, Wong D, Lo J, Kadziauskiene A, Asoklis R, Barbastathis G, Schmetterer L, Yong L. Predicting glaucoma progression using deep learning framework guided by generative algorithm. Sci Rep 2023; 13:19960. [PMID: 37968437] [PMCID: PMC10651936] [DOI: 10.1038/s41598-023-46253-2] [Received: 06/26/2023; Accepted: 10/30/2023]
Abstract
Glaucoma is a slowly progressing optic neuropathy that may eventually lead to blindness. To help patients receive customized treatment, predicting how quickly the disease will progress is important. Structural assessment using optical coherence tomography (OCT) can be used to visualize glaucomatous optic nerve and retinal damage, while functional visual field (VF) tests can be used to measure the extent of vision loss. However, VF testing is patient-dependent and highly inconsistent, making it difficult to track glaucoma progression. In this work, we developed a multimodal deep learning model comprising a convolutional neural network (CNN) and a long short-term memory (LSTM) network, for glaucoma progression prediction. We used OCT images, VF values, demographic and clinical data of 86 glaucoma patients with five visits over 12 months. The proposed method was used to predict VF changes 12 months after the first visit by combining past multimodal inputs with synthesized future images generated using generative adversarial network (GAN). The patients were classified into two classes based on their VF mean deviation (MD) decline: slow progressors (< 3 dB) and fast progressors (> 3 dB). We showed that our generative model-based novel approach can achieve the best AUC of 0.83 for predicting the progression 6 months earlier. Further, the use of synthetic future images enabled the model to accurately predict the vision loss even earlier (9 months earlier) with an AUC of 0.81, compared to using only structural (AUC = 0.68) or only functional measures (AUC = 0.72). This study provides valuable insights into the potential of using synthetic follow-up OCT images for early detection of glaucoma progression.
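The two-class split on visual-field mean deviation (MD) used in this study is simple to state in code; the sign convention below (decline = baseline MD minus follow-up MD, in dB) is my assumption for illustration:

```python
def progression_class(md_baseline_db, md_followup_db, threshold_db=3.0):
    """Label a patient's visual-field course over the follow-up window:
    'fast' if mean deviation declined by more than threshold_db,
    otherwise 'slow' (sketch of the paper's two-class split)."""
    decline = md_baseline_db - md_followup_db  # positive means worsening
    return "fast" if decline > threshold_db else "slow"
```

For example, an eye going from an MD of -2.0 dB to -6.5 dB over the window (a 4.5 dB decline) would be labeled a fast progressor.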
Affiliation(s)
- Shaista Hussain
- Institute of High Performance Computing, A*STAR, Singapore, Singapore.
- Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Damon Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aiste Kadziauskiene
- Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
- Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- Rimvydas Asoklis
- Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
- Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- George Barbastathis
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Liu Yong
- Institute of High Performance Computing, A*STAR, Singapore, Singapore
9
Mariottoni EB, Datta S, Shigueoka LS, Jammal AA, Tavares IM, Henao R, Carin L, Medeiros FA. Deep Learning-Assisted Detection of Glaucoma Progression in Spectral-Domain OCT. Ophthalmol Glaucoma 2023; 6:228-238. [PMID: 36410708] [PMCID: PMC10278200] [DOI: 10.1016/j.ogla.2022.11.004] [Received: 07/20/2022; Revised: 10/24/2022; Accepted: 11/09/2022]
Abstract
PURPOSE To develop and validate a deep learning (DL) model for detection of glaucoma progression using spectral-domain (SD)-OCT measurements of retinal nerve fiber layer (RNFL) thickness. DESIGN Retrospective cohort study. PARTICIPANTS A total of 14 034 SD-OCT scans from 816 eyes from 462 individuals. METHODS A DL convolutional neural network was trained to assess SD-OCT RNFL thickness measurements of 2 visits (a baseline and a follow-up visit) along with time between visits to predict the probability of glaucoma progression. The ground truth was defined by consensus from subjective grading by glaucoma specialists. Diagnostic performance was summarized by the area under the receiver operator characteristic curve (AUC), sensitivity, and specificity, and was compared with conventional trend-based analyses of change. Interval likelihood ratios were calculated to determine the impact of DL model results in changing the post-test probability of progression. MAIN OUTCOME MEASURES The AUC, sensitivity, and specificity of the DL model. RESULTS The DL model had an AUC of 0.938 (95% confidence interval [CI], 0.921-0.955), with sensitivity of 87.3% (95% CI, 83.6%-91.6%) and specificity of 86.4% (95% CI, 79.9%-89.6%). When matched for the same specificity, the DL model significantly outperformed trend-based analyses. Likelihood ratios for the DL model were associated with large changes in the probability of progression in the vast majority of SD-OCT tests. CONCLUSIONS A DL model was able to assess the probability of glaucomatous structural progression from SD-OCT RNFL thickness measurements. The model agreed well with expert judgments and outperformed conventional trend-based analyses of change, while also providing indication of the likely locations of change. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
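The interval likelihood ratios mentioned in the abstract feed the standard odds-form Bayes update: post-test odds equal pre-test odds times the likelihood ratio. A minimal sketch:

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Bayes update in odds form: convert the pre-test probability to
    odds, multiply by the likelihood ratio, and convert back to a
    probability."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)
```

For instance, a pre-test probability of progression of 0.2 combined with an interval likelihood ratio of 4 yields a post-test probability of 0.5.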
Affiliation(s)
- Eduardo B Mariottoni
- Vision, Imaging, and Performance (VIP) Laboratory, Duke Eye Center, Duke University, Durham, North Carolina; Department of Ophthalmology, Federal University of São Paulo, São Paulo, Brazil
- Shounak Datta
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
- Leonardo S Shigueoka
- Vision, Imaging, and Performance (VIP) Laboratory, Duke Eye Center, Duke University, Durham, North Carolina
- Alessandro A Jammal
- Vision, Imaging, and Performance (VIP) Laboratory, Duke Eye Center, Duke University, Durham, North Carolina
- Ivan M Tavares
- Department of Ophthalmology, Federal University of São Paulo, São Paulo, Brazil
- Ricardo Henao
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
- Lawrence Carin
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
- Felipe A Medeiros
- Vision, Imaging, and Performance (VIP) Laboratory, Duke Eye Center, Duke University, Durham, North Carolina; Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
10
Gutierrez A, Chen TC. Artificial intelligence in glaucoma: posterior segment optical coherence tomography. Curr Opin Ophthalmol 2023; 34:245-254. [PMID: 36728784 PMCID: PMC10090343 DOI: 10.1097/icu.0000000000000934] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
PURPOSE OF REVIEW To summarize the recent literature on deep learning (DL) model applications in glaucoma detection and surveillance using posterior segment optical coherence tomography (OCT) imaging. RECENT FINDINGS DL models use OCT-derived parameters, including retinal nerve fiber layer (RNFL) scans, macular scans, and optic nerve head (ONH) scans, as well as combinations of these parameters, to achieve high diagnostic accuracy in detecting glaucomatous optic neuropathy (GON). Although RNFL segmentation is the most widely used OCT parameter for glaucoma detection by ophthalmologists, newer DL models most commonly use a combination of parameters, which provides a more comprehensive approach. Compared with DL models for diagnosing glaucoma, DL models predicting glaucoma progression are less commonly studied but have also been developed. SUMMARY DL models offer time-efficient and objective options in the management of glaucoma. Although artificial intelligence models have already been commercially accepted as diagnostic tools for other ophthalmic diseases, there is no commercially approved DL tool for the diagnosis of glaucoma, most likely in part because of the lack of a universal definition of glaucoma based on OCT-derived parameters alone (see Supplemental Digital Content 1 for video abstract, http://links.lww.com/COOP/A54).
Affiliation(s)
- Alfredo Gutierrez
- Tufts School of Medicine
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Glaucoma Service
- Teresa C. Chen
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Glaucoma Service
- Harvard Medical School, Boston, Massachusetts, USA
11
Sunija AP, Gopi VP, Krishna AK. D-DAGNet: An improved hybrid deep network for automated classification of glaucoma from OCT images. Biomedical Engineering: Applications, Basis and Communications 2023; 35. [DOI: 10.4015/s1016237222500429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/01/2025]
Abstract
The introduction of Optical Coherence Tomography (OCT) in ophthalmology has resulted in significant progress in the early detection of glaucoma. Traditional approaches to identifying retinal diseases comprise an analysis of medical history and manual assessment of retinal images. Manual diagnosis is time-consuming and requires considerable human expertise, without which errors could be costly to human sight. The use of artificial intelligence, such as machine learning techniques, in image analysis has been gaining ground in recent years for accurate, fast, and cost-effective diagnosis from retinal images. This work proposes a Directed Acyclic Graph (DAG) network incorporating Depthwise Convolution (DC) to recognize early-stage retinal glaucoma from OCT images. The proposed method leverages the benefits of both depthwise convolution and the DAG structure: feature information in the Convolutional Neural Network (CNN) is processed according to the partial order over the nodes. The Grad-CAM method is adopted to quantify and visualize normal and glaucomatous OCT heatmaps to improve diagnostic interpretability. The experiments were performed on the LFH_Glaucoma dataset, composed of 1105 glaucoma and 1049 healthy OCT scans. The proposed faster hybrid Depthwise-Directed Acyclic Graph Network (D-DAGNet) achieved an accuracy of 0.9995, precision of 0.9989, recall of 1.0, F1-score of 0.9994, and AUC of 0.9995 with only 0.0047 M learnable parameters. The hybrid D-DAGNet enhances network training efficacy and significantly reduces the learnable parameters required to identify the features of interest. The proposed network overcomes the problems of overfitting and performance degradation caused by the accretion of layers in deep networks, and is thus useful for real-time identification of glaucoma features from retinal OCT images.
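D-DAGNet's exact topology is not given in the abstract; the sketch below illustrates only the generic depthwise-separable building block the entry names, and why it shrinks the parameter count: a depthwise stage convolves each channel with its own filter, then a 1x1 pointwise stage mixes channels. Shapes and values here are illustrative, not the paper's.

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_k):
    """Depthwise stage then pointwise stage ('valid' padding, stride 1).

    x: (C, H, W); depthwise_k: (C, k, k), one filter per input channel;
    pointwise_k: (C_out, C), the 1x1 channel-mixing weights.
    """
    C, H, W = x.shape
    k = depthwise_k.shape[1]
    out_h, out_w = H - k + 1, W - k + 1
    dw = np.zeros((C, out_h, out_w))
    for c in range(C):  # each channel sees only its own k x k filter
        for i in range(out_h):
            for j in range(out_w):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * depthwise_k[c])
    # pointwise 1x1 convolution: mix channels at every spatial location
    return np.einsum('oc,chw->ohw', pointwise_k, dw)

# Weight-count comparison that motivates the design (C=32, C_out=64, k=3):
C, C_out, k = 32, 64, 3
separable = C * k * k + C_out * C  # 288 + 2048 = 2336 weights
standard = C_out * C * k * k       # 18432 weights
print(separable, standard)         # 2336 18432
```

The roughly 8x reduction for this configuration is the same effect the abstract credits for D-DAGNet's 0.0047 M parameter budget.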
Affiliation(s)
- A. P. Sunija
- Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, 620015, Tamil Nadu, India
- Varun P. Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, 620015, Tamil Nadu, India
- Adithya K. Krishna
- Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, 620015, Tamil Nadu, India
12
Thompson AC, Falconi A, Sappington RM. Deep learning and optical coherence tomography in glaucoma: Bridging the diagnostic gap on structural imaging. FRONTIERS IN OPHTHALMOLOGY 2022; 2:937205. [PMID: 38983522 PMCID: PMC11182271 DOI: 10.3389/fopht.2022.937205] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 08/22/2022] [Indexed: 07/11/2024]
Abstract
Glaucoma is a leading cause of progressive blindness and visual impairment worldwide. Microstructural evidence of glaucomatous damage to the optic nerve head and associated tissues can be visualized using optical coherence tomography (OCT). In recent years, development of novel deep learning (DL) algorithms has led to innovative advances and improvements in automated detection of glaucomatous damage and progression on OCT imaging. DL algorithms have also been trained utilizing OCT data to improve detection of glaucomatous damage on fundus photography, thus improving the potential utility of color photos which can be more easily collected in a wider range of clinical and screening settings. This review highlights ten years of contributions to glaucoma detection through advances in deep learning models trained utilizing OCT structural data and posits future directions for translation of these discoveries into the field of aging and the basic sciences.
Affiliation(s)
- Atalie C. Thompson
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston Salem, NC, United States
- Department of Internal Medicine, Gerontology, and Geriatric Medicine, Wake Forest School of Medicine, Winston Salem, NC, United States
- Aurelio Falconi
- Wake Forest School of Medicine, Winston Salem, NC, United States
- Rebecca M. Sappington
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston Salem, NC, United States
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston Salem, NC, United States
13
Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113 DOI: 10.1016/j.ajo.2021.12.008] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 11/24/2021] [Accepted: 12/03/2021] [Indexed: 11/01/2022]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data numbers were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performance for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). For fundus images, ML performed similarly on all data and on external data, whereas the external test result for OCT was less robust (AUC = 0.87). When comparing classifier categories, although support vector machines showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), results from neural networks and other classifiers were still good (pooled sensitivity, specificity, and AUC ranges, 0.88-0.93, 0.90-0.93, and 0.95-0.97, respectively). When analyzed by dataset type, ML demonstrated consistent performance on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% CI, 0.93-0.97]). CONCLUSIONS The performance of ML in detecting glaucoma compares favorably with that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
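The meta-analysis pools accuracy with a bivariate random-effects model, which is not reproduced here. As a simplified, univariate illustration of the idea only, a DerSimonian-Laird random-effects pooling of logit-transformed per-study sensitivities (study counts below are hypothetical) might look like:

```python
import math

def pool_logit_random_effects(tp_fn_pairs):
    """DerSimonian-Laird random-effects pooling of per-study sensitivities.

    Each (tp, fn) pair yields a logit sensitivity log(tp/fn) with approximate
    variance 1/tp + 1/fn; between-study heterogeneity tau^2 is estimated from
    Cochran's Q, and the pooled logit is back-transformed to a proportion.
    """
    y = [math.log(tp / fn) for tp, fn in tp_fn_pairs]   # logit(tp/(tp+fn))
    v = [1.0 / tp + 1.0 / fn for tp, fn in tp_fn_pairs]
    w = [1.0 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_star = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# Three hypothetical studies as (true positives, false negatives):
print(round(pool_logit_random_effects([(90, 10), (80, 20), (95, 5)]), 3))
```

The bivariate model the paper actually uses additionally models the correlation between sensitivity and specificity across studies, which this univariate sketch ignores.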
14
Singh LK, Garg H, Khanna M. Performance evaluation of various deep learning based models for effective glaucoma evaluation using optical coherence tomography images. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:27737-27781. [PMID: 35368855 PMCID: PMC8962290 DOI: 10.1007/s11042-022-12826-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 02/20/2022] [Accepted: 03/09/2022] [Indexed: 06/14/2023]
Abstract
Glaucoma is the dominant cause of irreversible blindness worldwide, and its best remedy is early and timely detection. Optical coherence tomography has become the most commonly used imaging modality for detecting glaucomatous damage in recent years. Deep learning with the optical coherence tomography modality helps predict glaucoma more accurately and less tediously. This experimental study aims to perform glaucoma prediction using eight different ImageNet-pretrained models on optical coherence tomography images of glaucoma. A thorough investigation is performed to evaluate these models' performance on various efficiency metrics, which will help discover the best-performing model. Every network is tested with three different optimizers, namely Adam, Root Mean Squared Propagation, and Stochastic Gradient Descent, to find the most relevant results. An attempt has been made to improve the performance of the models using transfer learning and fine-tuning. The work presented in this study was initially trained and tested on a private database consisting of 4220 images (2110 normal optical coherence tomography and 2110 glaucoma optical coherence tomography). Based on the results, the four best-performing models were shortlisted. Later, these models were tested on the well-recognized standard public Mendeley dataset. Experimental results illustrate that VGG16 with the Root Mean Squared Propagation optimizer attains auspicious performance with 95.68% accuracy. The proposed work concludes that ImageNet-pretrained models are a good alternative for a computer-based automatic glaucoma screening system. This fully automated system has considerable potential to distinguish normal from glaucomatous optical coherence tomography scans, helping to detect this retinal condition in suspected patients for better diagnosis, avoiding vision loss, and reducing the time and involvement required of senior ophthalmologists (experts).
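The best result in this entry comes from VGG16 trained with the Root Mean Squared Propagation optimizer. As a framework-free sketch of what that optimizer does, the update below scales each step by a running root-mean-square of past gradients; the toy objective and hyperparameters are illustrative, not the paper's:

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.001, rho=0.9, eps=1e-8):
    """One RMSProp update: divide the step by an EMA-based RMS of gradients."""
    cache = rho * cache + (1.0 - rho) * grad ** 2  # running mean of grad^2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Minimize the toy objective f(w) = (w - 3)^2 from w = 0; gradient is 2(w - 3).
w, cache = np.array([0.0]), np.array([0.0])
for _ in range(2000):
    w, cache = rmsprop_step(w, 2.0 * (w - 3.0), cache, lr=0.01)
print(np.round(w, 2))  # converges near the minimizer w = 3
```

Because the denominator tracks gradient magnitude, the effective step size is roughly `lr` regardless of how steep the objective is, which is the property that often makes RMSProp fine-tuning less sensitive to gradient scale than plain SGD.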
Affiliation(s)
- Law Kumar Singh
- Department of Computer Science and Engineering, Sharda University, Greater Noida, India
- Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
- Hitendra Garg
- Department of Computer Engineering and Applications, GLA University, Mathura, India
- Munish Khanna
- Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
15
Shamsi F, Liu R, Owsley C, Kwon M. Identifying the Retinal Layers Linked to Human Contrast Sensitivity Via Deep Learning. Invest Ophthalmol Vis Sci 2022; 63:27. [PMID: 35179554 PMCID: PMC8859491 DOI: 10.1167/iovs.63.2.27] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 01/31/2022] [Indexed: 12/18/2022] Open
Abstract
Purpose Luminance contrast is the fundamental building block of human spatial vision. Therefore, contrast sensitivity, the reciprocal of the contrast threshold required for target detection, has been a barometer of human visual function. Although retinal ganglion cells (RGCs) are known to be involved in contrast coding, it remains unknown whether the retinal layers containing RGCs are linked to a person's contrast sensitivity (e.g., Pelli-Robson contrast sensitivity) and, if so, to what extent the retinal layers are related to behavioral contrast sensitivity. Thus, the current study aims to identify the retinal layers and features critical for predicting a person's contrast sensitivity via deep learning. Methods Data were collected from 225 subjects, including individuals with glaucoma, age-related macular degeneration, or normal vision. A deep convolutional neural network was trained to predict a person's Pelli-Robson contrast sensitivity from structural retinal images measured with optical coherence tomography. Then, activation maps that represent the critical features learned by the network for the output prediction were computed. Results The thicknesses of both the ganglion cell and inner plexiform layers, reflecting RGC counts, were found to be significantly correlated with contrast sensitivity (r = 0.26 ∼ 0.58, Ps < 0.001 for different eccentricities). Importantly, the results showed that the retinal layers containing RGCs were the critical features the network uses to predict a person's contrast sensitivity (average R2 = 0.36 ± 0.10). Conclusions The findings confirm the structure-function relationship for contrast sensitivity while highlighting the role of RGC density in human contrast sensitivity.
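The activation maps in this entry come from the authors' trained CNN and cannot be reproduced from the abstract. As a generic, framework-free illustration of the underlying idea, an occlusion-sensitivity map scores each input region by how much masking that region changes the model's output; the toy "model" and image below are entirely hypothetical:

```python
import numpy as np

def occlusion_map(model, image, patch=4, fill=0.0):
    """Score each patch by the absolute output change when it is masked."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # mask one patch
            heat[i // patch, j // patch] = abs(base - model(occluded))
    return heat

# Toy 'model': responds only to the mean of the top-left 8x8 quadrant,
# standing in for a network that attends to one retinal layer.
model = lambda img: img[:8, :8].mean()
heat = occlusion_map(model, np.ones((16, 16)))
print(heat)  # only the top-left 2x2 block of the 4x4 map is nonzero
```

Regions whose occlusion barely moves the output score near zero, so the resulting heatmap highlights exactly the input structures the model depends on, analogous to the RGC-containing layers highlighted in this study.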
Affiliation(s)
- Foroogh Shamsi
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- Rong Liu
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- Department of Life Science and Medicine, University of Science and Technology of China, Hefei, China
- Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- MiYoung Kwon
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
16
García G, Del Amor R, Colomer A, Verdú-Monedero R, Morales-Sánchez J, Naranjo V. Circumpapillary OCT-focused hybrid learning for glaucoma grading using tailored prototypical neural networks. Artif Intell Med 2021; 118:102132. [PMID: 34412848 DOI: 10.1016/j.artmed.2021.102132] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 06/21/2021] [Accepted: 06/23/2021] [Indexed: 12/22/2022]
Abstract
Glaucoma is one of the leading causes of blindness worldwide and Optical Coherence Tomography (OCT) is the quintessential imaging technique for its detection. Unlike most of the state-of-the-art studies focused on glaucoma detection, in this paper, we propose, for the first time, a novel framework for glaucoma grading using raw circumpapillary B-scans. In particular, we set out a new OCT-based hybrid network which combines hand-driven and deep learning algorithms. An OCT-specific descriptor is proposed to extract hand-crafted features related to the retinal nerve fibre layer (RNFL). In parallel, an innovative CNN is developed using skip-connections to include tailored residual and attention modules to refine the automatic features of the latent space. The proposed architecture is used as a backbone to conduct a novel few-shot learning based on static and dynamic prototypical networks. The k-shot paradigm is redefined giving rise to a supervised end-to-end system which provides substantial improvements discriminating between healthy, early and advanced glaucoma samples. The training and evaluation processes of the dynamic prototypical network are addressed from two fused databases acquired via Heidelberg Spectralis system. Validation and testing results reach a categorical accuracy of 0.9459 and 0.8788 for glaucoma grading, respectively. Besides, the high performance reported by the proposed model for glaucoma detection deserves a special mention. The findings from the class activation maps are directly in line with the clinicians' opinion since the heatmaps pointed out the RNFL as the most relevant structure for glaucoma diagnosis.
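The prototypical-network scheme in this entry rests on a simple rule: embed the support examples, average them per class into prototypes, and assign a query to its nearest prototype. The embedding backbone is the paper's hybrid CNN and is omitted here; this sketch uses hypothetical 2-D embeddings to show only the prototype step:

```python
import numpy as np

def prototypes(support_emb, support_lab, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support_emb[support_lab == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the prototype at smallest squared distance."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-D embeddings for three classes (0 healthy, 1 early, 2 advanced).
emb = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.], [9., 0.], [9., 1.]])
lab = np.array([0, 0, 1, 1, 2, 2])
protos = prototypes(emb, lab, 3)  # [[0, 0.5], [5, 5.5], [9, 0.5]]
print(classify(np.array([[4.8, 5.2], [8.5, 0.3]]), protos))  # [1 2]
```

In training, the softmax over negative distances replaces the hard `argmin`, so prototype positions receive gradients; the k-shot setting fixes how many support examples per class form each prototype.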
Affiliation(s)
- Gabriel García
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain
- Rocío Del Amor
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain
- Adrián Colomer
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain
- Rafael Verdú-Monedero
- Departamento de Tecnologías de la Información y las Comunicaciones, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Juan Morales-Sánchez
- Departamento de Tecnologías de la Información y las Comunicaciones, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Valery Naranjo
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain