1. Kankrale R, Kokare M. Artificial intelligence in retinal image analysis for hypertensive retinopathy diagnosis: a comprehensive review and perspective. Vis Comput Ind Biomed Art 2025; 8:11. PMID: 40307650; PMCID: PMC12044089; DOI: 10.1186/s42492-025-00194-x.
Abstract
Hypertensive retinopathy (HR) occurs when the retinal vessels, which supply the photosensitive layer at the back of the eye, are injured by high blood pressure. Artificial intelligence (AI) in retinal image analysis (RIA) for HR diagnosis involves the use of advanced computational algorithms and machine learning (ML) strategies to automatically recognize and evaluate signs of HR in retinal images. This review aims to advance the field of HR diagnosis by investigating the latest ML and deep learning techniques and highlighting their efficacy and capability for early diagnosis and intervention. By analyzing recent advancements and emerging trends, this study seeks to inspire further innovation in automated RIA. In this context, AI shows significant potential for enhancing the accuracy, effectiveness, and consistency of HR diagnoses, which will eventually lead to better clinical results by enabling earlier intervention and more precise management of the condition. Overall, the integration of AI into RIA represents a considerable step forward in the early identification and treatment of HR, offering substantial benefits to both healthcare providers and patients.
Affiliation(s)
- Rajendra Kankrale
- Department of Computer Science and Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra 431606, India.
- Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra 431606, India
2. Singh Parmar UP, Surico PL, Singh RB, Romano F, Salati C, Spadea L, Musa M, Gagliano C, Mori T, Zeppieri M. Künstliche Intelligenz (KI) zur Früherkennung von Netzhauterkrankungen [Artificial intelligence (AI) for the early detection of retinal diseases]. Kompass Ophthalmologie 2025:1-8. DOI: 10.1159/000546000.
Abstract
Artificial intelligence (AI) has emerged as a transformative tool in the field of ophthalmology, revolutionizing disease diagnosis and treatment. This paper provides a comprehensive overview of AI applications in various retinal diseases, highlighting their potential to increase screening efficiency, facilitate early diagnosis, and improve patient outcomes. We explain the fundamental concepts of AI, including machine learning (ML) and deep learning (DL), and their application in ophthalmology, emphasizing the importance of AI-based solutions in addressing the complexity and variability of retinal diseases. We also address specific applications of AI in retinal diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), macular neovascularization, retinopathy of prematurity (ROP), retinal vein occlusion (RVO), hypertensive retinopathy (HR), retinitis pigmentosa, Stargardt disease, Best disease (Best vitelliform macular dystrophy), and sickle cell retinopathy. We focus on the current landscape of AI technologies, including various AI models, their performance metrics, and clinical implications. In addition, we examine the challenges and difficulties of integrating AI into clinical practice, including the "black box" phenomenon, biases in data representation, and limitations related to holistic patient assessment. Finally, the collaborative role of AI alongside healthcare professionals is highlighted, and a synergistic approach to healthcare delivery is advocated. The importance of using AI as a complement to, rather than a replacement for, human expertise is emphasized, in order to maximize its potential to revolutionize healthcare, reduce healthcare disparities, and improve patient outcomes in the evolving medical landscape.
3. Lendzioszek M, Bryl A, Poppe E, Zorena K, Mrugacz M. Retinal vein occlusion - background knowledge and foreground knowledge prospects - a review. J Clin Med 2024; 13:3950. PMID: 38999513; PMCID: PMC11242360; DOI: 10.3390/jcm13133950.
Abstract
Thrombosis of the retinal veins is one of the most common retinal vascular diseases and may lead to vascular blindness. The latest epidemiological data leave no illusions that the burden patients with this diagnosis place on healthcare systems will increase worldwide. This obliges scientists to search for new therapeutic and diagnostic options. In the 21st century, there has been tremendous progress in retinal imaging techniques, which has facilitated a better understanding of the mechanisms related to the development of retinal vein occlusion (RVO) and its complications, and consequently has enabled the introduction of new treatment methods. Moreover, artificial intelligence (AI) is likely to assist in selecting the best treatment option for patients in the near future. The aim of this comprehensive review is to re-evaluate the old but still relevant data on RVO and confront them with new studies. The paper provides a detailed overview of diagnosis, current treatment, prevention, and future therapeutic possibilities regarding RVO, and clarifies the mechanism of macular edema in this disease entity.
Affiliation(s)
- Maja Lendzioszek
- Department of Ophthalmology, Voivodship Hospital, 18-400 Lomza, Poland
- Anna Bryl
- Department of Ophthalmology and Eye Rehabilitation, Medical University of Bialystok, 15-089 Bialystok, Poland
- Ewa Poppe
- Department of Ophthalmology, Voivodship Hospital, 18-400 Lomza, Poland
- Katarzyna Zorena
- Department of Immunobiology and Environment Microbiology, Medical University of Gdansk, Dębinki 7, 80-211 Gdansk, Poland
- Malgorzata Mrugacz
- Department of Ophthalmology and Eye Rehabilitation, Medical University of Bialystok, 15-089 Bialystok, Poland
4. Matloob Abbasi M, Iqbal S, Aurangzeb K, Alhussein M, Khan TM. LMBiS-Net: a lightweight bidirectional skip connection based multipath CNN for retinal blood vessel segmentation. Sci Rep 2024; 14:15219. PMID: 38956117; PMCID: PMC11219784; DOI: 10.1038/s41598-024-63496-9.
Abstract
Blinding eye diseases are often related to changes in retinal structure, which can be detected by analysing retinal blood vessels in fundus images. However, existing techniques struggle to accurately segment these delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on specific operations can limit its ability to capture crucial details such as vessel edges. This paper introduces LMBiS-Net, a lightweight convolutional neural network designed for the segmentation of retinal vessels. LMBiS-Net achieves exceptional performance with a remarkably low number of learnable parameters (only 0.172 million). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for information flow between the encoder and decoder. In addition, the efficiency of the model is optimised by carefully selecting the number of filters to avoid filter overlap, which significantly reduces training time and improves computational efficiency. To assess LMBiS-Net's robustness and ability to generalise to unseen data, comprehensive evaluations were conducted on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. On these datasets, respectively, LMBiS-Net obtains sensitivity values of 83.60%, 84.37%, 86.05%, and 83.48%; specificity values of 98.83%, 98.77%, 98.96%, and 98.77%; accuracy scores of 97.08%, 97.69%, 97.75%, and 96.90%; and AUC values of 98.80%, 98.82%, 98.71%, and 88.77%. It also records F1 scores of 83.43%, 84.44%, 83.54%, and 78.73% on the same datasets. These evaluations demonstrate that LMBiS-Net achieves high segmentation accuracy while exhibiting both robustness and generalisability across retinal image datasets, making it a promising tool for various clinical applications.
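The sensitivity, specificity, accuracy, and F1 figures reported above are standard pixel-wise measures derived from the confusion matrix of a predicted binary vessel mask against the ground truth. As an illustration (not the authors' evaluation code), they can be computed as:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise metrics for a binary vessel mask (illustrative sketch)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels correctly found
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fp = np.sum(pred & ~truth)     # background mislabelled as vessel
    fn = np.sum(~pred & truth)     # vessel pixels missed
    sensitivity = tp / (tp + fn)                   # recall on vessel pixels
    specificity = tn / (tn + fp)                   # recall on background pixels
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)               # equals the Dice coefficient
    return sensitivity, specificity, accuracy, f1
```

For binary masks the F1 score coincides with the Dice coefficient, which is why vessel-segmentation papers often report the two interchangeably.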
Affiliation(s)
- Mufassir Matloob Abbasi
- Department of Electrical Engineering, Abasyn University Islamabad Campus (AUIC), Islamabad, 44000, Pakistan
- Shahzaib Iqbal
- Department of Electrical Engineering, Abasyn University Islamabad Campus (AUIC), Islamabad, 44000, Pakistan
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, P. O. Box 51178, 11543, Saudi Arabia
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, P. O. Box 51178, 11543, Saudi Arabia
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
5. Hou JM, Lee CK, Lin YA, Tseng PH. A semi-supervised retinal vessel segmentation method via adaptive uncertainty estimation. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-5. PMID: 40039152; DOI: 10.1109/embc53108.2024.10782670.
Abstract
We introduce a semi-supervised vessel segmentation technique that leverages a minimal amount of labeled data alongside substantial unlabeled data. This method addresses the limitations of supervised learning in medical image segmentation, which typically requires labor-intensive pixel-level labeling by experts. Using semi-supervised learning, our proposed adaptive uncertainty estimation (AUE) method enhances model performance through pixel-level uncertainty estimation and adaptive thresholding. This technique improves predictive accuracy by preserving high-confidence pixels between teacher-student networks, thereby effectively utilizing unlabeled data to acquire new features. Our approach surpasses both supervised and other semi-supervised models in accuracy on the STARE public retinal dataset. We have also benchmarked against several advanced semi-supervised segmentation methods, with our method achieving the best performance.
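The core idea of confidence-based pseudo-labelling in a teacher-student setup can be sketched as follows. Note that the threshold rule here is a simplified stand-in: the paper's adaptive uncertainty estimation is more elaborate, and its exact formulation is not given in the abstract.

```python
import numpy as np

def confident_pseudo_labels(teacher_probs, quantile=0.8):
    """Keep only high-confidence teacher predictions as pseudo-labels
    (a minimal sketch of confidence filtering, not the paper's AUE method).

    teacher_probs: (H, W) vessel probabilities from the teacher network.
    Returns a pseudo-label map and a mask of pixels the student trains on."""
    # Confidence is distance from the decision boundary: 0 = uncertain, 1 = certain.
    confidence = np.abs(teacher_probs - 0.5) * 2
    # Per-image ("adaptive") cut-off instead of a fixed global threshold.
    threshold = np.quantile(confidence, quantile)
    keep = confidence >= threshold
    pseudo = (teacher_probs >= 0.5).astype(np.uint8)
    return pseudo, keep
```

The student's unsupervised loss would then be computed only on the `keep` pixels, so uncertain teacher predictions never propagate into training.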
6. Iqbal S, Khan TM, Naqvi SS, Naveed A, Usman M, Khan HA, Razzak I. LDMRes-Net: a lightweight neural network for efficient medical image segmentation on IoT and edge devices. IEEE J Biomed Health Inform 2024; 28:3860-3871. PMID: 37938951; DOI: 10.1109/jbhi.2023.3331278.
Abstract
In this study, we propose the Lightweight Dual Multiscale Residual block-based convolutional neural network (LDMRes-Net), tailored for medical image segmentation on IoT and edge platforms. Conventional U-Net-based models face challenges in meeting the speed and efficiency demands of real-time clinical applications, such as disease monitoring, radiation therapy, and image-guided surgery; LDMRes-Net is specifically designed to overcome these difficulties. Its remarkably low number of learnable parameters (0.072 M) makes it highly suitable for resource-constrained devices. The model's key innovation lies in its dual multiscale residual block architecture, which enables the extraction of refined features on multiple scales, enhancing overall segmentation performance. To further optimize efficiency, the number of filters is carefully selected to prevent overlap, reduce training time, and improve computational efficiency. The study includes comprehensive evaluations focusing on the segmentation of retinal vessels and hard exudates, both crucial for the diagnosis and treatment of ophthalmic disease. The results demonstrate the robustness, generalizability, and high segmentation accuracy of LDMRes-Net, positioning it as an efficient tool for accurate and rapid medical image segmentation in diverse clinical applications, particularly on IoT and edge platforms. Such advances hold significant promise for improving healthcare outcomes and enabling real-time medical image analysis in resource-limited settings.
7. Parmar UPS, Surico PL, Singh RB, Romano F, Salati C, Spadea L, Musa M, Gagliano C, Mori T, Zeppieri M. Artificial intelligence (AI) for early diagnosis of retinal diseases. Medicina (Kaunas) 2024; 60:527. PMID: 38674173; PMCID: PMC11052176; DOI: 10.3390/medicina60040527.
Abstract
Artificial intelligence (AI) has emerged as a transformative tool in the field of ophthalmology, revolutionizing disease diagnosis and management. This paper provides a comprehensive overview of AI applications in various retinal diseases, highlighting its potential to enhance screening efficiency, facilitate early diagnosis, and improve patient outcomes. Herein, we elucidate the fundamental concepts of AI, including machine learning (ML) and deep learning (DL), and their application in ophthalmology, underscoring the significance of AI-driven solutions in addressing the complexity and variability of retinal diseases. Furthermore, we delve into the specific applications of AI in retinal diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), macular neovascularization, retinopathy of prematurity (ROP), retinal vein occlusion (RVO), hypertensive retinopathy (HR), retinitis pigmentosa, Stargardt disease, Best vitelliform macular dystrophy, and sickle cell retinopathy. We focus on the current landscape of AI technologies, including various AI models, their performance metrics, and clinical implications. Furthermore, we aim to address challenges and pitfalls associated with the integration of AI in clinical practice, including the "black box phenomenon", biases in data representation, and limitations in comprehensive patient assessment. In conclusion, this review emphasizes the collaborative role of AI alongside healthcare professionals, advocating for a synergistic approach to healthcare delivery. It highlights the importance of leveraging AI to augment, rather than replace, human expertise, thereby maximizing its potential to revolutionize healthcare delivery, mitigate healthcare disparities, and improve patient outcomes in the evolving landscape of medicine.
Affiliation(s)
- Pier Luigi Surico
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Rohan Bir Singh
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Francesco Romano
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Carlo Salati
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
- Leopoldo Spadea
- Eye Clinic, Policlinico Umberto I, “Sapienza” University of Rome, 00142 Rome, Italy
- Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
- Caterina Gagliano
- Faculty of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
- Tommaso Mori
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Department of Ophthalmology, University of California San Diego, La Jolla, CA 92122, USA
- Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
8. Ji YK, Hua RR, Liu S, Xie CJ, Zhang SC, Yang WH. Intelligent diagnosis of retinal vein occlusion based on color fundus photographs. Int J Ophthalmol 2024; 17:1-6. PMID: 38239946; PMCID: PMC10754666; DOI: 10.18240/ijo.2024.01.01.
Abstract
AIM: To develop an artificial intelligence (AI) diagnostic model based on a deep learning (DL) algorithm to diagnose different types of retinal vein occlusion (RVO) by recognizing color fundus photographs (CFPs). METHODS: A total of 914 CFPs from healthy people and patients with RVO were collected as experimental datasets and used to train, validate, and test the RVO diagnostic model. All images were divided into four categories [normal, central retinal vein occlusion (CRVO), branch retinal vein occlusion (BRVO), and macular retinal vein occlusion (MRVO)] by three fundus disease experts. A Swin Transformer was used to build the RVO diagnostic model, and diagnostic experiments on the different RVO types were conducted. The model's performance was compared to that of the experts. RESULTS: The accuracy of the model in diagnosing normal, CRVO, BRVO, and MRVO reached 1.000, 0.978, 0.957, and 0.978; the specificity reached 1.000, 0.986, 0.982, and 0.976; the sensitivity reached 1.000, 0.955, 0.917, and 1.000; and the F1-score reached 1.000, 0.955, 0.943, and 0.887, respectively. In addition, the areas under the curve for normal, CRVO, BRVO, and MRVO were 1.000, 0.900, 0.959, and 0.970, respectively. The diagnostic results were highly consistent with those of the fundus disease experts, and the diagnostic performance was superior. CONCLUSION: The diagnostic model developed in this study can accurately diagnose different types of RVO, effectively relieve the workload of clinicians, and support the subsequent clinical diagnosis and treatment of RVO patients.
Affiliation(s)
- Yu-Ke Ji
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
- Rong-Rong Hua
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210000, Jiangsu Province, China
- Sha Liu
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
- Cui-Juan Xie
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518000, Guangdong Province, China
- Shao-Chong Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518000, Guangdong Province, China
- Wei-Hua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518000, Guangdong Province, China
9. Heger KA, Waldstein SM. Artificial intelligence in retinal imaging: current status and future prospects. Expert Rev Med Devices 2024; 21:73-89. PMID: 38088362; DOI: 10.1080/17434440.2023.2294364.
Abstract
INTRODUCTION: The steadily growing and aging world population, together with the continuously increasing prevalence of vision-threatening retinal diseases, is placing an increasing burden on the global healthcare system. The main challenges within retinology are identifying the comparatively few patients requiring therapy within the large population, ensuring comprehensive screening for retinal disease, and individualized therapy planning. In order to sustain high-quality ophthalmic care in the future, the incorporation of artificial intelligence (AI) technologies into clinical practice represents a potential solution. AREAS COVERED: This review sheds light on already realized and promising future applications of AI techniques in retinal imaging. The main attention is directed at applications in diabetic retinopathy and age-related macular degeneration. The principles of use in disease screening, grading, therapeutic planning, and prediction of future developments are explained based on the currently available literature. EXPERT OPINION: The recent accomplishments of AI in retinal imaging indicate that its implementation into daily practice is likely to fundamentally change the ophthalmic healthcare system and bring us one step closer to the goal of individualized treatment. However, it must be emphasized that the aim is to optimally support clinicians by gradually incorporating AI approaches, rather than replacing ophthalmologists.
Affiliation(s)
- Katharina A Heger
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
- Sebastian M Waldstein
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
10. Liu YF, Ji YK, Fei FQ, Chen NM, Zhu ZT, Fei XZ. Research progress in artificial intelligence assisted diabetic retinopathy diagnosis. Int J Ophthalmol 2023; 16:1395-1405. PMID: 37724288; PMCID: PMC10475636; DOI: 10.18240/ijo.2023.09.05.
Abstract
Diabetic retinopathy (DR) is one of the most common retinal vascular diseases and one of the main causes of blindness worldwide. Early detection and treatment can effectively delay vision decline and even blindness in patients with DR. In recent years, artificial intelligence (AI) models constructed by machine learning and deep learning (DL) algorithms have been widely used in ophthalmology research, especially in diagnosing and treating ophthalmic diseases, particularly DR. Regarding DR, AI has mainly been used in its diagnosis, grading, and lesion recognition and segmentation, and good research and application results have been achieved. This study summarizes the research progress in AI models based on machine learning and DL algorithms for DR diagnosis and discusses some limitations and challenges in AI research.
Affiliation(s)
- Yun-Fang Liu
- Department of Ophthalmology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Yu-Ke Ji
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
- Fang-Qin Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Nai-Mei Chen
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
- Zhen-Tao Zhu
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
- Xing-Zhen Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
11. Khan TM, Naqvi SS, Robles-Kelly A, Razzak I. Retinal vessel segmentation via a Multi-resolution Contextual Network and adversarial learning. Neural Netw 2023; 165:310-320. PMID: 37327578; DOI: 10.1016/j.neunet.2023.05.029.
Abstract
Timely and affordable computer-aided diagnosis of retinal diseases is pivotal in preventing blindness. Accurate retinal vessel segmentation plays an important role in assessing disease progression and in the diagnosis of such vision-threatening diseases. To this end, we propose a Multi-resolution Contextual Network (MRC-Net) that addresses these issues by extracting multi-scale features to learn contextual dependencies between semantically different features, and by using bi-directional recurrent learning to model former-latter and latter-former dependencies. Another key idea is training in adversarial settings to improve foreground segmentation through optimization of region-based scores. This novel strategy boosts the performance of the segmentation network in terms of the Dice score (and correspondingly the Jaccard index) while keeping the number of trainable parameters comparatively low. We have evaluated our method on three benchmark datasets, DRIVE, STARE, and CHASE, demonstrating its superior performance compared with competitive approaches in the literature.
Affiliation(s)
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia.
- Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Pakistan
- Antonio Robles-Kelly
- School of Information Technology, Faculty of Science Engineering & Built Environment, Deakin University, Locked Bag 20000, Geelong, Australia; Defence Science and Technology Group, 5111, Edinburgh, SA, Australia
- Imran Razzak
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
12. Aurangzeb K. A residual connection enabled deep neural network model for optic disk and optic cup segmentation for glaucoma diagnosis. Sci Prog 2023; 106:368504231201329. PMID: 37743660; PMCID: PMC10521305; DOI: 10.1177/00368504231201329.
Abstract
Glaucoma diagnosis at an early stage is vital for the timely initiation of treatment and the prevention of possible vision loss, and it requires an accurate estimation of the cup-to-disk ratio (CDR). Current automatic CDR computation techniques exhibit lower accuracy and higher complexity, both important considerations in designing systems for such critical diagnoses: they involve deeper deep learning models comprising large numbers of parameters, which results in higher system complexity and longer training/testing times. To address these challenges, this paper proposes a Residual Connection (non-identity)-based Deep Neural Network (RC-DNN), built on non-identity residual connectivity, for joint optic disk (OD) and optic cup (OC) detection. The proposed model is emboldened by efficient residual connectivity, which is beneficial in several ways. First, the model is efficient and can perform simultaneous segmentation of the OC and OD. Second, the efficient residual information flow mitigates the vanishing gradient problem, which results in faster convergence of the model. Third, the residual feature flow empowers the network to perform the segmentation with only a few network layers. We performed a comprehensive performance evaluation of the developed model based on its training on the RIM-ONE and DRISHTI-GS databases. For OC segmentation on test images from the {DRISHTI-GS, RIM-ONE} datasets, the proposed model achieves dice coefficient, Jaccard coefficient, sensitivity, specificity, and balanced accuracy of {92.62, 86.52}, {86.87, 77.54}, {94.21, 95.36}, {99.83, 99.639}, and {94.2, 98.9}, respectively. These experimental results indicate that the developed model provides significant performance enhancement for joint OC and OD segmentation. Additionally, the reduced computational complexity from fewer model parameters, together with the higher segmentation accuracy, speaks to the efficacy, robustness, and reliability of the developed model. These attributes advocate for its deployment in population-scale glaucoma screening programs.
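For context, the cup-to-disk ratio that motivates this model is a simple measurement once OC and OD segmentation masks are available. A minimal sketch of the vertical-extent variant (not necessarily the paper's exact computation) is:

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary OC/OD segmentation masks.

    Illustrative sketch: clinically, a vertical CDR above roughly 0.6 is a
    common glaucoma-suspect flag, though thresholds vary by population."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]   # rows containing the structure
        return rows.max() - rows.min() + 1     # vertical diameter in pixels
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)
```

This is why joint OC/OD segmentation accuracy matters: errors in either mask propagate directly into the CDR estimate used for screening.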
Affiliation(s)
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
13. Al-Halafi AM. Applications of artificial intelligence-assisted retinal imaging in systemic diseases: a literature review. Saudi J Ophthalmol 2023; 37:185-192. PMID: 38074306; PMCID: PMC10701145; DOI: 10.4103/sjopt.sjopt_153_23.
Abstract
The retina is a vulnerable structure that is frequently affected by different systemic conditions. The main mechanisms of systemic retinal damage are primary insult to the neurons of the retina, alterations of the local vasculature, or both. This vulnerability makes the retina an important window reflecting the severity of preexisting systemic disorders. Therefore, current imaging techniques aim to identify early retinal changes relevant to systemic anomalies, to enable early diagnosis and the start of adequate management. Artificial intelligence (AI) has become one of the most rapidly advancing technologies in the field of medicine, and its spread continues to extend to different specialties, including ophthalmology. Many studies have shown the potential of this technique in assisting the screening of retinal anomalies in the context of systemic disorders. In this review, we performed an extensive literature search to identify the most important studies that support the effectiveness of AI/deep learning for diagnosing systemic disorders through retinal imaging, and we highlight the utility of these technologies in the field of retina-based diagnosis of systemic conditions.
Affiliation(s)
- Ali M. Al-Halafi
- Department of Ophthalmology, Security Forces Hospital, Riyadh, Saudi Arabia
14
Sajid MZ, Qureshi I, Abbas Q, Albathan M, Shaheed K, Youssef A, Ferdous S, Hussain A. Mobile-HR: An Ophthalmologic-Based Classification System for Diagnosis of Hypertensive Retinopathy Using Optimized MobileNet Architecture. Diagnostics (Basel) 2023; 13:diagnostics13081439. [PMID: 37189539 DOI: 10.3390/diagnostics13081439] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 04/13/2023] [Accepted: 04/15/2023] [Indexed: 05/17/2023] Open
Abstract
Hypertensive retinopathy (HR) is a serious eye disease in which high blood pressure causes changes in the retinal arteries. Characteristic lesions of HR include cotton-wool patches, retinal hemorrhages, and retinal artery constriction. An ophthalmologist often diagnoses eye-related diseases by analyzing fundus images to identify the stages and symptoms of HR. Early detection of HR can significantly decrease the likelihood of vision loss. In the past, several computer-aided diagnosis (CADx) systems were developed to automatically detect HR using machine learning (ML) and deep learning (DL) techniques. Compared with ML methods, DL-based CADx systems require careful hyperparameter tuning, domain expert knowledge, and large training datasets. Such systems have proven effective at automating the extraction of complex features, but they suffer from class imbalance and overfitting. Moreover, state-of-the-art efforts focus on performance enhancement while ignoring the small size of available HR datasets, high computational complexity, and the lack of lightweight feature descriptors. In this study, a pretrained transfer learning (TL)-based MobileNet architecture is optimized for the diagnosis of HR by integrating dense blocks. We developed a lightweight HR diagnosis system, known as Mobile-HR, by integrating a pretrained model and dense blocks. To increase the size of the training and test datasets, we applied a data augmentation technique. The experimental outcomes show that the suggested approach outperformed alternatives in many cases. The Mobile-HR system achieved an accuracy of 99% and an F1 score of 0.99 on different datasets, and the results were verified by an expert ophthalmologist. These results indicate that the Mobile-HR CADx model produces positive outcomes and outperforms state-of-the-art HR systems in terms of accuracy.
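The paper does not spell out its exact augmentation pipeline; as a hedged illustration, a minimal NumPy sketch of the kind of label-preserving geometric augmentation commonly used to enlarge fundus-image training sets (the function name and transform choices are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def augment(images):
    """Enlarge a batch of images with simple geometric transforms.

    Horizontal/vertical flips and a 90-degree rotation preserve retinal
    lesions while multiplying the number of training samples by 4.
    """
    out = [images]
    out.append(images[:, :, ::-1])                  # horizontal flip
    out.append(images[:, ::-1, :])                  # vertical flip
    out.append(np.rot90(images, k=1, axes=(1, 2)))  # 90-degree rotation
    return np.concatenate(out, axis=0)

batch = np.arange(2 * 4 * 4).reshape(2, 4, 4).astype(float)
aug = augment(batch)
print(aug.shape)  # (8, 4, 4)
```

Each transform is applied to the whole batch at once, so the augmented set is simply four stacked copies of the batch under different symmetries.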
Affiliation(s)
- Muhammad Zaheer Sajid
- Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan
- Imran Qureshi
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Mubarak Albathan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Kashif Shaheed
- Department of Multimedia Systems, Faculty of Electronics, Telecommunication and Informatics, Gdansk University of Technology, 80-233 Gdansk, Poland
- Ayman Youssef
- Department of Computers and Systems, Electronics Research Institute, Cairo 12622, Egypt
- Sehrish Ferdous
- Department of Software Engineering, National University of Modern Languages, Rawalpindi 44000, Pakistan
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
15
Islam MT, Khan HA, Naveed K, Nauman A, Gulfam SM, Kim SW. LUVS-Net: A Lightweight U-Net Vessel Segmentor for Retinal Vasculature Detection in Fundus Images. Electronics 2023; 12:1786. [DOI: 10.3390/electronics12081786] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
This paper presents LUVS-Net, a lightweight convolutional network for retinal vessel segmentation in fundus images, designed for resource-constrained devices that typically cannot meet the computational requirements of large neural networks. The computational challenges arise from low-quality retinal images, wide variance in image acquisition conditions, and disparities in intensity. Consequently, existing segmentation methods require a multitude of trainable parameters, resulting in high computational complexity. The proposed Lightweight U-Net for Vessel Segmentation Network (LUVS-Net) achieves high segmentation performance with only a few trainable parameters. The network uses an encoder-decoder framework in which edge data are transposed from the first layers of the encoder to the last layer of the decoder, markedly improving convergence. Additionally, LUVS-Net's design allows a dual-stream information flow both inside and outside the encoder-decoder pair. The network width is enhanced using group convolutions, which allow the network to learn a larger number of low- and intermediate-level features. Spatial information loss is minimized using skip connections, and class imbalance is mitigated using Dice loss for pixel-wise classification. The performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1, and STARE. LUVS-Net proves highly competitive, outperforming alternative state-of-the-art segmentation methods and achieving comparable accuracy with two to three orders of magnitude fewer trainable parameters.
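The class-imbalance remedy the abstract names, Dice loss, can be stated compactly. A minimal NumPy sketch (the `eps` smoothing constant and function shape are assumptions, not values from the paper):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary vessel mask.

    pred:   predicted vessel probabilities in [0, 1]
    target: ground-truth binary mask
    Dice rewards overlap relative to total foreground mass, so the sparse
    vessel pixels are not swamped by the dominant background class.
    """
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

perfect = np.array([[0, 1], [1, 0]], dtype=float)
print(dice_loss(perfect, perfect))      # ~0.0 for a perfect prediction
print(dice_loss(1 - perfect, perfect))  # ~1.0 for a fully wrong one
```

In training frameworks the same expression is computed on differentiable tensors so its gradient can drive the network toward higher overlap.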
Affiliation(s)
- Muhammad Talha Islam
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Haroon Ahmed Khan
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Department of Electrical and Computer Engineering, Aarhus University, 8000 Aarhus, Denmark
- Ali Nauman
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
- Sardar Muhammad Gulfam
- Department of Electrical and Computer Engineering, Abbottabad Campus, COMSATS University Islamabad (CUI), Abbottabad 22060, Pakistan
- Sung Won Kim
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
16
Sun K, Chen Y, Chao Y, Geng J, Chen Y. A retinal vessel segmentation method based improved U-Net model. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
17
Ji Y, Ji Y, Liu Y, Zhao Y, Zhang L. Research progress on diagnosing retinal vascular diseases based on artificial intelligence and fundus images. Front Cell Dev Biol 2023; 11:1168327. [PMID: 37056999 PMCID: PMC10086262 DOI: 10.3389/fcell.2023.1168327] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 03/20/2023] [Indexed: 03/30/2023] Open
Abstract
As the only blood vessels in the body that can be observed directly, retinal vessels undergo pathological changes that are related to the metabolic state of the whole body and of many organ systems, seriously affecting patients' vision and quality of life. Timely diagnosis and treatment are key to improving visual prognosis. In recent years, with the rapid development of artificial intelligence, its application in ophthalmology has become increasingly extensive and in-depth, especially in the field of retinal vascular diseases. Research results based on artificial intelligence and fundus images are remarkable and offer great potential for early diagnosis and treatment. This paper reviews recent research progress on artificial intelligence in retinal vascular diseases (including diabetic retinopathy, hypertensive retinopathy, retinal vein occlusion, retinopathy of prematurity, and age-related macular degeneration). The limitations and challenges of this research are also discussed.
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Yun Ji
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Yunfang Liu
- Department of Ophthalmology, The First People's Hospital of Huzhou, Huzhou, Zhejiang, China
- Ying Zhao
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Liya Zhang
- Department of Ophthalmology, The First People's Hospital of Huzhou, Huzhou, Zhejiang, China
- Correspondence: Liya Zhang; Ying Zhao
Collapse
|
18
|
Arsalan M, Khan TM, Naqvi SS, Nawaz M, Razzak I. Prompt Deep Light-Weight Vessel Segmentation Network (PLVS-Net). IEEE/ACM Trans Comput Biol Bioinform 2023; 20:1363-1371. [PMID: 36194721 DOI: 10.1109/tcbb.2022.3211936] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Achieving accurate retinal vessel segmentation is critical for the monitoring and diagnosis of vision-threatening diseases such as diabetic retinopathy and age-related macular degeneration. Existing vessel segmentation methods are based on encoder-decoder architectures, which frequently fail to take the context of the retinal vessel structure into account in their analysis. As a result, such methods have difficulty bridging the semantic gap between encoder and decoder features. This paper proposes a Prompt Deep Light-weight Vessel Segmentation Network (PLVS-Net) to address these issues using prompt blocks. Each prompt block uses a combination of asymmetric kernel convolutions, depth-wise separable convolutions, and ordinary convolutions to extract useful features. This strategy improves the performance of the segmentation network while simultaneously decreasing the number of trainable parameters. The method outperformed competing approaches in the literature on three benchmark datasets: DRIVE, STARE, and CHASE.
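To illustrate why such blocks shrink a network, a back-of-the-envelope comparison of weight counts for a standard 3x3 convolution versus the asymmetric and depth-wise separable variants the abstract mentions (the channel sizes are arbitrary assumptions, not the paper's configuration):

```python
def conv_params(k_h, k_w, c_in, c_out):
    """Weights in a standard convolution layer (bias terms omitted)."""
    return k_h * k_w * c_in * c_out

c_in, c_out = 64, 64
standard = conv_params(3, 3, c_in, c_out)                          # one 3x3 conv
asymmetric = conv_params(3, 1, c_in, c_out) + conv_params(1, 3, c_in, c_out)
# depth-wise separable: one 3x3 filter per input channel, then a 1x1 pointwise conv
separable = 3 * 3 * c_in + conv_params(1, 1, c_in, c_out)

print(standard, asymmetric, separable)  # 36864 24576 4672
```

For these channel sizes, the asymmetric pair saves about a third of the weights and the depth-wise separable form roughly an eighth of them, which is the kind of reduction that lets such networks stay lightweight.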
19
Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Inf Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
20
Leveraging image complexity in macro-level neural network design for medical image segmentation. Sci Rep 2022; 12:22286. [PMID: 36566313 PMCID: PMC9790020 DOI: 10.1038/s41598-022-26482-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 12/15/2022] [Indexed: 12/25/2022] Open
Abstract
Recent progress in encoder-decoder neural network architecture design has led to significant performance improvements in a wide range of medical image segmentation tasks. However, state-of-the-art networks for a given task may be too computationally demanding to run on affordable hardware, and thus users often resort to practical workarounds by modifying various macro-level design aspects. Two common examples are downsampling of the input images and reducing the network depth or size to meet computer memory constraints. In this paper, we investigate the effects of these changes on segmentation performance and show that image complexity can be used as a guideline in choosing what is best for a given dataset. We consider four statistical measures to quantify image complexity and evaluate their suitability on ten different public datasets. For the purpose of our illustrative experiments, we use DeepLabV3+ (deep large-size), M2U-Net (deep lightweight), U-Net (shallow large-size), and U-Net Lite (shallow lightweight). Our results suggest that median frequency is the best complexity measure when deciding on an acceptable input downsampling factor and using a deep versus shallow, large-size versus lightweight network. For high-complexity datasets, a lightweight network running on the original images may yield better segmentation results than a large-size network running on downsampled images, whereas the opposite may be the case for low-complexity images.
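The paper's exact definition of its complexity measures is not reproduced here; one plausible reading of "median frequency", sketched in NumPy as the radial spatial frequency below which half of the image's spectral power lies (the function name and this interpretation are assumptions):

```python
import numpy as np

def median_frequency(img):
    """Radial frequency below which half of the image's spectral power lies.

    A crude single-number proxy for image complexity: images dominated by
    fine detail concentrate power at higher spatial frequencies.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)      # distance from the DC bin
    order = np.argsort(radius.ravel())
    power = spectrum.ravel()[order]                # power sorted by frequency
    cumulative = np.cumsum(power)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2)
    return radius.ravel()[order][idx]

rng = np.random.default_rng(0)
smooth = np.ones((64, 64))             # flat image: all power at frequency 0
noisy = rng.standard_normal((64, 64))  # white noise: power spread widely
print(median_frequency(smooth) < median_frequency(noisy))  # True
```

Under this reading, a high median frequency flags a dataset as "high complexity", which the paper suggests should steer users toward lighter networks on full-resolution inputs rather than heavy networks on downsampled ones.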
21
Outlier Based Skimpy Regularization Fuzzy Clustering Algorithm for Diabetic Retinopathy Image Segmentation. Symmetry (Basel) 2022. [DOI: 10.3390/sym14122512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Blood vessels are harmed in diabetic retinopathy (DR), a condition that impairs vision. Using modern healthcare research and technology, artificial intelligence and processing units aid in the diagnosis of this syndrome and the study of diagnostic procedures. Correct assessment of DR severity requires the segmentation of lesions from fundus pictures; manual grading is highly difficult and time-consuming owing to the wide range of lesion morphologies, numbers, and sizes. For image segmentation, traditional fuzzy clustering techniques have two major drawbacks. First, clustering based on fuzzy memberships is susceptible to outliers. Second, because of the lack of local spatial information, these techniques often over-segment images. To address these issues, this study proposes an outlier-based skimpy regularization fuzzy clustering algorithm (OSR-FCA) for image segmentation. Clustering methods that use sparse fuzzy memberships can be improved by incorporating a Gaussian metric regularization into the objective function. The proposed study used the symmetry information contained in the image data to conduct fuzzy clustering while avoiding over-segmenting relevant data, resulting in a reduced proportion of noisy data and better clustering results. Classification was carried out by a convolutional neural network (CNN). Two publicly available datasets were used for validation with different metrics. The experimental results showed that the proposed segmentation technique achieved 97.16% accuracy and the classification technique achieved 97.26% accuracy on the MESSIDOR dataset.
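For context, a minimal NumPy sketch of the standard fuzzy c-means updates that OSR-FCA builds on; the skimpy (sparse) regularization and Gaussian metric term are deliberately not reproduced, and the 1-D data and deterministic initialization are illustrative assumptions:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
    """Classic fuzzy c-means on 1-D data (no outlier regularization).

    Alternates the two standard updates: memberships from distances to
    the centers, then centers as membership-weighted means.
    """
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9            # (n, c)
        # u[i, k] = 1 / sum_j (d[i, k] / d[i, j]) ** (2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        centers = (u ** m).T @ x / np.sum(u ** m, axis=0)
    return centers, u

x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
centers, u = fuzzy_cmeans(x)
print(np.sort(centers))  # two centers, one near each cluster
```

Because the memberships are soft, an outlier pulls every center a little; OSR-FCA's contribution is to regularize those memberships so such points carry less weight.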
22
A Novel Deep Learning-Based Mitosis Recognition Approach and Dataset for Uterine Leiomyosarcoma Histopathology. Cancers (Basel) 2022; 14:cancers14153785. [PMID: 35954449 PMCID: PMC9367529 DOI: 10.3390/cancers14153785] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Revised: 07/24/2022] [Accepted: 08/01/2022] [Indexed: 11/17/2022] Open
Abstract
Uterine leiomyosarcoma (ULMS) is the most common sarcoma of the uterus, with both high malignant potential and poor prognosis. Its diagnosis is sometimes challenging owing to its resemblance to leiomyoma, a benign smooth muscle neoplasm of the uterus. Pathologists diagnose and grade leiomyosarcoma based on three biomarkers (i.e., mitosis count, necrosis, and nuclear atypia). Among these, mitosis count is the most important and the most challenging. In general, pathologists use traditional manual counting for the detection and counting of mitoses, a procedure that is time-consuming, tedious, and subjective. To overcome these challenges, artificial intelligence (AI)-based methods have been developed to detect mitosis automatically. In this paper, we propose a new ULMS dataset and an AI-based approach for mitosis detection. We collected the dataset from a local medical facility in collaboration with highly trained pathologists. Preprocessing and annotation were performed using standard procedures, and a deep learning-based method was applied to provide baseline accuracies. The experimental results showed 0.7462 precision, 0.8981 recall, and a 0.8151 F1-score. For research and development, the code and dataset have been made publicly available.
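The reported F1-score is simply the harmonic mean of the reported precision and recall, which can be checked directly (the helper name is illustrative):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The paper's reported precision and recall reproduce its F1-score.
print(round(f1_score(0.7462, 0.8981), 4))  # 0.8151
```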
23
Alexopoulos P, Madu C, Wollstein G, Schuman JS. The Development and Clinical Application of Innovative Optical Ophthalmic Imaging Techniques. Front Med (Lausanne) 2022; 9:891369. [PMID: 35847772 PMCID: PMC9279625 DOI: 10.3389/fmed.2022.891369] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 05/23/2022] [Indexed: 11/22/2022] Open
Abstract
The field of ophthalmic imaging has grown substantially over recent years. Massive improvements in image processing and computer hardware have allowed the emergence of multiple imaging techniques of the eye that can transform patient care. The purpose of this review is to describe the most recent advances in eye imaging and explain how new technologies and imaging methods can be utilized in a clinical setting. The introduction of optical coherence tomography (OCT) was a revolution in eye imaging and has since become the standard of care for a plethora of conditions. Its most recent iterations, OCT angiography and visible-light OCT, as well as imaging modalities such as fluorescence lifetime imaging ophthalmoscopy, allow a more thorough evaluation of patients and provide additional information on disease processes. Toward that goal, the application of adaptive optics (AO) and full-field scanning to a variety of eye imaging techniques has further allowed the histologic study of single cells in the retina and anterior segment. Toward the goal of remote eye care and more accessible eye imaging, methods such as handheld OCT devices and imaging through smartphones have emerged. Finally, incorporating artificial intelligence (AI) in eye imaging has the potential to become a new milestone for the field while also contributing to the social aspects of eye care.
Affiliation(s)
- Palaiologos Alexopoulos
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Chisom Madu
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Joel S. Schuman
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
24
Ji Y, Chen N, Liu S, Yan Z, Qian H, Zhu S, Zhang J, Wang M, Jiang Q, Yang W. Research Progress of Artificial Intelligence Image Analysis in Systemic Disease-Related Ophthalmopathy. Dis Markers 2022; 2022:3406890. [PMID: 35783011 PMCID: PMC9249504 DOI: 10.1155/2022/3406890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 06/09/2022] [Indexed: 11/28/2022]
Abstract
The eye is one of the most important organs of the human body. Eye diseases and systemic diseases are closely related and influence each other. Numerous systemic diseases lead to characteristic clinical manifestations and complications in the eyes; typical examples include diabetic retinopathy, hypertensive retinopathy, thyroid-associated ophthalmopathy, neuromyelitis optica, and Behcet's disease. Systemic disease-related ophthalmopathy is usually chronic, and the analysis of imaging markers is helpful for a comprehensive diagnosis of these diseases. Recently, artificial intelligence (AI) technology based on deep learning has developed rapidly, leading to numerous achievements and arousing widespread concern. AI technology has made significant progress in research on imaging markers of systemic disease-related ophthalmopathy; however, many limitations and challenges remain. This article reviews the research achievements, limitations, and future prospects of AI image-analysis technology in systemic disease-related ophthalmopathy.
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- Nan Chen
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- Sha Liu
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- Zhipeng Yan
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- Hui Qian
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Jie Zhang
- Advanced Ophthalmology Laboratory (AOL), Robotrak Technologies, Nanjing, China
- Minli Wang
- First Affiliated Hospital of Huzhou University, Huzhou, China
- Qin Jiang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
25
Artificial Intelligence-Based Tissue Phenotyping in Colorectal Cancer Histopathology Using Visual and Semantic Features Aggregation. Mathematics 2022. [DOI: 10.3390/math10111909] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Tissue phenotyping of the tumor microenvironment has a decisive role in the digital profiling of intra-tumor heterogeneity, epigenetics, and cancer progression. Most existing methods for tissue phenotyping rely on time-consuming and error-prone manual procedures. Recently, with the advent of advanced technologies, these procedures have been automated using artificial intelligence techniques. In this paper, a novel deep histology heterogeneous feature aggregation network (HHFA-Net) is proposed based on visual and semantic information fusion for the detection of tissue phenotypes in colorectal cancer (CRC). We adopted and tested various data augmentation techniques to avoid computationally expensive stain-normalization procedures and to handle limited and imbalanced data. Three publicly available datasets are used in the experiments: CRC tissue phenotyping (CRC-TP), CRC histology (CRCH), and colon cancer histology (CCH). The proposed HHFA-Net achieves higher accuracies than state-of-the-art methods for tissue phenotyping in CRC histopathology images.
26
Haider A, Arsalan M, Lee YW, Park KR. Deep features aggregation-based joint segmentation of cytoplasm and nuclei in white blood cells. IEEE J Biomed Health Inform 2022; 26:3685-3696. [PMID: 35635825 DOI: 10.1109/jbhi.2022.3178765] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
White blood cells (WBCs), also known as leukocytes, are a valuable part of the blood and immune system. Typically, pathologists use a microscope for the manual inspection of blood smears, which is a time-consuming, error-prone, and labor-intensive procedure. To address these issues, we present two novel shallow networks: a leukocyte deep segmentation network (LDS-Net) and a leukocyte deep aggregation segmentation network (LDAS-Net) for the joint segmentation of cytoplasm and nuclei in WBC images. LDS-Net is a shallow architecture with three downsampling stages and seven convolution layers. LDAS-Net is an extended version of LDS-Net that utilizes a novel pool-less low-level information transfer bridge to transfer low-level information to the deep layers of the network. This information is aggregated with deep features in a dense feature concatenation block to achieve accurate joint segmentation of cytoplasm and nuclei. We evaluated the developed architectures on four publicly available WBC datasets. For cytoplasm segmentation, the proposed method achieved Dice coefficients of 98.97%, 99.0%, 96.05%, and 98.79% on Datasets 1, 2, 3, and 4, respectively. For nuclei segmentation, Dice coefficients of 96.35% and 98.09% were achieved for Datasets 1 and 2, respectively. The proposed method outperforms state-of-the-art methods with superior computational efficiency, requiring only 6.5 million trainable parameters.
27
Segmenting Retinal Vessels Using a Shallow Segmentation Network to Aid Ophthalmic Analysis. Mathematics 2022. [DOI: 10.3390/math10091536] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Retinal blood vessels possess a complex structure in the retina and are considered an important biomarker for several retinal diseases. Ophthalmic diseases cause specific changes in the retinal vasculature; for example, diabetic retinopathy causes the retinal vessels to swell and, depending on disease severity, fluid or blood can leak. Similarly, hypertensive retinopathy changes the retinal vasculature through the thinning of these vessels. Central retinal vein occlusion (CRVO) occurs when the main vein that drains blood from the retina closes completely or partially, with symptoms of blurred vision and similar eye problems. Considering the importance of the retinal vasculature as an ophthalmic disease biomarker, ophthalmologists manually analyze retinal vascular changes. Manual analysis is a tedious task that requires constant observation to detect changes. Deep learning-based methods can ease this burden by learning from annotations provided by an expert ophthalmologist. However, current deep learning-based methods are relatively inaccurate, computationally expensive, complex, and require image preprocessing before final detection. Moreover, existing methods are unable to provide a good true positive rate (sensitivity), i.e., the fraction of vessel pixels the model correctly predicts. Therefore, this study presents the vessel segmentation ultra-lite network (VSUL-Net) to accurately extract the retinal vasculature from the background. The proposed VSUL-Net comprises only 0.37 million trainable parameters and uses the original image as input without preprocessing. VSUL-Net uses a retention block that maintains a larger feature-map size and transfers low-level spatial information, yielding better sensitivity without expensive preprocessing schemes. The proposed method was tested on three publicly available datasets for retinal vasculature segmentation: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and Child Heart and Health Study in England (CHASE-DB1). The experimental results demonstrated that VSUL-Net provides robust segmentation of the retinal vasculature, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) values of 83.80%, 98.21%, 96.95%, and 98.54% for DRIVE; 81.73%, 98.35%, 97.17%, and 98.69% for CHASE-DB1; and 86.64%, 98.13%, 97.27%, and 99.01% for STARE, respectively. The proposed method provides an accurate segmentation mask for deep ophthalmic analysis.
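The Sen/Spe/Acc figures above come from pixel-wise confusion counts. A small NumPy sketch of how such metrics are computed from a predicted and a ground-truth vessel mask (the toy arrays and function name are illustrative):

```python
import numpy as np

def seg_metrics(pred, mask):
    """Pixel-wise sensitivity, specificity, and accuracy for binary masks."""
    tp = np.sum((pred == 1) & (mask == 1))
    tn = np.sum((pred == 0) & (mask == 0))
    fp = np.sum((pred == 1) & (mask == 0))
    fn = np.sum((pred == 0) & (mask == 1))
    sen = tp / (tp + fn)             # fraction of vessel pixels recovered
    spe = tn / (tn + fp)             # fraction of background kept clean
    acc = (tp + tn) / (tp + tn + fp + fn)
    return sen, spe, acc

mask = np.array([1, 1, 1, 0, 0, 0, 0, 0])
pred = np.array([1, 1, 0, 0, 0, 0, 0, 1])
sen, spe, acc = seg_metrics(pred, mask)
print(sen, spe, acc)  # roughly 0.667, 0.8, 0.75
```

Because vessel pixels are rare, accuracy alone can look high even for a poor segmentation; sensitivity is the metric that exposes missed vasculature, which is why the abstract emphasizes it.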
28
Panda SK, Cheong H, Tun TA, Chuangsuwanich T, Kadziauskiene A, Senthil V, Krishnadas R, Buist ML, Perera S, Cheng CY, Aung T, Thiery AH, Girard MJ. The three-dimensional structural configuration of the central retinal vessel trunk and branches as a glaucoma biomarker. Am J Ophthalmol 2022; 240:205-216. [PMID: 35247336 DOI: 10.1016/j.ajo.2022.02.020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 02/16/2022] [Accepted: 02/16/2022] [Indexed: 11/01/2022]
Abstract
PURPOSE To assess whether the 3-dimensional (3D) structural configuration of the central retinal vessel trunk and its branches (CRVT&B) could be used as a diagnostic marker for glaucoma. DESIGN Retrospective, deep-learning approach diagnosis study. METHODS We trained a deep learning network to automatically segment the CRVT&B from the B-scans of the optical coherence tomography (OCT) volume of the optic nerve head. Subsequently, 2 different approaches were used for glaucoma diagnosis using the structural configuration of the CRVT&B as extracted from the OCT volumes. In the first approach, we aimed to provide a diagnosis using only 3D convolutional neural networks and the 3D structure of the CRVT&B. For the second approach, we projected the 3D structure of the CRVT&B orthographically onto sagittal, frontal, and transverse planes to obtain 3 two-dimensional (2D) images, and then a 2D convolutional neural network was used for diagnosis. The segmentation accuracy was evaluated using the Dice coefficient, whereas the diagnostic accuracy was assessed using the area under the receiver operating characteristic curves (AUCs). The diagnostic performance of the CRVT&B was also compared with that of retinal nerve fiber layer (RNFL) thickness (calculated in the same cohorts). RESULTS Our segmentation network was able to efficiently segment retinal blood vessels from OCT scans. On a test set, we achieved a Dice coefficient of 0.81 ± 0.07. The 3D and 2D diagnostic networks were able to differentiate glaucoma from nonglaucoma subjects with accuracies of 82.7% and 83.3%, respectively. The corresponding AUCs for the CRVT&B were 0.89 and 0.90, higher than those obtained with RNFL thickness alone (AUCs ranging from 0.74 to 0.80). CONCLUSIONS Our work demonstrated that the diagnostic power of the CRVT&B is superior to that of a gold-standard glaucoma parameter, that is, RNFL thickness. 
Our work also suggested that the major retinal blood vessels form a "skeleton"-the configuration of which may be representative of major optic nerve head structural changes as typically observed with the development and progression of glaucoma.
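As a concrete illustration of the Dice coefficient used above to score the segmentation network, here is a minimal sketch for flat binary masks; the function name and toy masks are illustrative assumptions, not from the study.

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) for equal-length binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:        # both masks empty: define Dice as 1.0
        return 1.0
    return 2.0 * intersection / total

pred  = [1, 1, 0, 0, 1, 0]   # predicted vessel mask (flattened)
truth = [1, 0, 0, 0, 1, 1]   # ground-truth mask
score = dice_coefficient(pred, truth)   # 2*2 / (3+3) ≈ 0.667
```

In practice the masks would be full B-scan-sized arrays; the formula is unchanged.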
Collapse
|
29
|
Arsalan M, Haider A, Choi J, Park KR. Detecting Blastocyst Components by Artificial Intelligence for Human Embryological Analysis to Improve Success Rate of In Vitro Fertilization. J Pers Med 2022; 12:jpm12020124. [PMID: 35207617 PMCID: PMC8877842 DOI: 10.3390/jpm12020124] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 01/06/2022] [Accepted: 01/13/2022] [Indexed: 01/06/2023] Open
Abstract
Morphological attributes of human blastocyst components and their characteristics are highly correlated with the success rate of in vitro fertilization (IVF). Blastocyst component analysis aims to choose the most viable embryos to improve the success rate of IVF. The embryologist evaluates blastocyst viability by manual microscopic assessment of its components, such as zona pellucida (ZP), trophectoderm (TE), blastocoel (BL), and inner cell mass (ICM). With the success of deep learning in the medical diagnosis domain, semantic segmentation has the potential to detect crucial components of human blastocysts for computerized analysis. In this study, a sprint semantic segmentation network (SSS-Net) is proposed to accurately detect blastocyst components for embryological analysis. The proposed method is based on a fully convolutional semantic segmentation scheme that provides the pixel-wise classification of important blastocyst components, which helps to automatically check the morphologies of these elements. The proposed SSS-Net uses the sprint convolutional block (SCB), which uses asymmetric kernel convolutions in combination with depth-wise separable convolutions to reduce the overall cost of the network. SSS-Net is a shallow architecture with dense feature aggregation, which helps in better segmentation. The proposed SSS-Net consumes a smaller number of trainable parameters (4.04 million) compared to state-of-the-art methods. The SSS-Net was evaluated using a publicly available human blastocyst image dataset for component segmentation. The experimental results confirm that our proposal provides promising segmentation performance, with a Jaccard Index of 82.88%, 77.40%, 88.39%, 84.94%, and 96.03% for ZP, TE, BL, ICM, and background, respectively, with residual connectivity. It also provides a Jaccard Index of 84.51%, 78.15%, 88.68%, 84.50%, and 95.82% for ZP, TE, BL, ICM, and background, respectively, with dense connectivity.
The proposed SSS-Net provides a mean Jaccard Index (Mean JI) of 85.93% and 86.34% with residual and dense connectivity, respectively; this demonstrates effective segmentation of blastocyst components for embryological analysis.
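The per-class Jaccard Index and the mean JI reported above can be sketched for small integer label maps as follows; the toy labels and class IDs are illustrative assumptions, not the study's data.

```python
def jaccard_index(pred, truth, cls):
    """Per-class Jaccard Index (intersection over union) for label maps."""
    a = [p == cls for p in pred]
    b = [t == cls for t in truth]
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

pred  = [0, 1, 1, 2, 2, 0]   # predicted label per pixel (flattened)
truth = [0, 1, 2, 2, 2, 0]   # ground-truth label per pixel
classes = [0, 1, 2]
per_class = [jaccard_index(pred, truth, c) for c in classes]
mean_ji = sum(per_class) / len(per_class)   # mean Jaccard Index
```

The study's Mean JI is exactly this average taken over its five component classes.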
Collapse
|
30
|
Kumar Y, Koul A, Singla R, Ijaz MF. Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2022; 14:8459-8486. [PMID: 35039756 PMCID: PMC8754556 DOI: 10.1007/s12652-021-03612-z] [Citation(s) in RCA: 201] [Impact Index Per Article: 67.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 11/18/2021] [Indexed: 05/03/2023]
Abstract
Artificial intelligence can assist providers across a variety of patient-care tasks and intelligent health systems. Artificial intelligence techniques ranging from machine learning to deep learning are prevalent in healthcare for disease diagnosis, drug discovery, and patient risk identification. Numerous medical data sources are required to accurately diagnose diseases using artificial intelligence techniques, such as ultrasound, magnetic resonance imaging, mammography, genomics, computed tomography scans, etc. Furthermore, artificial intelligence has enhanced the infirmary experience and sped up preparing patients to continue their rehabilitation at home. This article presents a comprehensive survey of artificial intelligence techniques for diagnosing numerous diseases, including Alzheimer's disease, cancer, diabetes, chronic heart disease, tuberculosis, stroke and cerebrovascular disease, hypertension, and skin and liver diseases. We conducted an extensive survey covering the medical imaging datasets used and their feature extraction and classification processes for prediction. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to select articles published up to October 2020 on the Web of Science, Scopus, Google Scholar, PubMed, Excerpta Medica Database, and Psychology Information for early prediction of distinct kinds of diseases using artificial intelligence-based techniques. Based on the study of different articles on disease diagnosis, the results are also compared using various quality parameters such as prediction rate, accuracy, sensitivity, specificity, area under the curve, precision, recall, and F1-score.
Collapse
Affiliation(s)
- Yogesh Kumar
- Department of Computer Engineering, Indus Institute of Technology and Engineering, Indus University, Ahmedabad, 382115 India
| | | | - Ruchi Singla
- Department of Research, Innovations, Sponsored Projects and Entrepreneurship, CGC Landran, Mohali, India
| | - Muhammad Fazal Ijaz
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, 05006 South Korea
| |
Collapse
|
31
|
DAVS-NET: Dense Aggregation Vessel Segmentation Network for retinal vasculature detection in fundus images. PLoS One 2022; 16:e0261698. [PMID: 34972109 PMCID: PMC8719769 DOI: 10.1371/journal.pone.0261698] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 12/07/2021] [Indexed: 12/26/2022] Open
Abstract
In this era, deep learning-based medical image analysis has become a reliable tool for assisting medical practitioners in diagnosing retinal diseases such as hypertensive retinopathy, diabetic retinopathy (DR), arteriosclerosis, glaucoma, and macular edema. Among these retinal diseases, DR can lead to retinal detachment in diabetic patients and can cause swelling of the retinal blood vessels or even the creation of new vessels. This neovascularization and swelling can be analyzed as biomarkers for DR screening and analysis. Deep learning-based semantic segmentation of these vessels can be an effective tool to detect changes in retinal vasculature for diagnostic purposes. This segmentation task is challenging because of low-quality retinal images, differing image acquisition conditions, and intensity variations. Existing retinal blood vessel segmentation methods require a large number of trainable parameters for training their networks. This paper introduces a novel Dense Aggregation Vessel Segmentation Network (DAVS-Net), which achieves high segmentation performance with only a few trainable parameters. For faster convergence, this network uses an encoder-decoder framework in which edge information is transferred from the first layers of the encoder to the last layer of the decoder. Performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1, and STARE. The proposed method achieved state-of-the-art segmentation accuracy using a small number of trainable parameters.
Collapse
|
32
|
Arsalan M, Haider A, Choi J, Park KR. Diabetic and Hypertensive Retinopathy Screening in Fundus Images Using Artificially Intelligent Shallow Architectures. J Pers Med 2021; 12:jpm12010007. [PMID: 35055322 PMCID: PMC8777982 DOI: 10.3390/jpm12010007] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 12/20/2021] [Accepted: 12/20/2021] [Indexed: 12/25/2022] Open
Abstract
Retinal blood vessels are considered valuable biomarkers for the detection of diabetic retinopathy, hypertensive retinopathy, and other retinal disorders. Ophthalmologists analyze retinal vasculature by manual segmentation, which is a tedious task. Numerous studies have focused on automatic retinal vasculature segmentation using different methods for ophthalmic disease analysis. However, most of these methods are computationally expensive and lack robustness. This paper proposes two new shallow deep learning architectures: dual-stream fusion network (DSF-Net) and dual-stream aggregation network (DSA-Net) to accurately detect retinal vasculature. The proposed method uses semantic segmentation in raw color fundus images for the screening of diabetic and hypertensive retinopathies. The proposed method's performance is assessed using three publicly available fundus image datasets: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and Child Heart and Health Study in England database (CHASE-DB1). The experimental results revealed that the proposed method provided superior segmentation performance, with accuracy (Acc), sensitivity (SE), specificity (SP), and area under the curve (AUC) of 96.93%, 82.68%, 98.30%, and 98.42% for DRIVE; 97.25%, 82.22%, 98.38%, and 98.15% for CHASE-DB1; and 97.00%, 86.07%, 98.00%, and 98.65% for STARE, respectively. The experimental results also show that the proposed DSA-Net provides higher SE compared to existing approaches. This means that the proposed method detected the minor vessels and produced the fewest false negatives, which is extremely important for diagnosis. The proposed method provides an automatic and accurate segmentation mask that can be used to highlight the vessel pixels.
This detected vasculature can be utilized to compute the ratio between the vessel and the non-vessel pixels to distinguish between diabetic and hypertensive retinopathies, and its morphology can be analyzed for related retinal disorders.
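The vessel-to-non-vessel pixel ratio mentioned above follows directly from a binary segmentation mask. A minimal sketch; the tiny mask is illustrative, as a real mask would be a full fundus-sized array.

```python
def vessel_ratio(mask):
    """Ratio of vessel pixels (1) to non-vessel pixels (0) in a binary mask."""
    vessel = sum(sum(row) for row in mask)
    total = sum(len(row) for row in mask)
    non_vessel = total - vessel
    return vessel / non_vessel if non_vessel else float("inf")

mask = [
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
]
ratio = vessel_ratio(mask)   # 4 vessel pixels / 8 background pixels = 0.5
```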
Collapse
|
33
|
Dashdondov K, Kim MH. Mahalanobis Distance Based Multivariate Outlier Detection to Improve Performance of Hypertension Prediction. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10663-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
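Mahalanobis-distance-based multivariate outlier detection, as used in the study above, can be sketched in two dimensions: points whose squared distance from a reference sample exceeds a chi-square cutoff are flagged. All data, names, and the cutoff here are illustrative assumptions, not the study's.

```python
def mahalanobis_sq(point, data):
    """Squared Mahalanobis distance of `point` from the 2-D sample `data`."""
    xs = [p[0] for p in data]
    ys = [p[1] for p in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    # sample covariance matrix entries
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    det = sxx * syy - sxy * sxy
    dx, dy = point[0] - mx, point[1] - my
    # d^2 = [dx dy] S^{-1} [dx dy]^T with the 2x2 inverse written out
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

# illustrative reference sample, e.g. (systolic, diastolic) readings
reference = [(120, 80), (125, 82), (118, 79), (130, 85),
             (122, 81), (124, 83), (119, 78), (127, 84)]
candidates = [(123, 81), (180, 120)]
cutoff = 5.99   # ~95% quantile of chi-square with 2 degrees of freedom
flagged = [p for p in candidates if mahalanobis_sq(p, reference) > cutoff]
```

Scoring candidates against a separate reference sample avoids the masking effect that arises when an extreme point inflates its own covariance estimate.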
|
34
|
Abbas Q, Qureshi I, Ibrahim MEA. An Automatic Detection and Classification System of Five Stages for Hypertensive Retinopathy Using Semantic and Instance Segmentation in DenseNet Architecture. SENSORS (BASEL, SWITZERLAND) 2021; 21:6936. [PMID: 34696149 PMCID: PMC8538561 DOI: 10.3390/s21206936] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Revised: 10/13/2021] [Accepted: 10/15/2021] [Indexed: 12/23/2022]
Abstract
The stage and duration of hypertension are connected to the occurrence of the eye disease hypertensive retinopathy (HR). Currently, only a few computerized systems have been developed to recognize HR, and these use only two stages. It is difficult to define specialized features to recognize five grades of HR. In addition, deep features have been used in the past, but the classification accuracy has not been satisfactory. In this research, a new hypertensive retinopathy (HYPER-RETINO) framework is developed to grade HR into five grades. The HYPER-RETINO system is implemented based on pre-trained HR-related lesion models. To develop this HYPER-RETINO system, several steps are implemented, such as preprocessing, detection of HR-related lesions by semantic and instance-based segmentation, and a DenseNet architecture to classify the stages of HR. Overall, the HYPER-RETINO system determined the local regions within input retinal fundus images to recognize five grades of HR. On average, a 10-fold cross-validation test obtained sensitivity (SE) of 90.5%, specificity (SP) of 91.5%, accuracy (ACC) of 92.6%, precision (PR) of 91.7%, Matthews correlation coefficient (MCC) of 61%, F1-score of 92% and area-under-the-curve (AUC) of 0.915 on 1400 HR images. Thus, the applicability of the HYPER-RETINO method to reliably diagnose stages of HR is verified by experimental findings.
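The study above reports SE, SP, ACC, PR, MCC, and F1; in the binary case all of these follow from the four confusion-matrix counts. A minimal sketch with illustrative counts (the study's own evaluation is over five grades):

```python
import math

def metrics(tp, fp, fn, tn):
    """SE, SP, ACC, PR, F1 and MCC from binary confusion-matrix counts."""
    se = tp / (tp + fn)                      # sensitivity / recall
    sp = tn / (tn + fp)                      # specificity
    acc = (tp + tn) / (tp + fp + fn + tn)    # accuracy
    pr = tp / (tp + fp)                      # precision
    f1 = 2 * pr * se / (pr + se)             # harmonic mean of PR and SE
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return se, sp, acc, pr, f1, mcc

results = metrics(90, 10, 10, 90)   # SE, SP, ACC, PR, F1 all 0.9; MCC 0.8
```

Note that MCC stays informative under class imbalance, which is why it is often reported alongside accuracy.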
Collapse
Affiliation(s)
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia;
| | - Imran Qureshi
- Department of Computer Software Engineering, Military College of Signals, National University of Sciences and Technology (MCS-NUST), Islamabad 44000, Pakistan;
| | - Mostafa E. A. Ibrahim
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia;
- Department of Electrical Engineering, Benha Faculty of Engineering, Benha University, Qalubia, Benha 13518, Egypt
| |
Collapse
|
35
|
Valizadeh A, Jafarzadeh Ghoushchi S, Ranjbarzadeh R, Pourasad Y. Presentation of a Segmentation Method for a Diabetic Retinopathy Patient's Fundus Region Detection Using a Convolutional Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:7714351. [PMID: 34354746 PMCID: PMC8331281 DOI: 10.1155/2021/7714351] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Revised: 06/30/2021] [Accepted: 07/18/2021] [Indexed: 01/16/2023]
Abstract
Diabetic retinopathy exhibits a characteristic local distribution that involves early-stage risk factors and can forecast the evolution of the illness and of morphological lesions related to abnormal retinal blood flow. Regional variations in retinal blood flow and modulation of retinal capillary width in the macular area and the retinal environment are also linked to the course of diabetic retinopathy. Although diabetic retinopathy is frequent nowadays, it is hard to prevent. An ophthalmologist generally determines the severity of the retinopathy by directly examining color photographs and visually inspecting the fundus. This is an expensive process because of the vast number of diabetic patients around the globe. We used the IDRiD dataset, which contains both typical diabetic retinopathic lesions and normal retinal structures. We provide a CNN architecture for the detection of the target region in the fundus imagery of 80 patients. The results demonstrate that the described approach can detect nearly 83.84% of target locations. This result can potentially be utilized for patient monitoring and management.
Collapse
Affiliation(s)
- Amin Valizadeh
- Department of Mechanical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
| | - Saeid Jafarzadeh Ghoushchi
- Department of Industrial Engineering, Urmia University of Technology (UUT), P.O. Box 57166-419, Urmia, Iran
| | - Ramin Ranjbarzadeh
- Department of Telecommunications Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
| | - Yaghoub Pourasad
- Department of Electrical Engineering, Urmia University of Technology (UUT), P.O. Box 57166-419, Urmia, Iran
| |
Collapse
|
36
|
Avilés-Rodríguez GJ, Nieto-Hipólito JI, Cosío-León MDLÁ, Romo-Cárdenas GS, Sánchez-López JDD, Radilla-Chávez P, Vázquez-Briseño M. Topological Data Analysis for Eye Fundus Image Quality Assessment. Diagnostics (Basel) 2021; 11:1322. [PMID: 34441257 PMCID: PMC8394537 DOI: 10.3390/diagnostics11081322] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 11/29/2022] Open
Abstract
The objective of this work is to perform image quality assessment (IQA) of eye fundus images in the context of digital fundoscopy with topological data analysis (TDA) and machine learning methods. Eye health care remains inaccessible for a large portion of the global population. Digital tools that automate the eye exam could be used to address this issue. IQA is a fundamental step in digital fundoscopy for clinical applications; it is one of the first steps in the preprocessing stages of computer-aided diagnosis (CAD) systems using eye fundus images. Images from the EyePACS dataset were used, and quality labels from previous works in the literature were selected. Cubical complexes were used to represent the images; the grayscale version was then used to calculate persistent homology on the complex, represented with persistence diagrams. Then, 30 vectorized topological descriptors were calculated from each image and used as input to a classification algorithm. Six different algorithms were tested for this study (SVM, decision tree, k-NN, random forest, logistic regression (LoGit), MLP). LoGit was selected and used for the classification of all images, given its low computational cost. Performance results on the validation subset showed a global accuracy of 0.932, precision of 0.912 for label "quality" and 0.952 for label "no quality", recall of 0.932 for label "quality" and 0.912 for label "no quality", AUC of 0.980, F1 score of 0.932, and a Matthews correlation coefficient of 0.864. This work offers evidence for the use of topological methods in the quality assessment of eye fundus images, where a relatively small vector of characteristics (30 in this case) can enclose enough information for an algorithm to yield classification results useful in the clinical settings of a digital fundoscopy pipeline for CAD.
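To illustrate the kind of vectorized topological descriptors the study feeds to a classifier, here is a minimal sketch that turns a persistence diagram (a list of (birth, death) pairs) into a small fixed-length feature vector. The specific statistics and toy diagram are illustrative assumptions, not the paper's 30 descriptors.

```python
import math

def persistence_features(diagram):
    """A few summary statistics of a persistence diagram."""
    lifetimes = [death - birth for birth, death in diagram]
    total = sum(lifetimes)
    # persistent entropy: Shannon entropy of the normalized lifetimes
    entropy = -sum((l / total) * math.log(l / total)
                   for l in lifetimes if l > 0)
    return {
        "count": len(lifetimes),
        "max_lifetime": max(lifetimes),
        "mean_lifetime": total / len(lifetimes),
        "entropy": entropy,
    }

diagram = [(0.0, 0.9), (0.1, 0.4), (0.2, 0.3)]   # (birth, death) pairs
feats = persistence_features(diagram)
```

In a full pipeline, the diagram itself would come from a cubical-complex persistence library, and the resulting vectors would be stacked as rows of the classifier's input matrix.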
Collapse
Affiliation(s)
- Gener José Avilés-Rodríguez
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Juan Iván Nieto-Hipólito
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - María de los Ángeles Cosío-León
- Dirección de Investigación, Innovación y Posgrado, Universidad Politécnica de Pachuca, Carretera Ciudad Sahagún-Pachuca Km. 20, Ex-Hacienda de Santa Bárbara, Hidalgo 43830, Mexico;
| | - Gerardo Salvador Romo-Cárdenas
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Juan de Dios Sánchez-López
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Patricia Radilla-Chávez
- Escuela de Ciencias de la Salud, Universidad Autónoma de Baja California, Carretera Transpeninsular S/N, Valle Dorado, Ensenada 22890, Mexico;
| | - Mabel Vázquez-Briseño
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| |
Collapse
|
37
|
Sultan H, Owais M, Park C, Mahmood T, Haider A, Park KR. Artificial Intelligence-Based Recognition of Different Types of Shoulder Implants in X-ray Scans Based on Dense Residual Ensemble-Network for Personalized Medicine. J Pers Med 2021; 11:482. [PMID: 34072079 PMCID: PMC8229063 DOI: 10.3390/jpm11060482] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 05/15/2021] [Accepted: 05/24/2021] [Indexed: 01/10/2023] Open
Abstract
Re-operations and revisions are often performed in patients who have undergone total shoulder arthroplasty (TSA) and reverse total shoulder arthroplasty (RTSA). This necessitates an accurate recognition of the implant model and manufacturer to set the correct apparatus and procedure according to the patient's anatomy, as personalized medicine. Owing to the unavailability or ambiguity of a patient's medical data, expert surgeons identify the implants through a visual comparison of X-ray images. Errors in this step cause morbidity, additional financial burden, and wasted time. Despite significant advancements in pattern recognition and deep learning in the medical field, extremely limited research has been conducted on classifying shoulder implants. To overcome these problems, we propose a robust deep learning-based framework composed of an ensemble of convolutional neural networks (CNNs) to classify shoulder implants in X-ray images of different patients. Through our rotational invariant augmentation, the size of the training dataset is increased 36-fold. The modified ResNet and DenseNet are then combined deeply to form a dense residual ensemble-network (DRE-Net). To evaluate DRE-Net, experiments were executed with 10-fold cross-validation on the openly available shoulder implant X-ray dataset. The experimental results showed that DRE-Net achieved an accuracy, F1-score, precision, and recall of 85.92%, 84.69%, 85.33%, and 84.11%, respectively, which were higher than those of the state-of-the-art methods. Moreover, we confirmed the generalization capability of our network by testing it in an open-world configuration, as well as the effectiveness of the rotational invariant augmentation.
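The 36-fold rotational augmentation described above can be sketched as pairing every training image with evenly spaced rotation angles. The image names are placeholders, and an actual image-rotation routine would replace the simple (image, angle) pairing; this only illustrates the bookkeeping.

```python
def augmentation_angles(fold=36):
    """Evenly spaced rotation angles covering 360 degrees (10° steps for fold=36)."""
    step = 360 // fold
    return list(range(0, 360, step))

def augment(images, fold=36):
    """Pair every image with every rotation angle, growing the set `fold`-fold."""
    return [(img, angle) for img in images
            for angle in augmentation_angles(fold)]

dataset = ["xray_001", "xray_002"]       # placeholder image identifiers
augmented = augment(dataset)             # 2 images * 36 angles = 72 samples
```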
Collapse
Affiliation(s)
| | | | | | | | | | - Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea; (H.S.); (M.O.); (C.P.); (T.M.); (A.H.)
| |
Collapse
|
38
|
Tseng WH, Lee MS, Wang CC, Chen YW, Hsiao TY, Yang TL. Objective evaluation of biomaterial effects after injection laryngoplasty - Introduction of artificial intelligence-based ultrasonic image analysis. Clin Otolaryngol 2021; 46:1028-1036. [PMID: 33787003 DOI: 10.1111/coa.13775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Revised: 03/06/2021] [Accepted: 03/14/2021] [Indexed: 11/28/2022]
Abstract
OBJECTIVE Hyaluronic acid (HA) can be degraded over time. However, persistence of the effects after injection laryngoplasty (IL) for unilateral vocal fold paralysis (UVFP), longer than expected from HA longevity, has been observed. The purpose of the study was to develop a methodology with clinical utility for objective evaluation of the temporal change in HA volume after IL using artificial intelligence (AI)-based ultrasonic assessment. DESIGN, SETTING AND PARTICIPANTS Imaging phantoms simulating injected HA were built in different volumes for designing the algorithm for machine learning. Subsequently, five adult patients who had undergone IL with HA for UVFP were recruited for clinical evaluation. MAIN OUTCOME MEASURES Estimated volumes were evaluated for injected HA by the automatic algorithm as well as voice outcomes at 2 weeks, and 2 and 6 months after IL. RESULTS On imaging phantoms, contours on each frame were described well by the algorithm and the volume could be estimated accordingly. The error rates were 0%-9.2%. Moreover, the resultant contours of the HA area were captured in detail for all participants. The estimated volume decreased to an average of 65.76% remaining at 2 months and to a minimal amount at 6 months while glottal closure remained improved. CONCLUSION The volume change of the injected HA over time for an individual was estimated non-invasively by AI-based ultrasonic image analysis. The prolonged effect after treatment, longer than HA longevity, was demonstrated objectively for the first time. The information is beneficial to achieve optimal cost-effectiveness of IL and improve the life quality of the patients.
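One simple way a per-frame contour segmentation can yield a volume estimate, as the algorithm above does, is to sum each frame's contour area times the inter-frame spacing (a Riemann-sum approximation). The areas, spacing, and follow-up comparison below are illustrative assumptions, not the study's measurements.

```python
def volume_from_slices(areas_mm2, spacing_mm):
    """Approximate volume (mm^3) from per-frame cross-sectional areas."""
    return sum(areas_mm2) * spacing_mm

areas_2wk = [0.0, 4.2, 7.9, 8.4, 6.1, 2.0, 0.0]   # mm^2 per ultrasound frame
spacing = 0.5                                      # mm between frames
vol_2wk = volume_from_slices(areas_2wk, spacing)   # 14.3 mm^3

areas_2mo = [0.0, 2.8, 5.2, 5.5, 4.0, 1.3, 0.0]    # smaller contours later on
vol_2mo = volume_from_slices(areas_2mo, spacing)
percent_remaining = 100 * vol_2mo / vol_2wk        # temporal change of HA volume
```

Comparing the estimate across visits gives the "percent remaining" trajectory the study tracks over 6 months.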
Collapse
Affiliation(s)
- Wen-Hsuan Tseng
- Department of Otolaryngology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan.,Graduate Institute of Clinical Medicine, National Taiwan University College of Medicine, Taipei, Taiwan
| | - Ming-Sui Lee
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Che-Chai Wang
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Yong-Wei Chen
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Tzu-Yu Hsiao
- Department of Otolaryngology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Tsung-Lin Yang
- Department of Otolaryngology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan.,Graduate Institute of Clinical Medicine, National Taiwan University College of Medicine, Taipei, Taiwan.,Research Center for Developmental Biology and Regenerative Medicine, National Taiwan University, Taipei, Taiwan.,Department of Medical Research, National Taiwan University Hospital, Taipei, Taiwan
| |
Collapse
|
39
|
Benet D, Pellicer-Valero OJ. Artificial Intelligence: the unstoppable revolution in ophthalmology. Surv Ophthalmol 2021; 67:252-270. [PMID: 33741420 DOI: 10.1016/j.survophthal.2021.03.003] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Revised: 01/31/2021] [Accepted: 03/08/2021] [Indexed: 12/18/2022]
Abstract
Artificial Intelligence (AI) is an unstoppable force that is starting to permeate all aspects of our society as part of the revolution being brought into our lives (and into medicine) by the digital era, and accelerated by the current COVID-19 pandemic. As the population ages and developing countries move forward, AI-based systems may be a key asset in streamlining the screening, staging, and treatment planning of sight-threatening eye conditions, offloading the most tedious tasks from the experts, allowing for a greater population coverage, and bringing the best possible care to every patient. This paper presents a review of the state of the art of AI in the field of ophthalmology, focusing on the strengths and weaknesses of current systems, and defining the vision that will enable us to advance scientifically in this digital era. It starts with a thorough yet accessible introduction to the algorithms underlying all modern AI applications. Then, a critical review of the main AI applications in ophthalmology is presented, including Diabetic Retinopathy, Age-Related Macular Degeneration, Retinopathy of Prematurity, Glaucoma, and other AI-related topics such as image enhancement. The review finishes with a brief discussion on the opportunities and challenges that the future of this field might hold.
Collapse
Affiliation(s)
| | - Oscar J Pellicer-Valero
- Intelligent Data Analysis Laboratory, Department of Electronic Engineering, ETSE (Engineering School), Universitat de València (UV), Valencia, Spain
| |
Collapse
|
40
|
Jheng YC, Wang YP, Lin HE, Sung KY, Chu YC, Wang HS, Jiang JK, Hou MC, Lee FY, Lu CL. A novel machine learning-based algorithm to identify and classify lesions and anatomical landmarks in colonoscopy images. Surg Endosc 2021; 36:640-650. [PMID: 33591447 DOI: 10.1007/s00464-021-08331-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 01/13/2021] [Indexed: 02/06/2023]
Abstract
OBJECTIVES Computer-aided diagnosis (CAD)-based artificial intelligence (AI) has been shown to be highly accurate for detecting and characterizing colon polyps. However, the application of AI to identify normal colon landmarks and differentiate multiple colon diseases has not yet been established. We aimed to develop a convolutional neural network (CNN)-based algorithm (GUTAID) to recognize different colon lesions and anatomical landmarks. METHODS Colonoscopic images were obtained to train and validate the AI classifiers. An independent dataset was collected for verification. The architecture of GUTAID contains two major sub-models: the Normal, Polyp, Diverticulum, Cecum and CAncer (NPDCCA) and Narrow-Band Imaging for Adenomatous/Hyperplastic polyps (NBI-AH) models. The development of GUTAID was based on the 16-layer Visual Geometry Group (VGG16) architecture and implemented on Google Cloud Platform. RESULTS In total, 7838 colonoscopy images were used for developing and validating the AI model. An additional 1273 images were independently applied to verify the GUTAID. The accuracy for GUTAID in detecting various colon lesions/landmarks is 93.3% for polyps, 93.9% for diverticula, 91.7% for cecum, 97.5% for cancer, and 83.5% for adenomatous/hyperplastic polyps. CONCLUSIONS A CNN-based algorithm (GUTAID) to identify colonic abnormalities and landmarks was successfully established with high accuracy. This GUTAID system can further characterize polyps for optical diagnosis. We demonstrated that AI classification methodology is feasible to identify multiple and different colon diseases.
Collapse
Affiliation(s)
- Ying-Chun Jheng
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Yen-Po Wang
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Brain Science, National Yang-Ming University School of Medicine, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Hung-En Lin
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Kuang-Yi Sung
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Yuan-Chia Chu
- Information Management Office, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Huann-Sheng Wang
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Colon and Rectum Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Jeng-Kai Jiang
- Division of Colon and Rectum Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Ming-Chih Hou
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Fa-Yauh Lee
- Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Ching-Liang Lu
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Brain Science, National Yang-Ming University School of Medicine, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
41
Naveed K, Daud F, Madni HA, Khan MA, Khan TM, Naqvi SS. Towards Automated Eye Diagnosis: An Improved Retinal Vessel Segmentation Framework Using Ensemble Block Matching 3D Filter. Diagnostics (Basel) 2021; 11:114. [PMID: 33445723 PMCID: PMC7828181 DOI: 10.3390/diagnostics11010114]
Abstract
Automated detection of vision-threatening eye diseases from high-resolution retinal fundus images requires accurate segmentation of the blood vessels. In this regard, detecting and segmenting the finer vessels, which are obscured by a considerable degree of noise and poor illumination, is particularly challenging. This noise comprises additive (systematic) noise and multiplicative (speckle) noise, which arise from various practical limitations of fundus imaging systems. To address this inherent issue, we present an efficient unsupervised vessel segmentation strategy as a step towards accurate classification of eye diseases from noisy fundus images. To that end, an ensemble block matching 3D (BM3D) speckle filter is proposed for removal of unwanted noise, leading to improved detection. The BM3D speckle filter, despite its ability to recover finer details (i.e., vessels in fundus images), yields a pattern of checkerboard artifacts after multiplicative (speckle) noise removal. These artifacts are generally ignored in satellite images; in fundus images, however, they have a degenerating effect on the segmentation and detection of fine vessels. To counter this, an ensemble of BM3D speckle filters is proposed to suppress these artifacts while further sharpening the recovered vessels. This is subsequently used to devise an improved unsupervised segmentation strategy that detects fine vessels even in the presence of dominant noise and yields much improved overall accuracy. Testing was carried out on three publicly available databases, namely Structured Analysis of the Retina (STARE), Digital Retinal Images for Vessel Extraction (DRIVE), and CHASE_DB1. We achieved sensitivities of 82.88%, 81.41%, and 82.03% on DRIVE, STARE, and CHASE_DB1, respectively. The accuracy was also boosted to 95.41%, 95.70%, and 95.61% on DRIVE, STARE, and CHASE_DB1, respectively. The performance of the proposed method on images with pathologies was more convincing than that of similar state-of-the-art methods.
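The sensitivity and accuracy figures reported above are standard pixel-wise confusion-matrix metrics for binary vessel masks. A minimal sketch of how they are computed (flat toy masks here; real evaluations run over full fundus images):

```python
# Pixel-wise sensitivity and accuracy for binary segmentation:
# sensitivity = TP / (TP + FN), accuracy = (TP + TN) / all pixels.

def confusion_counts(truth, pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(truth, pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    return tp, tn, fp, fn

def sensitivity(truth, pred):
    tp, _, _, fn = confusion_counts(truth, pred)
    return tp / (tp + fn)

def accuracy(truth, pred):
    tp, tn, fp, fn = confusion_counts(truth, pred)
    return (tp + tn) / (tp + tn + fp + fn)

truth = [1, 1, 1, 0, 0, 0, 1, 0]  # ground-truth vessel pixels
pred  = [1, 1, 0, 0, 0, 1, 1, 0]  # predicted vessel pixels
print(sensitivity(truth, pred), accuracy(truth, pred))  # 0.75 0.75
```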
Affiliation(s)
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Faizan Daud
- School of Information Technology, Faculty of Science Engineering & Built Environment, Deakin University, Locked Bag 20000, Geelong, VIC 3220, Australia
- Hussain Ahmad Madni
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Mohammad A.U. Khan
- Department of Electrical Engineering, Namal Institute, Mianwali, Namal 42200, Pakistan
- Tariq M. Khan
- School of Information Technology, Faculty of Science Engineering & Built Environment, Deakin University, Locked Bag 20000, Geelong, VIC 3220, Australia
- Syed Saud Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
42
Chen WF, Ou HY, Liu KH, Li ZY, Liao CC, Wang SY, Huang W, Cheng YF, Pan CT. In-Series U-Net Network to 3D Tumor Image Reconstruction for Liver Hepatocellular Carcinoma Recognition. Diagnostics (Basel) 2020; 11:11. [PMID: 33374672 PMCID: PMC7822491 DOI: 10.3390/diagnostics11010011]
Abstract
Cancer is one of the most common diseases. Quantitative biomarkers extracted from standard-of-care computed tomography (CT) scans can provide a robust clinical decision tool for the diagnosis of hepatocellular carcinoma (HCC). Current clinical methods, however, typically demand considerable time and resources. To improve the current clinical diagnosis and therapeutic procedure, this paper proposes a deep learning-based approach, called Successive Encoder-Decoder (SED), to assist in the automatic interpretation of liver lesion/tumor segmentation from CT images. The SED framework consists of two encoder-decoder networks connected in series. The first network removes unwanted voxels and organs and extracts the liver location from the CT images. The second network uses the output of the first to further segment the lesions. For practical purposes, the lesions predicted on individual CT slices were extracted and reconstructed as 3D images. Experiments conducted on 4300 CT images and the LiTS dataset demonstrate that liver segmentation and tumor prediction achieved Dice scores of 0.92 and 0.75, respectively, with the proposed SED method.
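The Dice scores quoted above (0.92 for liver, 0.75 for tumor) measure overlap between a predicted mask and the ground truth: Dice = 2|A∩B| / (|A| + |B|). A minimal sketch over flat binary masks:

```python
# Dice coefficient for binary segmentation masks, as used to score the
# liver and tumor predictions described above.

def dice(a, b):
    inter = sum(x == 1 and y == 1 for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0  # both masks empty: perfect match

gt   = [1, 1, 1, 1, 0, 0, 0, 0]  # ground-truth mask (flattened)
pred = [1, 1, 1, 0, 1, 0, 0, 0]  # predicted mask (flattened)
print(dice(gt, pred))  # 0.75
```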
Affiliation(s)
- Wen-Fan Chen
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Hsin-You Ou
- Liver Transplantation Program and Departments of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan
- Keng-Hao Liu
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Zhi-Yun Li
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Chien-Chang Liao
- Liver Transplantation Program and Departments of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan
- Shao-Yu Wang
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Wen Huang
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Yu-Fan Cheng
- Liver Transplantation Program and Departments of Diagnostic Radiology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan
- Cheng-Tang Pan
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan; Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
43
Owais M, Arsalan M, Mahmood T, Kang JK, Park KR. Automated Diagnosis of Various Gastrointestinal Lesions Using a Deep Learning-Based Classification and Retrieval Framework With a Large Endoscopic Database: Model Development and Validation. J Med Internet Res 2020; 22:e18563. [PMID: 33242010 PMCID: PMC7728528 DOI: 10.2196/18563]
Abstract
Background: The early diagnosis of various gastrointestinal diseases can lead to effective treatment and reduce the risk of many life-threatening conditions. Unfortunately, various small gastrointestinal lesions are undetectable during early-stage examination by medical experts. In previous studies, various deep learning-based computer-aided diagnosis tools have made a significant contribution to the effective diagnosis and treatment of gastrointestinal diseases. However, most of these methods were designed to detect a limited number of diseases, such as polyps, tumors, or cancers, in a specific part of the gastrointestinal tract. Objective: This study aimed to develop a comprehensive computer-aided diagnosis tool to assist medical experts in diagnosing various types of gastrointestinal diseases. Methods: The proposed framework comprises a deep learning-based classification network followed by a retrieval method. In the first step, the classification network predicts the disease type for the current case. The retrieval part of the framework then shows the relevant cases (endoscopic images) from the existing database. These past cases help the medical expert validate the current computer prediction subjectively, which ultimately results in better diagnosis and treatment. Results: All experiments were performed using 2 endoscopic datasets with a total of 52,471 frames and 37 different classes. The optimal performances obtained by the proposed method in accuracy, F1 score, mean average precision, and mean average recall were 96.19%, 96.99%, 98.18%, and 95.86%, respectively, substantially outperforming state-of-the-art methods. Conclusions: This study provides a comprehensive computer-aided diagnosis framework for identifying various types of gastrointestinal diseases. The results show the superiority of the proposed method over various other recent methods and illustrate its potential for clinical diagnosis and treatment. The proposed network is also applicable to other classification domains in medical imaging, such as computed tomography scans, magnetic resonance imaging, and ultrasound sequences.
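The retrieval step described above surfaces the most similar past cases for a query frame. One common way to do this is to rank stored feature embeddings by distance to the query embedding; the sketch below uses Euclidean distance over hypothetical 2-D feature vectors (the case IDs and values are placeholders, not the paper's data):

```python
# Minimal similarity-based case retrieval: return the k stored cases whose
# feature embeddings lie closest to the query embedding.
import math

def retrieve(query, database, k=2):
    """Rank (case_id, embedding) pairs by Euclidean distance to the query."""
    ranked = sorted(database, key=lambda item: math.dist(query, item[1]))
    return [case_id for case_id, _ in ranked[:k]]

# Hypothetical stored cases with 2-D embeddings.
db = [("ulcer_03", [0.9, 0.1]),
      ("polyp_17", [0.2, 0.8]),
      ("polyp_41", [0.25, 0.75])]

print(retrieve([0.3, 0.7], db))  # ['polyp_41', 'polyp_17']
```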
Affiliation(s)
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Jin Kyu Kang
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
44
Mahmood T, Owais M, Noh KJ, Yoon HS, Haider A, Sultan H, Park KR. Artificial Intelligence-Based Segmentation of Nuclei in Multi-organ Histopathology Images: Model Development and Validation (Preprint). JMIR Med Inform 2020. [DOI: 10.2196/24394]
45
Sambyal N, Saini P, Syal R, Gupta V. Modified U-Net architecture for semantic segmentation of diabetic retinopathy images. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.05.006]
46
Arsalan M, Baek NR, Owais M, Mahmood T, Park KR. Deep Learning-Based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa. Sensors (Basel) 2020; 20:E3454. [PMID: 32570943 PMCID: PMC7349531 DOI: 10.3390/s20123454]
Abstract
Ophthalmological analysis plays a vital role in the diagnosis of various eye diseases, such as glaucoma, retinitis pigmentosa (RP), and diabetic and hypertensive retinopathy. RP is a genetic retinal disorder that leads to progressive vision degeneration and initially causes night blindness. Currently, the most commonly applied method for diagnosing retinal diseases is optical coherence tomography (OCT)-based analysis. In contrast, fundus imaging-based diagnosis is considered a low-cost solution for retinal diseases. This study focuses on the detection of RP from fundus images, which is a crucial task because of the low quality of fundus images and non-cooperative image acquisition conditions. Automatic detection of pigment signs in fundus images can help ophthalmologists and medical practitioners in diagnosing and analyzing RP disorders. To accurately segment pigment signs for diagnostic purposes, we present an automatic RP segmentation network (RPS-Net), a purpose-built deep learning-based semantic segmentation network that accurately detects and segments pigment signs with fewer trainable parameters. Compared with conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes and to accurately segment the diseased area from the background. Because pigment spots can be very small and consist of very few pixels, RPS-Net provides fine segmentation, even for degraded images, by importing high-frequency information from the preceding layers through concatenation inside and outside the encoder-decoder. To evaluate the proposed RPS-Net, experiments were performed based on 4-fold cross-validation using the publicly available Retinal Images for Pigment Signs (RIPS) dataset for detection and segmentation of retinal pigments. Experimental results show that RPS-Net achieved superior segmentation performance for RP diagnosis compared with state-of-the-art methods.
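The evaluation above uses 4-fold cross-validation: the dataset is partitioned into four folds, and each fold serves once as the held-out test set while the rest are used for training. A minimal index-level sketch (no shuffling; real protocols typically shuffle or stratify first):

```python
# Generate k train/test index splits for k-fold cross-validation.

def k_fold_indices(n_samples, k=4):
    # Assign samples to folds round-robin: fold i gets indices i, i+k, i+2k, ...
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = sorted(j for f in folds if f is not folds[i] for j in f)
        splits.append((train, test))
    return splits

splits = k_fold_indices(8, k=4)
print(splits[0])  # ([1, 2, 3, 5, 6, 7], [0, 4])
```

Each sample appears in exactly one test fold, so the four test sets together cover the whole dataset.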
Affiliation(s)
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
47
Yao HY, Tseng KW, Nguyen HT, Kuo CT, Wang HC. Hyperspectral Ophthalmoscope Images for the Diagnosis of Diabetic Retinopathy Stage. J Clin Med 2020; 9:jcm9061613. [PMID: 32466524 PMCID: PMC7356238 DOI: 10.3390/jcm9061613]
Abstract
A methodology that applies hyperspectral imaging (HSI) to ophthalmoscope images to identify the diabetic retinopathy (DR) stage is demonstrated. First, an algorithm for HSI image analysis is applied to the average reflectance spectra of simulated arteries and veins in ophthalmoscope images. Second, the average simulated spectra are categorized using a principal component analysis (PCA) score plot. Third, the Beer-Lambert law is applied to calculate vessel oxygen saturation in the ophthalmoscope images, yielding oxygenation maps. The average reflectance spectra and PCA results indicate that average reflectance changes as DR deteriorates. The G-channel gradually decreases because of vascular disease, whereas the R-channel gradually increases with oxygen saturation in the vessels. As DR deteriorates, the oxygen utilization of retinal tissues gradually decreases, and thus oxygen saturation in the veins gradually increases. Diagnostic performance was evaluated by DR severity: for normal, background DR (BDR), pre-proliferative DR (PPDR), and proliferative DR (PDR), the sensitivities were 90.00%, 81.13%, 87.75%, and 93.75%, with accuracies of 90%, 86%, 86%, and 90%, respectively. The F1-scores are 90% (normal), 83.49% (BDR), 86.86% (PPDR), and 91.83% (PDR), with accuracy rates of 95%, 91.5%, 93.5%, and 96%, respectively.
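The Beer-Lambert step above converts reflectance into optical density, OD = -log10(I/I0), and retinal oximetry commonly estimates saturation from the ratio of OD at an oxygen-sensitive wavelength to OD at an isosbestic one. A minimal sketch of that ratiometric idea; the linear calibration constants a and b (and the reflectance values) are hypothetical placeholders, not values from the paper:

```python
# Beer-Lambert optical density and a two-wavelength ratiometric saturation
# estimate (a common retinal-oximetry model; constants are illustrative only).
import math

def optical_density(reflected, incident):
    """OD = -log10(I / I0) per the Beer-Lambert law."""
    return -math.log10(reflected / incident)

def saturation_estimate(od_sensitive, od_isosbestic, a=1.0, b=0.0):
    # Linear ratiometric model: SO2 ~ a * (OD_iso / OD_sens) + b
    return a * (od_isosbestic / od_sensitive) + b

od_sens = optical_density(0.25, 1.0)  # oxygen-sensitive wavelength
od_iso  = optical_density(0.50, 1.0)  # isosbestic wavelength
print(round(saturation_estimate(od_sens, od_iso), 3))  # 0.5
```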
Affiliation(s)
- Hsin-Yu Yao
- Department of Ophthalmology, Kaohsiung Armed Forces General Hospital, Kaohsiung City 80284, Taiwan
- Kuang-Wen Tseng
- Department of Medicine, Mackay Medical College, 46, Sec. 3, Zhongzheng Rd., Sanzhi Dist., New Taipei 25245, Taiwan
- Hong-Thai Nguyen
- Department of Mechanical Engineering and Center for Innovative Research on Aging Society (CIRAS), National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
- Chie-Tong Kuo
- Department of Optometry and Innovation Incubation Center, Shu-Zen Junior College of Medicine and Management, Kaohsiung 821, Taiwan
- Hsiang-Chen Wang (corresponding author)
- Department of Mechanical Engineering and Center for Innovative Research on Aging Society (CIRAS), National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
48
Artificial Intelligence-Based Diagnosis of Cardiac and Related Diseases. J Clin Med 2020; 9:jcm9030871. [PMID: 32209991 PMCID: PMC7141544 DOI: 10.3390/jcm9030871]
Abstract
Automatic chest anatomy segmentation plays a key role in computer-aided diagnosis of diseases such as cardiomegaly, pleural effusion, emphysema, and pneumothorax. Among these, cardiomegaly is considered a perilous disease, involving a high risk of sudden cardiac death. It can be diagnosed early by an expert medical practitioner using chest X-ray (CXR) analysis. The cardiothoracic ratio (CTR) and transverse cardiac diameter (TCD) are the clinical criteria used to estimate heart size for diagnosing cardiomegaly. Manual estimation of the CTR and of other diseases is time-consuming and requires significant work by the medical expert. Cardiomegaly and related diseases can be estimated automatically by accurate anatomical semantic segmentation of CXRs using artificial intelligence. Automatic segmentation of the lungs and heart from CXRs is considered a demanding task owing to inferior-quality images and intensity variations under nonideal imaging conditions. Although a few deep learning-based techniques exist for chest anatomy segmentation, most consider only single-class lung segmentation with deep, complex architectures that require many trainable parameters. To address these issues, this study presents two multiclass residual mesh-based CXR segmentation networks, X-RayNet-1 and X-RayNet-2, which are specifically designed to provide fine segmentation performance with few trainable parameters compared to conventional deep learning schemes. The proposed methods utilize semantic segmentation to support the diagnostic procedure for related diseases. To evaluate X-RayNet-1 and X-RayNet-2, experiments were performed with the publicly available Japanese Society of Radiological Technology (JSRT) dataset for multiclass segmentation of the lungs, heart, and clavicle bones; two other publicly available datasets, Montgomery County (MC) and Shenzhen X-Ray set (SC), were used for lung segmentation. The experimental results showed that X-RayNet-1 achieved fine performance on all datasets, and X-RayNet-2 achieved competitive performance with a 75% parameter reduction.
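The cardiothoracic ratio mentioned above is the maximal horizontal heart width divided by the maximal internal thoracic width, both of which become directly measurable once the heart and lung fields are segmented. A minimal sketch over toy row-major binary masks (the masks and the 0.5 decision threshold commonly cited for cardiomegaly are illustrative, not from the paper):

```python
# Compute a cardiothoracic-ratio-style measurement from binary masks:
# widest horizontal extent of the heart mask over that of the thorax mask.

def max_width(mask):
    """Widest horizontal foreground extent (in pixels) over all rows."""
    widths = []
    for row in mask:
        cols = [i for i, v in enumerate(row) if v == 1]
        widths.append(cols[-1] - cols[0] + 1 if cols else 0)
    return max(widths)

def cardiothoracic_ratio(heart_mask, thorax_mask):
    return max_width(heart_mask) / max_width(thorax_mask)

heart  = [[0, 1, 1, 0, 0, 0],
          [0, 1, 1, 1, 0, 0]]
thorax = [[1, 1, 1, 1, 1, 0],
          [1, 1, 1, 1, 1, 1]]
print(cardiothoracic_ratio(heart, thorax))  # 0.5
```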
49
Mahmood T, Arsalan M, Owais M, Lee MB, Park KR. Artificial Intelligence-Based Mitosis Detection in Breast Cancer Histopathology Images Using Faster R-CNN and Deep CNNs. J Clin Med 2020; 9:E749. [PMID: 32164298 PMCID: PMC7141212 DOI: 10.3390/jcm9030749]
Abstract
Breast cancer is a leading cause of mortality in women, and its early diagnosis can reduce the mortality rate. In diagnosis, the mitotic cell count is an important biomarker for predicting the aggressiveness, prognosis, and grade of breast cancer. In general, pathologists manually examine histopathology images under high-resolution microscopes to detect mitotic cells. However, because of the minute differences between mitotic and normal cells, this process is tiresome, time-consuming, and subjective. To overcome these challenges, artificial intelligence-based (AI-based) techniques have been developed that automatically detect mitotic cells in histopathology images. Such AI techniques accelerate the diagnosis and can serve as a second-opinion system for a medical doctor. Previously, conventional image-processing techniques were used for mitotic cell detection, but these have low accuracy and high computational cost. A number of deep learning techniques with outstanding performance and low computational cost have since been developed; however, they still require improvement in accuracy and reliability. We therefore present a multistage mitotic cell detection method based on the Faster region-based convolutional neural network (Faster R-CNN) and deep CNNs. Two open datasets of breast cancer histopathology, international conference on pattern recognition (ICPR) 2012 and ICPR 2014 (MITOS-ATYPIA-14), were used in the experiments. The experimental results showed that our method achieves state-of-the-art results of 0.876 precision, 0.841 recall, and 0.858 F1-measure on the ICPR 2012 dataset, and 0.848 precision, 0.583 recall, and 0.691 F1-measure on the ICPR 2014 dataset, higher than those obtained using previous methods. Moreover, we tested the generalization capability of the technique on the tumor proliferation assessment challenge 2016 (TUPAC16) dataset and found that it also performs well in a cross-dataset experiment.
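The F1-measures reported above follow directly from precision and recall via F1 = 2PR / (P + R); recomputing from the entry's own numbers reproduces both reported scores:

```python
# F1 as the harmonic mean of precision and recall, checked against the
# ICPR 2012 and ICPR 2014 results quoted in the abstract above.

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.876, 0.841), 3))  # 0.858 (ICPR 2012)
print(round(f1_score(0.848, 0.583), 3))  # 0.691 (ICPR 2014)
```

Note how the much lower recall on ICPR 2014 (0.583) drags F1 well below the precision, which is exactly what the harmonic mean is designed to expose.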
Affiliation(s)
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea