1
Farahat Z, Zrira N, Souissi N, Bennani Y, Bencherif S, Benamar S, Belmekki M, Ngote MN, Megdiche K. Diabetic retinopathy screening through artificial intelligence algorithms: A systematic review. Surv Ophthalmol 2024; 69:707-721. [PMID: 38885761] [DOI: 10.1016/j.survophthal.2024.05.008]
Abstract
Diabetic retinopathy (DR) poses a significant challenge in diabetes management, as its progression is often asymptomatic until advanced stages. This underscores the urgent need for cost-effective and reliable screening methods, and the integration of artificial intelligence (AI) tools presents a promising avenue to address this need. We provide an overview of the current state-of-the-art techniques and results in DR screening using AI, while also identifying gaps in research for future exploration. By synthesizing the existing literature and pinpointing areas requiring further investigation, this paper seeks to guide the direction of future research in automatic diabetic retinopathy screening. There has been a continuous rise in the number of articles describing deep learning (DL) methods for the automatic screening of diabetic retinopathy, especially since 2021. Researchers utilized various databases, with a primary focus on the IDRiD dataset. This dataset consists of color fundus images captured at an ophthalmological clinic in India and comprises 516 images depicting various stages of DR and diabetic macular edema. Each of the selected papers concentrates on different DR signs; nevertheless, a significant portion focused primarily on detecting exudates, which alone is insufficient to assess the overall presence of the disease. Various AI methods have been employed to identify DR signs: among the selected papers, 4.7% used detection methods, 46.5% employed classification techniques, 41.9% relied on segmentation, and 7% combined classification and segmentation. Metrics reported by the 80% of articles that employed preprocessing techniques demonstrated the significant benefit of this step in improving result quality. Multiple DL techniques were applied across detection, classification, and segmentation: researchers mostly used YOLO for detection, Vision Transformers (ViT) for classification, and U-Net for segmentation. Another perspective on the evolving landscape of AI models for diabetic retinopathy screening lies in the increasing adoption of convolutional neural networks (CNNs) for classification tasks and U-Net architectures for segmentation; however, there is a growing realization within the research community that these techniques, while powerful individually, can be even more effective when integrated. This integration holds promise not only for diagnosing DR, but also for accurately classifying its different stages, thereby enabling more tailored treatment strategies. Despite this potential, the development of AI models for DR screening is fraught with challenges. Chief among these is the difficulty of obtaining the high-quality, labeled data necessary to train models that perform effectively. This scarcity of data poses a significant barrier to robust performance and can hinder progress in developing accurate screening systems. Moreover, managing the complexity of these models, particularly deep neural networks, presents its own set of challenges. Interpreting the outputs of these models and ensuring their reliability in real-world clinical settings remain ongoing concerns, and the iterative process of training and adapting these models to specific datasets can be time-consuming and resource-intensive. These challenges underscore the multifaceted nature of developing effective AI models for DR screening.
Addressing these obstacles requires concerted efforts from researchers, clinicians, and technologists to develop new approaches and overcome existing limitations. By doing so, the full potential of AI to transform DR screening and improve patient outcomes may be realized.
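As a concrete illustration of the classification-plus-segmentation integration the review anticipates, the following is a minimal PyTorch sketch, not any specific reviewed model: a shared CNN encoder feeds both a DR-grade classification head and a U-Net-style decoder for lesion segmentation. Channel widths, the five-grade output, and the single lesion channel are illustrative assumptions.

```python
# Minimal sketch (illustrative, not a reviewed model): shared encoder with a
# DR-grade classification head and a U-Net-style decoder for lesion segmentation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class JointDRModel(nn.Module):
    def __init__(self, n_grades=5, n_lesion_classes=1):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(3, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Classification head: DR grade from the deepest encoder features.
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_grades))
        # U-Net-style decoder: lesion probability map (e.g. exudates).
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.seg_head = nn.Conv2d(32, n_lesion_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        grade_logits = self.classifier(e3)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return grade_logits, self.seg_head(d1)

fundus = torch.randn(2, 3, 256, 256)      # batch of color fundus images
grades, lesions = JointDRModel()(fundus)
print(grades.shape, lesions.shape)        # (2, 5) and (2, 1, 256, 256)
```

In practice the two heads would be trained jointly (for example, cross-entropy for the grade and Dice or binary cross-entropy for the lesion map), which is one way the integration described above could be realized.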
Affiliation(s)
- Zineb Farahat
- LISTD Laboratory, Mines School of Rabat, Rabat 10000, Morocco; Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco.
- Nabila Zrira
- LISTD Laboratory, Mines School of Rabat, Rabat 10000, Morocco
- Yasmine Bennani
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Soufiane Bencherif
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Safia Benamar
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohammed Belmekki
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohamed Nabil Ngote
- LISTD Laboratory, Mines School of Rabat, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Kawtar Megdiche
- Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco
2
Rana S, Hosen MJ, Tonni TJ, Rony MAH, Fatema K, Hasan MZ, Rahman MT, Khan RT, Jan T, Whaiduzzaman M. DeepChestGNN: A Comprehensive Framework for Enhanced Lung Disease Identification through Advanced Graphical Deep Features. Sensors (Basel) 2024; 24:2830. [PMID: 38732936] [PMCID: PMC11086108] [DOI: 10.3390/s24092830]
Abstract
Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a challenge for classification due to their visual similarity, leading to confusion among radiologists. To mitigate these issues, we created an automated system built on a large data hub that combines 17 chest X-ray datasets, 71,096 images in total, with the aim of classifying ten different disease classes. Because it combines various resources, this large dataset contains noise and annotation artifacts, class imbalances, data redundancy, and similar problems. We applied several image pre-processing techniques to eliminate noise and artifacts, including resizing, de-annotation, CLAHE, and filtering, and used an elastic deformation augmentation technique to generate a balanced dataset. We then developed DeepChestGNN, a novel medical image classification model that uses a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture is highly flexible in handling graph-structured data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.
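As an illustration of the pre-processing stage described above (resizing, CLAHE, and filtering), here is a minimal Python sketch using OpenCV. It is not the authors' exact pipeline; the target size, CLAHE parameters, and median-filter kernel are assumed values, and the dataset-specific de-annotation step is omitted.

```python
# Minimal sketch (assumed parameters, not the authors' pipeline) of chest X-ray
# pre-processing: resize, CLAHE contrast enhancement, and noise filtering.
import cv2
import numpy as np

def preprocess_cxr(path: str, size=(224, 224)) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)       # chest X-rays are single-channel
    if img is None:
        raise FileNotFoundError(path)
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                              # local contrast enhancement
    img = cv2.medianBlur(img, 3)                        # light denoising
    return img.astype(np.float32) / 255.0               # scale to [0, 1] for the network
```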
Affiliation(s)
- Shakil Rana
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.R.); (M.J.H.); (T.J.T.); (M.A.H.R.); (K.F.); (M.Z.H.)
- Md Jabed Hosen
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.R.); (M.J.H.); (T.J.T.); (M.A.H.R.); (K.F.); (M.Z.H.)
- Tasnim Jahan Tonni
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.R.); (M.J.H.); (T.J.T.); (M.A.H.R.); (K.F.); (M.Z.H.)
- Md. Awlad Hossen Rony
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.R.); (M.J.H.); (T.J.T.); (M.A.H.R.); (K.F.); (M.Z.H.)
- Kaniz Fatema
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.R.); (M.J.H.); (T.J.T.); (M.A.H.R.); (K.F.); (M.Z.H.)
- Md. Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.R.); (M.J.H.); (T.J.T.); (M.A.H.R.); (K.F.); (M.Z.H.)
- Md. Tanvir Rahman
- School of Health and Rehabilitation Sciences, The University of Queensland, St. Lucia, QLD 4072, Australia
- Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Tangail 1902, Bangladesh
- Risala Tasin Khan
- Institute of Information Technology, Jahangirnagar University, Dhaka 1342, Bangladesh
- Tony Jan
- Centre for Artificial Intelligence Research and Optimisation (AIRO), Torrens University, Ultimo, NSW 2007, Australia
- Md Whaiduzzaman
- Centre for Artificial Intelligence Research and Optimisation (AIRO), Torrens University, Ultimo, NSW 2007, Australia
- School of Information Systems, Queensland University of Technology, Brisbane, QLD 4000, Australia
3
Azam S, Montaha S, Raiaan MAK, Rafid AKMRH, Mukta SH, Jonkman M. An Automated Decision Support System to Analyze Malignancy Patterns of Breast Masses Employing Medically Relevant Features of Ultrasound Images. J Imaging Inform Med 2024; 37:45-59. [PMID: 38343240] [DOI: 10.1007/s10278-023-00925-7]
Abstract
An automated computer-aided approach might aid radiologists in diagnosing breast cancer at an early stage. This study proposes a novel decision support system that classifies breast tumors as benign or malignant based on clinically important features extracted from ultrasound images. Nine handcrafted features, aligned with the clinical markers used by radiologists, are extracted from the region of interest (ROI) of the ultrasound images. To validate that these selected clinical markers have a significant impact on predicting the benign and malignant classes, ten machine learning (ML) models are evaluated, resulting in test accuracies in the range of 96% to 99%. In addition, four feature selection techniques are explored, each eliminating two features according to its feature-ranking scores. The Random Forest classifier is trained with the resulting four feature sets. Results indicate that, even when only two features are eliminated, model performance is reduced for every feature selection technique. These experiments confirm the efficiency and effectiveness of the clinically important features. To develop the decision support system, a probability density function (PDF) graph is generated for each feature in order to find a threshold range that distinguishes benign from malignant tumors. Based on the threshold ranges of the individual features, the decision support system is designed so that, if at least eight out of nine features fall within the threshold range, the image is counted as correctly predicted. With this algorithm, a test accuracy of 99.38% and an F1 score of 99.05% are achieved, meaning that our decision support system outperforms all the previously trained ML models. Moreover, in terms of individual class-based test accuracies, the benign class attains 99.31%, with only three benign instances misclassified out of 437, and the malignant class attains 99.52%, with only one malignant instance misclassified out of 210. This system is robust, time-effective, and reliable, as it follows the radiologists' criteria and may aid specialists in making a diagnosis.
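The following is a minimal sketch of the kind of threshold-based decision rule the abstract describes: each of the nine handcrafted features is checked against a per-feature threshold range (derived in the study from PDF plots), and a mass is labeled malignant when at least eight of the nine features fall inside their ranges. Feature names and threshold values here are hypothetical placeholders, not the study's actual markers or cut-offs.

```python
# Hypothetical illustration of the "at least eight of nine features in range"
# decision rule. Feature names and (low, high) ranges are placeholders; the
# study derives real ranges from per-feature probability density functions.
MALIGNANT_RANGES = {
    "shape_irregularity":  (0.60, 1.00),
    "margin_spiculation":  (0.50, 1.00),
    "aspect_ratio":        (1.00, 3.00),
    "echo_heterogeneity":  (0.40, 1.00),
    "posterior_shadowing": (0.30, 1.00),
    "boundary_sharpness":  (0.00, 0.40),
    "calcification_score": (0.20, 1.00),
    "texture_entropy":     (0.50, 1.00),
    "roundness":           (0.00, 0.50),
}

def classify_mass(features: dict, ranges=MALIGNANT_RANGES, min_hits=8) -> str:
    """Return 'malignant' if at least min_hits features fall inside their
    malignant threshold range, otherwise 'benign'."""
    hits = sum(lo <= features[name] <= hi for name, (lo, hi) in ranges.items())
    return "malignant" if hits >= min_hits else "benign"

# Usage: a mass whose features all sit mid-range is flagged as malignant.
example = {name: (lo + hi) / 2 for name, (lo, hi) in MALIGNANT_RANGES.items()}
print(classify_mass(example))  # -> 'malignant'
```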
Affiliation(s)
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia.
- Sidratul Montaha
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia