1. Tahmasebzadeh A, Sadeghi M, Naseripour M, Mirshahi R, Ghaderi R. Artificial intelligence and different image modalities in uveal melanoma diagnosis and prognosis: A narrative review. Photodiagnosis Photodyn Ther 2025;52:104528. [PMID: 39986588] [DOI: 10.1016/j.pdpdt.2025.104528]
Abstract
BACKGROUND Uveal melanoma (UM) is the most common primary intraocular tumor in adults and, if detected early enough, can be curable. Various methods are available to treat UM, but the most commonly used and effective approach is plaque radiotherapy using Iodine-125 and Ruthenium-106. METHOD The authors searched three databases (PubMed, Scopus, and Google Scholar) to identify relevant studies published from 2017 to 2024. RESULTS Imaging technologies such as ultrasound (US), fundus photography (FP), optical coherence tomography (OCT), fluorescein angiography (FA), and magnetic resonance imaging (MRI) play a vital role in the diagnosis and prognosis of UM. The present review assessed the power of different image modalities, when integrated with artificial intelligence (AI), to diagnose and determine the prognosis of patients affected by UM. CONCLUSION After reviewing the included studies, the authors concluded that AI is a developing tool in image analysis that enhances workflows from data and image processing to clinical decisions, improving tailored treatment scenarios, response prediction, and prognostication.
Affiliation(s)
- Atefeh Tahmasebzadeh: Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Mahdi Sadeghi: Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Masood Naseripour: Eye Research Center, The Five Senses Institute, Moheb Kowsar Hospital, Iran University of Medical Sciences, Tehran, Iran; Finetech in Medicine Research Center, Iran University of Medical Sciences, Tehran, Iran
- Reza Mirshahi: Eye Research Center, The Five Senses Institute, Moheb Kowsar Hospital, Iran University of Medical Sciences, Tehran, Iran
- Reza Ghaderi: Department of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
2. Janti SS, Saluja R, Tiwari N, Kolavai RR, Mali K, Arora AJ, Johar A, Sahoo DP, Sahithi E. Evaluation of the Clinical Impact of a Smartphone Application for Cataract Detection. Cureus 2024;16:e71467. [PMID: 39539903] [PMCID: PMC11560082] [DOI: 10.7759/cureus.71467]
Abstract
Background Approximately 10 million people in India suffer from bilateral blindness, with cataracts accounting for roughly 70% of these cases. However, there is a severe scarcity of ophthalmologists in India (12,000 across the country), which makes routine cataract screening very difficult, particularly in rural areas. To tackle this problem, we investigated the use of an artificial intelligence (AI)-based application for cataract screening at the All India Institute of Medical Sciences (AIIMS), Bibinagar, that can be used by nursing officers and other healthcare professionals as a primary screening tool; ophthalmologists from AIIMS Bibinagar additionally validated the results of this application. Purpose The aim of this study was to assess the clinical performance of a smartphone-based cataract screening application that uses an AI module to identify cataracts in photos taken with the device's camera, comparing the application's results with diagnoses made by ophthalmologists using a slit lamp. Methods A total of 495 patients participated in a prospective clinical trial at AIIMS Bibinagar. The AI-based screening solution examined smartphone images, taken in accordance with a set protocol, to identify whether cataracts were present. The application's results were then compared with the diagnoses made by ophthalmologists based on slit-lamp examinations. Results The study included 990 eye images. The AI screening application demonstrated an overall accuracy of 90.01% for cataract detection, with a sensitivity of 89.50%, specificity of 89.73%, precision of 91.43%, and an F1 score of 90.36%. The positive predictive value (PPV) was approximately 91.3% (485 true positives, 46 false positives), and the negative predictive value (NPV) was approximately 87.6% (402 true negatives, 57 false negatives). Conclusions The smartphone-based cataract screening application proved to be an effective tool for community-level cataract screening in remote areas where access to expensive equipment and specialized ophthalmic care is limited. Its high accuracy and efficiency make it a valuable option for low-resource settings and suitable for home screening, particularly in the post-COVID era.
Affiliation(s)
- Siddharam S Janti: Ophthalmology, All India Institute of Medical Sciences, Bibinagar, India
- Rohit Saluja: Biochemistry, All India Institute of Medical Sciences, Bibinagar, India
- Nivedita Tiwari: Ophthalmology, All India Institute of Medical Sciences, Bibinagar, India
- Kalpana Mali: Pharmacology, All India Institute of Medical Sciences, Bibinagar, India
- Abhishek J Arora: Radiodiagnosis, All India Institute of Medical Sciences, Bibinagar, India
- Amita Johar: Artificial Intelligence and Machine Learning, Samskruti College of Engineering and Technology, Hyderabad, India
- Durgesh Prasad Sahoo: Community and Family Medicine, All India Institute of Medical Sciences, Bibinagar, India
- Eereti Sahithi: Ophthalmology, All India Institute of Medical Sciences, Bibinagar, India
3. Liu F, Qin B, Jiang F. Eye Disease Net: an algorithmic model for rapid diagnosis of diseases. PeerJ Comput Sci 2023;9:e1672. [PMID: 38192448] [PMCID: PMC10773917] [DOI: 10.7717/peerj-cs.1672]
Abstract
With improvements in science, technology, and quality of life, ophthalmic diseases have become one of the major disorders affecting people's quality of life. In view of this, we propose a new method for ophthalmic disease classification, ED-Net (Eye Disease Classification Net), which reconstructs two proposed modules, ED_Resnet and ED_Xception, into a single image classification algorithm. We compare ED-Net with classical classification algorithms, transformer algorithms, more advanced image classification algorithms, and existing ophthalmic disease classification algorithms.
Affiliation(s)
- Fangyuan Liu: The Second Clinical Medical College, Jinan University, Shenzhen, China
- Bo Qin: The Second Clinical Medical College, Jinan University, Shenzhen, China; Shenzhen Aier Eye Hospital, Aier Eye Hospital, Jinan University, Shenzhen, China; Shenzhen Aier Ophthalmic Technology Institute, Shenzhen, China
- Fengqi Jiang: The Second Clinical Medical College, Jinan University, Shenzhen, China
4. Liang J, Jiang W. A ResNet50-DPA model for tomato leaf disease identification. Frontiers in Plant Science 2023;14:1258658. [PMID: 37908831] [PMCID: PMC10614023] [DOI: 10.3389/fpls.2023.1258658]
Abstract
Tomato leaf disease identification is difficult owing to the variety of diseases and their complex causes. Methods based on convolutional neural networks are effective, but they can fail to capture key features or lose a large number of features when extracting image features, resulting in low identification accuracy. Therefore, this paper proposes the ResNet50-DPA model for identifying tomato leaf diseases. First, the model improves the basic ResNet50 by replacing its first convolutional layer with cascaded atrous convolution, facilitating the extraction of leaf features at different scales. Second, a dual-path attention (DPA) mechanism is proposed to search for key features: stochastic pooling is employed to eliminate the influence of non-maximum values, and two one-dimensional convolutions replace the MLP layer to reduce the loss of leaf information. In addition, the DPA module is incorporated into the residual module of the improved ResNet50 to obtain an enhanced tomato leaf feature map, enabling quick and accurate identification of the type of leaf disease and helping to reduce economic losses. Finally, Grad-CAM visualization results show that the proposed ResNet50-DPA model identifies diseases more accurately and improves the interpretability of the model, meeting the need for precise identification of tomato leaf diseases.
Affiliation(s)
- Wenping Jiang: School of Electrical and Electronic Engineering, Shanghai Institute of Technology, Shanghai, China
5. Design of Intelligent Diagnosis and Treatment System for Ophthalmic Diseases Based on Deep Neural Network Model. Contrast Media & Molecular Imaging 2022;2022:4934190. [PMID: 35854765] [PMCID: PMC9277203] [DOI: 10.1155/2022/4934190]
Abstract
Artificial intelligence (AI) has developed rapidly in the field of ophthalmology. Fundus images have become a research hotspot because they are easy to obtain and rich in biological information, and the application of AI to fundus image analysis has deepened and expanded. At present, a variety of AI studies have been carried out on the clinical screening, diagnosis, and prognosis of eye diseases, and the research results are gradually being applied in clinical practice. The application of AI in fundus image analysis will help relieve the shortage of medical resources and the low efficiency of diagnosis. In the future, research on AI for ocular images should focus on comprehensive intelligent diagnosis of various ophthalmic diseases and complex diseases, with emphasis on integrating standardized, high-quality data resources, improving algorithm efficiency, and formulating corresponding clinical research plans.
6. Zhang XQ, Hu Y, Xiao ZJ, Fang JS, Higashita R, Liu J. Machine Learning for Cataract Classification/Grading on Ophthalmic Imaging Modalities: A Survey. Machine Intelligence Research 2022;19:184-208. [DOI: 10.1007/s11633-022-1329-0]
Abstract
Cataracts are the leading cause of visual impairment and blindness globally. Over the years, researchers have achieved significant progress in developing state-of-the-art machine learning techniques for automatic cataract classification and grading, aiming to detect cataracts early and improve clinicians' diagnostic efficiency. This article provides a comprehensive survey of recent advances in machine learning techniques for cataract classification/grading based on ophthalmic images. We summarize the existing literature along two research directions, conventional machine learning methods and deep learning methods, and provide insights into the merits and limitations of existing work. In addition, we discuss several challenges of automatic cataract classification/grading based on machine learning techniques and present possible solutions to these challenges for future research.
7. An Image Diagnosis Algorithm for Keratitis Based on Deep Learning. Neural Process Lett 2022. [DOI: 10.1007/s11063-021-10716-2]
8. Luo J, Chen Y, Yang Y, Zhang K, Liu Y, Zhao H, Dong L, Xu J, Li Y, Wei W. Prognosis Prediction of Uveal Melanoma After Plaque Brachytherapy Based on Ultrasound With Machine Learning. Front Med (Lausanne) 2022;8:777142. [PMID: 35127747] [PMCID: PMC8816318] [DOI: 10.3389/fmed.2021.777142]
Abstract
INTRODUCTION Uveal melanoma (UM) is the most common intraocular malignancy in adults, and plaque brachytherapy remains the dominant eyeball-conserving therapy for it. Tumor regression in UM after plaque brachytherapy has been reported as a valuable prognostic factor. The present study aimed to develop an accurate machine-learning model to predict the 4-year risk of metastasis and death in UM based on ocular ultrasound data. MATERIAL AND METHODS A total of 454 patients with UM were enrolled in this retrospective, single-center study. All patients were followed up for at least 4 years after plaque brachytherapy and underwent ophthalmologic evaluations before the therapy. B-scan ultrasonography was used to measure the basal diameters and thickness of tumors preoperatively and postoperatively. The Random Forest (RF) algorithm was used to construct two prediction models: whether a patient would survive for more than 4 years, and whether the tumor would metastasize within 4 years after treatment. RESULTS Our predictive model achieved an area under the receiver operating characteristic curve (AUC) of 0.708 for predicting death using only a one-time follow-up record; including the data from two additional follow-ups increased the AUC to 0.883. For predicting metastasis, we attained AUCs of 0.730 and 0.846 with data from one and three follow-ups, respectively. The amount of postoperative follow-up data thus significantly improved the accuracy of death and metastasis prediction. Furthermore, we divided tumor treatment response into four patterns; the D (decrease)/S (stable) patterns were associated with a significantly better prognosis than the I (increase)/O (other) patterns. CONCLUSIONS The present study developed an RF model to predict the risk of metastasis and death from UM within 4 years based on ultrasound follow-up records following plaque brachytherapy. We intend to further validate our model in prospective datasets, enabling us to implement timely and efficient treatments.
Affiliation(s)
- Jingting Luo, Yuning Chen, Yuhang Yang, Yueming Liu, Hanqing Zhao, Li Dong, Jie Xu, Yang Li, and Wenbin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang: InferVision Healthcare Science and Technology Limited Company, Shanghai, China
9. Deep Learning Applied to SEM Images for Supporting Marine Coralline Algae Classification. Diversity 2021. [DOI: 10.3390/d13120640]
Abstract
The classification of coralline algae commonly relies on the morphology of cells and reproductive structures, along with thallus organization, observed through Scanning Electron Microscopy (SEM). Nevertheless, species identification based on morphology often leads to uncertainty, due to their general plasticity. Evolutionary and environmental studies have featured coralline algae for their ecological significance in both recent and past oceans and need to rely on robust taxonomy. Research efforts towards new putative diagnostic tools have recently focused on cell wall ultrastructure. In this work, we explored a new classification tool for coralline algae by fine-tuning pretrained Convolutional Neural Networks (CNNs) on SEM images paired with morphological categories, including cell wall ultrastructure. We considered four common Mediterranean species, classified at the genus and at the species level (Lithothamnion corallioides, Mesophyllum philippii, Lithophyllum racemus, Lithophyllum pseudoracemus). Our model produced promising results in terms of image classification accuracy given the constraint of a limited dataset and was tested for the identification of two ambiguous samples referred to as L. cf. racemus. Overall, explanatory image analyses suggest a high diagnostic value of calcification patterns, which significantly contributed to class predictions. Thus, CNNs proved to be a valid support to the morphological approach to taxonomy in coralline algae.
10. Zhang H, Liu Y, Zhang K, Hui S, Feng Y, Luo J, Li Y, Wei W. Validation of the Relationship Between Iris Color and Uveal Melanoma Using Artificial Intelligence With Multiple Paths in a Large Chinese Population. Front Cell Dev Biol 2021;9:713209. [PMID: 34490264] [PMCID: PMC8417124] [DOI: 10.3389/fcell.2021.713209]
Abstract
Previous studies have shown that light iris color is a predisposing factor for the development of uveal melanoma (UM) in populations of Caucasian ancestry. However, in all these studies, a remarkably low percentage of patients had brown eyes, so we applied deep learning methods to investigate the correlation between iris color and the prevalence of UM in the Chinese population. All anterior segment photos were automatically segmented with U-NET, and only the iris regions were retained. The iris was then analyzed with machine learning methods (random forests and convolutional neural networks) to obtain the corresponding iris color spectra (classification probabilities). We obtained satisfactory segmentation results, highly consistent with those from experts. The iris color spectrum is consistent with the raters' view, but there is no significant correlation with UM incidence.
Affiliation(s)
- Haihan Zhang, Yueming Liu, Shiqi Hui, Yu Feng, Jingting Luo, Yang Li, and Wenbin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang: SenseTime Group Ltd., Shanghai, China
11. Guo X, Khalid MA, Domingos I, Michala AL, Adriko M, Rowell C, Ajambo D, Garrett A, Kar S, Yan X, Reboud J, Tukahebwa EM, Cooper JM. Smartphone-based DNA malaria diagnostics using deep learning for local decision support and blockchain technology for security. Nature Electronics 2021;4:615-624. [PMID: 39651407] [PMCID: PMC7617093] [DOI: 10.1038/s41928-021-00612-x]
Abstract
In infectious disease diagnosis, results need to be rapidly communicated to doctors once testing has been completed, in order for care pathways to be implemented. This is a challenge when testing in remote low-resource rural communities, in which such diseases often create the largest burden. Here we report a smartphone-based end-to-end platform for multiplexed DNA malaria diagnosis. The approach uses a low-cost paper-based microfluidic diagnostic test, which is combined with deep learning algorithms for local decision support and blockchain technology for secure data connectivity and management. We validate the approach via field tests in rural Uganda, where it correctly identified more than 98% of tested cases. Our platform also provides secure geotagged diagnostic information, which creates the possibility of integrating infectious disease data within surveillance frameworks.
Affiliation(s)
- Xin Guo, Muhammad Arslan Khalid, Alice Garrett, Shantimoy Kar, Xiaoxiang Yan, Julien Reboud, and Jonathan M. Cooper: Division of Biomedical Engineering, The James Watt School of Engineering, University of Glasgow, Glasgow G12 8LT, UK
- Ivo Domingos and Anna Lito Michala: School of Computing Science, University of Glasgow, Glasgow G12 8RZ, UK
- Moses Adriko, Candia Rowell, and Diana Ajambo: Vector Control Division, Ministry of Health, Kampala, Uganda
12. Zhang M, Zhang K, Yu D, Xie Q, Liu B, Chen D, Xv D, Li Z, Liu C. Computerized assisted evaluation system for canine cardiomegaly via key points detection with deep learning. Prev Vet Med 2021;193:105399. [PMID: 34118647] [DOI: 10.1016/j.prevetmed.2021.105399]
Abstract
Cardiomegaly is the main imaging finding in canine heart disease. There have been many advances in deep learning-based medical image diagnosis for humans, and there is increasing recognition of the potential of deep learning in veterinary medicine. We report a clinically applicable platform for assisted diagnosis of canine cardiomegaly with deep learning. The vertebral heart score (VHS) is a method for measuring the heart size of a dog: its value is calculated from the relative positions of 16 key points detected by the system, and this result is then combined with the VHS reference range for each dog breed to assist in the evaluation of canine cardiomegaly. We adopted HRNet (high-resolution network) to detect the 16 key points (12 located on the vertebrae and four on the heart) in 2,274 lateral X-ray images of dogs (training and validation datasets); the model was then used to detect the key points in an external testing dataset (396 images), where the AP (average precision) for key point detection reached 86.4%. An additional post-processing procedure to correct the output of HRNet raised the AP to 90.9%. These results signify that the system can effectively assist the evaluation of canine cardiomegaly in a real clinical scenario.
Affiliation(s)
- Mengni Zhang, Kai Zhang, Qianru Xie, Binlong Liu, Dacan Chen, Dongxing Xv, Zhiwei Li, and Chaofei Liu: New Ruipeng Pet Healthcare Group Co. LTD., Beijing 100010, China
- Deying Yu: Hospital Universiti Sains Malaysia, Kota Bharu 16150, Kelantan, Malaysia
13. Pan Q, Zhang K, He L, Dong Z, Zhang L, Wu X, Wu Y, Gao Y. Automatically Diagnosing Disk Bulge and Disk Herniation With Lumbar Magnetic Resonance Images by Using Deep Convolutional Neural Networks: Method Development Study. JMIR Med Inform 2021;9:e14755. [PMID: 34018488] [PMCID: PMC8178733] [DOI: 10.2196/14755]
Abstract
Background Disk herniation and disk bulge are two common disorders of the lumbar intervertebral disks (IVDs) that often result in numbness, pain in the lower limbs, and lower back pain. Magnetic resonance (MR) imaging is one of the most efficient techniques for detecting lumbar diseases and is widely used for clinical diagnosis at hospitals. However, efficient tools for interpreting the massive volume of MR images are lacking, leaving radiologists overburdened. Objective The aim of this study was to present an automatic system for diagnosing disk bulge and herniation that saves time and can significantly reduce the workload of radiologists. Methods The diagnosis of lumbar vertebral disorders is highly dependent on medical images; therefore, the two most common diseases, disk bulge and disk herniation, were chosen as research subjects. The study mainly involved identifying the position of each IVD (lumbar vertebra [L] 1 to L2, L2-L3, L3-L4, L4-L5, and L5 to sacral vertebra [S] 1) by analyzing the geometric relationship between sagittal and axial images, and classifying axial lumbar disk MR images via deep convolutional neural networks. Results The system involved four steps. First, it automatically located the vertebral bodies (L1, L2, L3, L4, L5, and S1) in sagittal images using the faster region-based convolutional neural network (Faster R-CNN); fourfold cross-validation showed 100% accuracy. Second, it automatically identified the corresponding disk in each axial lumbar disk MR image with 100% accuracy. Third, it automatically located the intervertebral disk region of interest in axial MR images with 100% accuracy. Fourth, the 3-class classification (normal disk, disk bulge, and disk herniation) accuracies for the L1-L2, L2-L3, L3-L4, L4-L5, and L5-S1 IVDs were 92.7%, 84.4%, 92.1%, 90.4%, and 84.2%, respectively.
Conclusions The automatic diagnosis system was successfully built and could classify images of normal disks, disk bulge, and disk herniation. The system provides a web-based test for interpreting lumbar disk MR images that could significantly improve diagnostic efficiency and standardize diagnosis reports. It can also be extended to detect other lumbar abnormalities and cervical spondylosis.
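The geometric sagittal-to-axial matching of step 2 can be illustrated with a small sketch: each axial slice is assigned to the disk level whose bounding vertebral bodies (detected in the sagittal image) straddle its z-position. The function name, box centers, and coordinates below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: assign an axial slice to its disk level (L1-L2 ... L5-S1) from
# the z-centres of vertebral bodies detected in the sagittal image.
# All names and coordinates here are illustrative assumptions.

def disk_level_for_slice(slice_z, vertebra_z):
    """vertebra_z maps vertebra name -> z-centre of its detected box."""
    order = ["L1", "L2", "L3", "L4", "L5", "S1"]
    for upper, lower in zip(order, order[1:]):
        if vertebra_z[upper] <= slice_z <= vertebra_z[lower]:
            return f"{upper}-{lower}"
    return None  # slice lies outside the L1-S1 span

vertebra_z = {"L1": 10.0, "L2": 22.0, "L3": 34.0,
              "L4": 46.0, "L5": 58.0, "S1": 70.0}
print(disk_level_for_slice(28.0, vertebra_z))  # → L2-L3
```

The returned level then indexes which per-level classifier evaluates the slice.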
Affiliation(s)
- Qiong Pan
- School of Telecommunications Engineering, Xidian University, Xi'an, China.,College of Science, Northwest A&F University, Yangling, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi'an, China.,SenseTime Group Limited, Shanghai, China
| | - Lin He
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Zhou Dong
- School of Computer Science, Northwestern Polytechnical University, Xi'an, China
| | - Lei Zhang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yi Wu
- Medical Imaging Department, The Affiliated Hospital of Northwest University Xi'an Number 3 Hospital, Xi'an, China
| | - Yanjun Gao
- Xi'an Key Laboratory of Cardiovascular and Cerebrovascular Diseases, The Affiliated Hospital of Northwest University Xi'an Number 3 Hospital, Xi'an, China
| |
|
14
|
Jiang J, Lei S, Zhu M, Li R, Yue J, Chen J, Li Z, Gong J, Lin D, Wu X, Lin Z, Lin H. Improving the Generalizability of Infantile Cataracts Detection via Deep Learning-Based Lens Partition Strategy and Multicenter Datasets. Front Med (Lausanne) 2021;8:664023. [PMID: 34026791] [PMCID: PMC8137827] [DOI: 10.3389/fmed.2021.664023]
Abstract
Infantile cataract is the main cause of infant blindness worldwide. Although previous studies developed artificial intelligence (AI) diagnostic systems for detecting infantile cataracts in a single center, their generalizability is limited by the complicated noise and heterogeneity of multicenter slit-lamp images, which impedes the application of these AI systems in real-world clinics. In this study, we developed two lens partition strategies (LPSs), based on the deep learning Faster R-CNN and the Hough transform, for improving the generalizability of infantile cataract detection. A total of 1,643 multicenter slit-lamp images collected from five ophthalmic clinics were used to evaluate the performance of the LPSs. The generalizability of Faster R-CNN for screening and grading was explored by sequentially adding multicenter images to the training dataset. For the partition of normal and abnormal lenses, Faster R-CNN achieved average intersections over union of 0.9419 and 0.9107, respectively, and its average precisions were both >95%. Compared with the Hough transform, the accuracy, specificity, and sensitivity of Faster R-CNN for opacity area grading improved by 5.31%, 8.09%, and 3.29%, respectively, with similar improvements for the grading of opacity density and location. The minimal training sample size required by Faster R-CNN was determined on multicenter slit-lamp images. Furthermore, Faster R-CNN achieved real-time lens partition, taking only 0.25 s per image versus 34.46 s for the Hough transform. Finally, using Grad-CAM and t-SNE techniques, the most relevant lesion regions were highlighted in heatmaps and the high-level features were discriminated. This study provides an effective LPS for improving the generalizability of infantile cataract detection, with the potential to be applied to multicenter slit-lamp images.
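The intersection-over-union scores quoted above compare predicted lens-partition boxes with ground truth; a minimal sketch follows, assuming an (x1, y1, x2, y2) box format (the exact format is not stated in the abstract).

```python
# Minimal intersection-over-union (IoU) metric for axis-aligned boxes,
# as used to score predicted lens partitions against ground truth.
# Boxes are assumed to be (x1, y1, x2, y2) with x2 > x1 and y2 > y1.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes → 1.0
```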
Affiliation(s)
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - Shutao Lei
- School of Communications and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi'an, China
| | - Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jiayun Yue
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jiamin Gong
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| |
|
15
|
Pathological Myopia Image Recognition Strategy Based on Data Augmentation and Model Fusion. J Healthc Eng 2021;2021:5549779. [PMID: 34035883] [PMCID: PMC8118733] [DOI: 10.1155/2021/5549779]
Abstract
The automatic diagnosis of retinal diseases from fundus images is important in supporting clinical decision-making. Convolutional neural networks (CNNs) have achieved remarkable results in such tasks, but their high expressive capacity makes them prone to overfitting. Data augmentation (DA) techniques have therefore been proposed to prevent overfitting while enriching datasets, yet recent CNN architectures with ever more parameters render traditional DA techniques insufficient. In this study, we proposed a new DA strategy based on multimodal fusion (DAMF), which integrates a standard DA method, a data-disrupting method, a data-mixing method, and an auto-adjustment method to enhance the images in the training dataset and create new training images. In addition, we fused the results of the classifiers by voting on the basis of DAMF, further improving the generalization ability of the model. The experimental results showed that the optimal DA mode can be matched to an image dataset through our strategy. We evaluated DAMF on the iChallenge-PM dataset, comparing training results between 12 DAMF-processed datasets and the original training dataset. Compared with the original dataset, the optimal DAMF achieved an accuracy increase of 2.85% on iChallenge-PM.
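The voting-based fusion of classifier results can be sketched as a simple majority vote; the class labels and three-classifier setup below are illustrative assumptions, not details from the paper.

```python
# Majority-vote fusion of per-classifier predictions, as in the
# model-fusion step layered on top of DAMF. Labels are illustrative.
from collections import Counter

def vote(predictions):
    """Return the class predicted by the most classifiers."""
    return Counter(predictions).most_common(1)[0][0]

# Three classifiers trained on differently augmented data:
print(vote(["pathological_myopia", "normal", "pathological_myopia"]))
```

Ties fall to whichever class `Counter` encounters first; a production system would break ties with predicted probabilities instead.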
|
16
|
Jiang J, Wang L, Fu H, Long E, Sun Y, Li R, Li Z, Zhu M, Liu Z, Chen J, Lin Z, Wu X, Wang D, Liu X, Lin H. Automatic classification of heterogeneous slit-illumination images using an ensemble of cost-sensitive convolutional neural networks. Ann Transl Med 2021;9:550. [PMID: 33987248] [DOI: 10.21037/atm-20-6635]
Abstract
Background Lens opacity seriously affects the visual development of infants. Slit-illumination images play an irreplaceable role in lens opacity detection; however, these images exhibit varied phenotypes with severe heterogeneity and complexity, particularly among pediatric cataracts. An effective computer-aided method is therefore urgently needed to automatically diagnose heterogeneous lens opacity and provide appropriate treatment recommendations in a timely manner. Methods We integrated three different deep learning networks and a cost-sensitive method into an ensemble learning architecture, yielding a model called CCNN-Ensemble [an ensemble of cost-sensitive convolutional neural networks (CNNs)] for automatic lens opacity detection. A total of 470 slit-illumination images of pediatric cataracts were used for training and for comparison between the CCNN-Ensemble model and conventional methods. Finally, we used two external datasets (132 independent test images and 79 Internet-based images) to further evaluate the model's generalizability and effectiveness. Results Experimental results and comparative analyses demonstrated that the proposed method was superior to conventional approaches and provided clinically meaningful performance on three grading indices of lens opacity: area (specificity and sensitivity: 92.00% and 92.31%), density (93.85% and 91.43%), and location (95.25% and 89.29%). Furthermore, comparable performance on the independent test dataset and the Internet-based images verified the effectiveness and generalizability of the model. Finally, we developed and deployed website-based automatic diagnosis software for pediatric cataract grading in ophthalmology clinics. Conclusions The CCNN-Ensemble method demonstrates higher specificity and sensitivity than conventional methods on multi-source datasets.
This study provides a practical strategy for heterogeneous lens opacity diagnosis and has the potential to be applied to the analysis of other medical images.
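The cost-sensitive ingredient of such an ensemble is typically a class-weighted loss; the sketch below shows a weighted cross-entropy, with weights and predicted probabilities that are illustrative assumptions rather than values from the paper.

```python
# Class-weighted (cost-sensitive) cross-entropy: errors on rarer or
# clinically costlier classes are penalised more heavily. The weights
# and predicted probabilities here are illustrative only.
import math

def weighted_cross_entropy(probs, true_class, class_weights):
    return -class_weights[true_class] * math.log(probs[true_class])

probs_common = [0.7, 0.2, 0.1]   # softmax output for one image
weights = [1.0, 2.0, 2.0]        # rarer opacity grades weighted higher
loss_common = weighted_cross_entropy(probs_common, 0, weights)
loss_rare = weighted_cross_entropy([0.2, 0.7, 0.1], 1, weights)
print(loss_common, loss_rare)    # the rare-class error costs twice as much
```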
Affiliation(s)
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Haoran Fu
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yibin Sun
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi'an, China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| |
|
17
|
Pratap T, Kokil P. Efficient network selection for computer-aided cataract diagnosis under noisy environment. Comput Methods Programs Biomed 2021;200:105927. [PMID: 33485073] [DOI: 10.1016/j.cmpb.2021.105927]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided cataract diagnosis (CACD) methods play a crucial role in the early detection of cataract. Existing CACD methods suffer performance degradation in the presence of noise in digital fundus retinal images. This lack of robustness is a serious concern, since even small noise levels may degrade cataract detection, and noise in fundus retinal images is unavoidable given the processes involved in acquisition and transmission. Hence, a CACD method that is robust to noise is required for accurate cataract diagnosis. METHODS In this paper, an efficient network-selection-based robust CACD method under additive white Gaussian noise (AWGN) is proposed. The presented method consists of a set of locally and globally trained independent support vector networks with features extracted at various noise levels; a suitable network is then selected based on the noise level present in the input image. Features are extracted automatically from the input fundus retinal images using a pre-trained convolutional neural network (CNN). RESULTS A good-quality fundus retinal image dataset was obtained from the EyePACS dataset using the natural image quality evaluator (NIQE) score. Synthetic noisy fundus retinal images were then generated from the good-quality images using the AWGN model for effective analysis. The analysis was carried out against existing CNN-based CACD methods at different noise levels, and the results show that the proposed method is more robust against AWGN than the existing CNN-based CACD methods. CONCLUSIONS The experimental results show that the proposed method outperforms existing methods in the literature under noise.
It can serve as a starting point for further research on robust CNN-based CACD methods.
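The two ingredients, synthetic AWGN corruption and noise-level-based network selection, can be sketched as follows, assuming images normalized to [0, 1] and hypothetical network labels and sigma values.

```python
# Sketch: synthetic AWGN corruption (to build noisy training sets) and
# selection of the network trained at the noise level closest to the one
# estimated for the input image. Sigma values and names are illustrative.
import numpy as np

def add_awgn(img, sigma, rng=None):
    """Corrupt a [0, 1]-normalised image with Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def select_network(estimated_sigma, networks):
    """networks: {training_sigma: model}; pick the nearest match."""
    return networks[min(networks, key=lambda s: abs(s - estimated_sigma))]

networks = {0.0: "net_clean", 0.05: "net_05", 0.10: "net_10"}
print(select_network(0.06, networks))  # → net_05
```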
Affiliation(s)
- Turimerla Pratap
- Department of Electronics and Communication Engineering, Indian Institute of Information Technology Design and Manufacturing, Kancheepuram, Chennai 600127, India
| | - Priyanka Kokil
- Department of Electronics and Communication Engineering, Indian Institute of Information Technology Design and Manufacturing, Kancheepuram, Chennai 600127, India.
| |
|
18
|
Li J, Wang P, Zhou Y, Liang H, Luan K. Different Machine Learning and Deep Learning Methods for the Classification of Colorectal Cancer Lymph Node Metastasis Images. Front Bioeng Biotechnol 2021;8:620257. [PMID: 33520971] [PMCID: PMC7841386] [DOI: 10.3389/fbioe.2020.620257]
Abstract
The classification of colorectal cancer (CRC) lymph node metastasis (LNM) is a vital clinical issue related to recurrence and the design of treatment plans. However, it remains unclear which method is most effective in automatically classifying CRC LNM. Hence, this study compared the performance of existing classification methods, i.e., machine learning, deep learning, and deep transfer learning, to identify the most effective one. A total of 3,364 samples (1,646 positive and 1,718 negative) from Harbin Medical University Cancer Hospital were collected. All patches were manually segmented by experienced radiologists, with the patch size determined by the lesion region to be intercepted. Two classes of global features and one class of local features were extracted from the patches and used in eight machine learning algorithms, while the other models used the raw data. The experiments showed that deep transfer learning was the most effective method, with an accuracy of 0.7583 and an area under the curve of 0.7941. Furthermore, to improve the interpretability of the deep learning and deep transfer learning models, classification heat maps were superimposed on the raw data to display the regions from which features were extracted. These findings are expected to promote the use of effective methods in CRC LNM detection and thus facilitate the design of proper treatment plans.
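The area-under-the-curve figure reported above is the rank statistic sketched below: the probability that a randomly chosen positive sample outscores a randomly chosen negative one. The scores and labels are toy values, not the study's data.

```python
# Rank-based AUC: probability that a random positive outscores a random
# negative (ties count half). Scores and labels below are toy values.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two of the three positives outrank the single negative → AUC = 2/3.
print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 1]))
```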
Affiliation(s)
- Jin Li
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
| | - Peng Wang
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
| | - Yang Zhou
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, China
| | - Hong Liang
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
| | - Kuan Luan
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
| |
|
19
|
Dense anatomical annotation of slit-lamp images improves the performance of deep learning for the diagnosis of ophthalmic disorders. Nat Biomed Eng 2020;4:767-777. [PMID: 32572198] [DOI: 10.1038/s41551-020-0577-y]
Abstract
The development of artificial intelligence algorithms typically demands abundant high-quality data. In medicine, the datasets that are required to train the algorithms are often collected for a single task, such as image-level classification. Here, we report a workflow for the segmentation of anatomical structures and the annotation of pathological features in slit-lamp images, and the use of the workflow to improve the performance of a deep-learning algorithm for diagnosing ophthalmic disorders. We used the workflow to generate 1,772 general classification labels, 13,404 segmented anatomical structures and 8,329 pathological features from 1,772 slit-lamp images. The algorithm that was trained with the image-level classification labels and the anatomical and pathological labels showed better diagnostic performance than the algorithm that was trained with only the image-level classification labels, performed similarly to three ophthalmologists across four clinically relevant retrospective scenarios and correctly diagnosed most of the consensus outcomes of 615 clinical reports in prospective datasets for the same four scenarios. The dense anatomical annotation of medical images may improve their use for automated classification and detection tasks.
|
20
|
Wu X, Liu L, Zhao L, Guo C, Li R, Wang T, Yang X, Xie P, Liu Y, Lin H. Application of artificial intelligence in anterior segment ophthalmic diseases: diversity and standardization. Ann Transl Med 2020;8:714. [PMID: 32617334] [PMCID: PMC7327317] [DOI: 10.21037/atm-20-976]
Abstract
Artificial intelligence (AI) based on machine learning (ML) and deep learning (DL) techniques has gained tremendous global interest. Recent studies have demonstrated the potential of AI systems to provide improved capability in various tasks, especially in the field of image recognition. As an image-centric subspecialty, ophthalmology has become one of the frontiers of AI research. Trained on optical coherence tomography, slit-lamp images, and even ordinary eye photographs, AI can achieve robust performance in the detection of glaucoma, corneal arcus, and cataracts, and AI models based on other forms of data have also performed satisfactorily. Nevertheless, several challenges to AI application in ophthalmology have arisen, including standardization of datasets, validation and applicability of AI models, and ethical issues. In this review, we summarize the state of the art in AI applications for anterior segment ophthalmic diseases, potential challenges in clinical implementation, and our prospects.
Affiliation(s)
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ting Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaonan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Peichen Xie
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
| |
|
21
|
Zhang Y, Li F, Yuan F, Zhang K, Huo L, Dong Z, Lang Y, Zhang Y, Wang M, Gao Z, Qin Z, Shen L. Diagnosing chronic atrophic gastritis by gastroscopy using artificial intelligence. Dig Liver Dis 2020;52:566-572. [PMID: 32061504] [DOI: 10.1016/j.dld.2019.12.146]
Abstract
BACKGROUND The sensitivity of endoscopy in diagnosing chronic atrophic gastritis is only 42%, and multipoint biopsy, despite being more accurate, is not always available. AIMS This study aimed to construct a convolutional neural network to improve the diagnostic rate of chronic atrophic gastritis. METHODS We collected 5,470 images of the gastric antrums of 1,699 patients and labeled them with their pathological findings; 3,042 images depicted atrophic gastritis and 2,428 did not. We designed and trained a convolutional neural network for chronic atrophic gastritis (CNN-CAG) to diagnose atrophic gastritis accurately, verified by five-fold cross-validation, and compared the diagnoses of the deep learning model with those of three experts. RESULTS The diagnostic accuracy, sensitivity, and specificity of the CNN-CAG model in diagnosing atrophic gastritis were 0.942, 0.945, and 0.940, respectively, all higher than those of the experts. The detection rates of mild, moderate, and severe atrophic gastritis were 93%, 95%, and 99%, respectively. CONCLUSION Chronic atrophic gastritis can be diagnosed from gastroscopic images using the CNN-CAG model. This may greatly reduce the burden on endoscopists, simplify diagnostic routines, and reduce costs for doctors and patients.
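The five-fold cross-validation used for verification partitions the dataset so that each fold serves once as the validation set; a minimal index-based sketch (the fold assignment logic is illustrative, not the study's code):

```python
# Sketch of k-fold cross-validation index partitioning: each fold is
# held out once for validation while the rest train the model.

def kfold_indices(n, k=5):
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(5470, k=5)   # 5,470 antrum images, as in the study
print([len(f) for f in folds])     # → [1094, 1094, 1094, 1094, 1094]
```

In practice the split is done per patient rather than per image so that images from one patient never appear in both training and validation folds.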
Affiliation(s)
- Yaqiong Zhang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Fengxia Li
- Department of Gastroenterology, Shanxi Provincial People's Hospital, Taiyuan, China.
| | - Fuqiang Yuan
- Baidu Online Network Technology (Beijing) Corporation, Beijing, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Lijuan Huo
- Department of Gastroenterology, The First Hospital of Shanxi Medical University, Taiyuan, China
| | - Zichen Dong
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yiming Lang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yapeng Zhang
- Fenyang College of Shanxi Medical University, Fenyang, China
| | - Meihong Wang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zenghui Gao
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zhenzhen Qin
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Leixue Shen
- School of Computer Science and Technology, Xidian University, Xi'an, China
| |
|
22
|
Tong Y, Lu W, Yu Y, Shen Y. Application of machine learning in ophthalmic imaging modalities. Eye Vis (Lond) 2020;7:22. [PMID: 32322599] [PMCID: PMC7160952] [DOI: 10.1186/s40662-020-00183-6]
Abstract
In clinical ophthalmology, a variety of image-based diagnostic techniques have begun to offer unprecedented insights into eye diseases, based on morphological datasets with millions of data points. Artificial intelligence (AI), inspired by the multilayered human neuronal system, has shown astonishing success in visual and auditory recognition tasks, analyzing digital data in a comprehensive, rapid, and non-invasive manner. Bioinformatics has become a particular focus in medical imaging, driven by enhanced computing power and cloud storage, novel algorithms, and the generation of data in massive quantities. Machine learning (ML) is an important branch of AI. The potential of ML to automatically pinpoint, identify, and grade pathological features in ocular diseases will empower ophthalmologists to provide high-quality diagnoses and facilitate personalized health care in the near future. This review offers perspectives on the origin, development, and applications of ML technology, particularly its applications in ophthalmic imaging modalities.
Affiliation(s)
- Yan Tong
- 1Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Wei Lu
- 1Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Yue Yu
- 1Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Yin Shen
- 1Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China.,2Medical Research Institute, Wuhan University, Wuhan, Hubei China
| |
|
23
|
A human-in-the-loop deep learning paradigm for synergic visual evaluation in children. Neural Netw 2020;122:163-173. [DOI: 10.1016/j.neunet.2019.10.003]
|
24
|
Zhang X, Zhang K, Lin D, Zhu Y, Chen C, He L, Guo X, Chen K, Wang R, Liu Z, Wu X, Long E, Huang K, He Z, Liu X, Lin H. Artificial intelligence deciphers codes for color and odor perceptions based on large-scale chemoinformatic data. Gigascience 2020;9:giaa011. [PMID: 32101298] [PMCID: PMC7043059] [DOI: 10.1093/gigascience/giaa011]
Abstract
BACKGROUND Color vision is the ability to detect, distinguish, and analyze the wavelength distributions of light independent of the total intensity. It mediates the interaction between an organism and its environment in multiple important respects. However, the physicochemical basis of color coding has not been explored completely, and how color perception is integrated with other sensory input, typically odor, is unclear. RESULTS Here, we developed an artificial intelligence platform to train algorithms for distinguishing color and odor based on the large-scale physicochemical features of 1,267 and 598 structurally diverse molecules, respectively. The predictive accuracies achieved using the random forest and deep belief network for the prediction of color were 100% and 95.23% ± 0.40% (mean ± SD), respectively. The predictive accuracies achieved using the random forest and deep belief network for the prediction of odor were 93.40% ± 0.31% and 94.75% ± 0.44% (mean ± SD), respectively. Twenty-four physicochemical features were sufficient for the accurate prediction of color, while 39 physicochemical features were sufficient for the accurate prediction of odor. A positive correlation between the color-coding and odor-coding properties of the molecules was predicted, and a group of descriptors was found to interlink prominently in color and odor perceptions. CONCLUSIONS Our random forest model and deep belief network accurately predicted the colors and odors of structurally diverse molecules. These findings extend our understanding of the molecular and structural basis of color vision and reveal the interrelationship between color and odor perceptions in nature.
Affiliation(s)
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
| | - Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
- School of Computer Science and Technology, Xidian University, Tai Bai South Road 2#, Xi'an 710000, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
| | - Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, 1120 NW 14th Street, Miami, FL 33136, USA
| | - Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, 1120 NW 14th Street, Miami, FL 33136, USA
| | - Lin He
- School of Computer Science and Technology, Xidian University, Tai Bai South Road 2#, Xi'an 710000, China
| | - Xusen Guo
- Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education School of Data and Computer Science, Sun Yat-Sen University, Wai Huan East Road 132#, Guangzhou 510000, China
| | - Kexin Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
| | - Ruixin Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
| | - Kai Huang
- Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education School of Data and Computer Science, Sun Yat-Sen University, Wai Huan East Road 132#, Guangzhou 510000, China
| | - Zhiqiang He
- Key Laboratory of Universal Wireless Communications, Beijing University of Posts and Telecommunications, West Tu Cheng Road 10#, Beijing 100876, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Tai Bai South Road 2#, Xi'an 710000, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
- Center of Precision Medicine, Sun Yat-sen University, Xin Guang West Road 135#, Guangzhou 510080, China
| |
Collapse
|
25
|
Yang J, Zhang K, Fan H, Huang Z, Xiang Y, Yang J, He L, Zhang L, Yang Y, Li R, Zhu Y, Chen C, Liu F, Yang H, Deng Y, Tan W, Deng N, Yu X, Xuan X, Xie X, Liu X, Lin H. Development and validation of deep learning algorithms for scoliosis screening using back images. Commun Biol 2019; 2:390. [PMID: 31667364 PMCID: PMC6814825 DOI: 10.1038/s42003-019-0635-8] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2019] [Accepted: 09/24/2019] [Indexed: 02/08/2023] Open
Abstract
Adolescent idiopathic scoliosis is the most common spinal disorder in adolescents, with a worldwide prevalence of 0.5-5.2%. The traditional methods for scoliosis screening are easily accessible but lead to unnecessary referrals and radiography exposure because of their low positive predictive values. The application of deep learning algorithms has the potential to reduce unnecessary referrals and costs in scoliosis screening. Here, we developed and validated deep learning algorithms for automated scoliosis screening using unclothed back images. The accuracies of the algorithms were superior to those of human specialists in detecting scoliosis, detecting cases with a curve ≥20°, and grading severity, for both the binary classifications and the four-class classification. Our approach could potentially be applied in routine scoliosis screening and in periodic follow-up of pretreatment cases without radiation exposure.
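The screening-economics argument above (low positive predictive value driving unnecessary referrals) can be illustrated with a toy calculation; all numbers below are illustrative assumptions, not figures from the study:

```python
# Sketch: how low positive predictive value (PPV) translates into
# unnecessary referrals in scoliosis screening. All counts are
# hypothetical, chosen only to illustrate the arithmetic.

def ppv(true_positives, false_positives):
    """Positive predictive value: fraction of referrals that are true cases."""
    return true_positives / (true_positives + false_positives)

# Hypothetical screening round: 1000 students, 2% prevalence.
prevalence, n = 0.02, 1000
cases = int(prevalence * n)                    # 20 true cases

# Traditional visual screening: decent sensitivity, many false alarms.
sens_trad, false_alarms_trad = 0.90, 120
ppv_trad = ppv(int(sens_trad * cases), false_alarms_trad)

# A more specific automated screen with the same sensitivity
# but far fewer false alarms raises the PPV substantially.
sens_dl, false_alarms_dl = 0.90, 20
ppv_dl = ppv(int(sens_dl * cases), false_alarms_dl)

print(round(ppv_trad, 3), round(ppv_dl, 3))
```

With these made-up numbers, most traditional referrals are false alarms, which is exactly the cost (referrals plus radiography) the deep learning screen aims to cut.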
Collapse
Affiliation(s)
- Junlin Yang
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- School of Computer Science and Technology, Xidian University, Xi’an, Shaanxi China
| | - Hengwei Fan
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Zifang Huang
- Department of Spine Surgery, the 1st Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong China
| | - Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
| | - Jingfan Yang
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Lin He
- School of Computer Science and Technology, Xidian University, Xi’an, Shaanxi China
| | - Lei Zhang
- School of Computer Science and Technology, Xidian University, Xi’an, Shaanxi China
| | - Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
| | - Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
| | - Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL USA
| | - Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL USA
| | - Fan Liu
- School of Computer Science and Technology, Xidian University, Xi’an, Shaanxi China
| | - Haoqing Yang
- School of Computer Science and Technology, Xidian University, Xi’an, Shaanxi China
| | - Yaolong Deng
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Weiqing Tan
- Health Promotion Centre for Primary and Secondary Schools of Guangzhou Municipality, Guangzhou, Guangdong China
| | - Nali Deng
- Health Promotion Centre for Primary and Secondary Schools of Guangzhou Municipality, Guangzhou, Guangdong China
| | - Xuexiang Yu
- Department of Sports and Arts, Guangzhou Sport University, Guangzhou, Guangdong China
| | - Xiaoling Xuan
- Xinmiao Scoliosis Prevention of Guangdong Province, Guangzhou, Guangdong China
| | - Xiaofeng Xie
- Xinmiao Scoliosis Prevention of Guangdong Province, Guangzhou, Guangdong China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi’an, Shaanxi China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- Center for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong China
| |
Collapse
|
26
|
Zhang K, Pan Q, Yu D, Wang L, Liu Z, Li X, Liu X. Systemically modeling the relationship between climate change and wheat aphid abundance. Sci Total Environ 2019; 674:392-400. [PMID: 31005841 DOI: 10.1016/j.scitotenv.2019.04.143] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Revised: 03/21/2019] [Accepted: 04/10/2019] [Indexed: 06/09/2023]
Abstract
Climate change influences all living beings. Wheat aphids deplete the nutritional value of wheat and reduce wheat production under a changing climate. In this study, we attempt to explain the ecological mechanisms by which climate change affects wheat aphids by simulating the relationship between climate and wheat aphid abundance, which will not only improve wheat aphid forecasting and the effectiveness of prevention and treatment, but also help mitigate food crises. Fuzzy cognitive maps (FCMs) are an effective tool for portraying complex systems. Using Sitobion avenae and climatological data collected in China, we used differential evolution (DE) algorithms to construct FCM models that directly illustrate the effect of climate on wheat aphid abundance. The relationships between climate and wheat aphids at different growth stages (I-III instar larvae, IV instar larvae with wings, IV instar larvae without wings, winged adults, wingless adults) were established. The analysis of the FCM models shows that temperature has the strongest positive influence on wheat aphids. Moreover, these models can be used to quantify each climate factor and the abundance of wheat aphids. Furthermore, two overall relationship models between climate and wheat aphids were constructed, and the experimental results show that natural enemies and the highest daily temperature affect wheat aphids most, exerting negative and positive impacts, respectively. Some interrelationships among wheat aphids at all growth stages, and the internal relationships among climate factors, were also revealed.
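As a rough sketch of the modeling idea, a fuzzy cognitive map iterates concept activations through a weighted, sigmoid-squashed update until the map settles; the three-concept map and all weights below are illustrative assumptions, not the study's fitted model:

```python
import math

def fcm_step(activations, weights):
    """One synchronous update of a fuzzy cognitive map.
    weights[j][i] is the causal influence of concept j on concept i."""
    n = len(activations)
    nxt = []
    for i in range(n):
        s = activations[i] + sum(weights[j][i] * activations[j]
                                 for j in range(n) if j != i)
        nxt.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid squashing into (0, 1)
    return nxt

# Toy 3-concept map (all values illustrative):
# concept 0 = temperature, 1 = natural enemies, 2 = aphid abundance.
weights = [
    [0.0, 0.0,  0.6],   # temperature -> aphids (positive influence)
    [0.0, 0.0, -0.7],   # natural enemies -> aphids (negative influence)
    [0.0, 0.0,  0.0],
]
state = [0.8, 0.3, 0.5]
for _ in range(20):     # iterate until the map settles to a fixed point
    state = fcm_step(state, weights)
print([round(v, 3) for v in state])
```

In the study, the weight matrix is not hand-set as here but learned by differential evolution so that the map's steady states reproduce the observed aphid abundances.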
Collapse
Affiliation(s)
- Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China
| | - Qiong Pan
- School of Telecommunications Engineering, Xidian University, Xi'an 710071, China; School of Science, Northwest A&F University, Yangling, Shaanxi 712100, China
| | - Deying Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China; Institute of Software Engineering, Xidian University, Xi'an 710071, China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Xue Li
- School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China; Institute of Software Engineering, Xidian University, Xi'an 710071, China.
| |
Collapse
|
27
|
Zhang K, Liu X, Jiang J, Li W, Wang S, Liu L, Zhou X, Wang L. Prediction of postoperative complications of pediatric cataract patients using data mining. J Transl Med 2019; 17:2. [PMID: 30602368 PMCID: PMC6317183 DOI: 10.1186/s12967-018-1758-2] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Accepted: 12/21/2018] [Indexed: 12/31/2022] Open
Abstract
BACKGROUND The common treatment for pediatric cataracts is to replace the cloudy lens with an artificial one. However, patients may suffer complications (severe lens proliferation into the visual axis and abnormally high intraocular pressure; SLPVA and AHIP) within 1 year after surgery, and the factors causing these complications are unknown. METHODS The Apriori algorithm is employed to find association rules related to the complications. We use random forest (RF) and naïve Bayes (NB) classifiers to predict the complications, with datasets preprocessed by SMOTE (synthetic minority oversampling technique). Genetic feature selection is used to identify the features truly related to the complications. RESULTS Average classification accuracies in the three binary classification problems are over 75%. Second, the relationship between classification performance and the number of random forest trees is studied. The results show that, except for gender and age at surgery (AS), all attributes are related to the complications; except for secondary IOL placement, operation mode, AS, and area of cataracts, all attributes are related to SLPVA; and except for gender, operation mode, and laterality, all attributes are related to AHIP. Next, the association rules related to the complications are mined. An additional 50 records were then used to test the performance of RF and NB, both of which achieved accuracies of over 65% on the three classification problems. Finally, we developed a web server to assist doctors. CONCLUSIONS The postoperative complications of pediatric cataract patients can be predicted, the factors related to the complications can be identified, and the mined association rules about the complications can serve as a reference for doctors.
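The SMOTE preprocessing step mentioned in the methods can be sketched as nearest-neighbour interpolation between minority-class samples; this is a simplified illustration of the technique, not the authors' implementation:

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate synthetic minority samples by interpolating between a
    sample and its nearest minority neighbour (simplified SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest neighbour by squared Euclidean distance (excluding a itself)
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        lam = rng.random()  # random point on the segment between a and b
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return synthetic

# Toy minority class (e.g., the rare "complication" cases), 2 features each.
minority = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8)]
new_pts = smote_like(minority, 4)
print(len(new_pts))
```

Oversampling like this balances the classes before training RF and NB, so the rare complication cases are not simply outvoted by the majority class.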
Collapse
Affiliation(s)
- Kai Zhang
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China; Institute of Software Engineering, Xidian University, Xi'an, 710071, China; School of Software, Xidian University, Xi'an, 710071, China
| | - Jiewei Jiang
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
| | - Wangting Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
| | - Shuai Wang
- School of Software, Xidian University, Xi'an, 710071, China
| | - Lin Liu
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China
| | - Xiaojing Zhou
- School of Computer Science, Northwestern Polytechnical University, Xi'an, 710072, China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China; Institute of Software Engineering, Xidian University, Xi'an, 710071, China; School of Software, Xidian University, Xi'an, 710071, China
| |
Collapse
|
28
|
Jiang J, Liu X, Liu L, Wang S, Long E, Yang H, Yuan F, Yu D, Zhang K, Wang L, Liu Z, Wang D, Xi C, Lin Z, Wu X, Cui J, Zhu M, Lin H. Predicting the progression of ophthalmic disease based on slit-lamp images using a deep temporal sequence network. PLoS One 2018; 13:e0201142. [PMID: 30063738 PMCID: PMC6067742 DOI: 10.1371/journal.pone.0201142] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Accepted: 06/12/2018] [Indexed: 11/21/2022] Open
Abstract
Ocular images play an essential role in ophthalmology. Current research mainly focuses on computer-aided diagnosis using slit-lamp images; however, few studies have attempted to predict the progression of ophthalmic disease. An effective approach to prediction can help plan treatment strategies and provide early warning for patients. In this study, we present an end-to-end temporal sequence network (TempSeq-Net) to automatically predict the progression of ophthalmic disease, which employs a convolutional neural network (CNN) to extract high-level features from consecutive slit-lamp images and applies a long short-term memory (LSTM) method to mine the temporal relationships among those features. First, we comprehensively compare six potential combinations of CNNs and LSTM (or recurrent neural network) in terms of effectiveness and efficiency, to obtain the optimal TempSeq-Net model. Second, we analyze the impact of sequence length on the model's performance, which helps to evaluate stability and validity and to determine the appropriate range of sequence lengths. The quantitative results demonstrate that our proposed model offers exceptional performance, with mean accuracy of 92.22%, sensitivity of 88.55%, specificity of 94.31%, and AUC of 97.18%. Moreover, the model achieves real-time prediction, taking only 27.6 ms for a single sequence, and simultaneously predicts sequence data with lengths of 3-5. Our study provides a promising strategy for predicting the progression of ophthalmic disease and has the potential to be applied in other medical fields.
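The TempSeq-Net idea, per-frame CNN features aggregated by an LSTM over the image sequence, can be sketched with a toy scalar-feature LSTM step; all weights and feature values below are illustrative assumptions, not the trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step for scalar features (toy dimensions).
    W maps each gate name to (input weight, hidden weight, bias)."""
    gates = {g: sigmoid(W[g][0] * x + W[g][1] * h + W[g][2])
             for g in ("i", "f", "o")}                 # input/forget/output gates
    c_tilde = math.tanh(W["c"][0] * x + W["c"][1] * h + W["c"][2])
    c = gates["f"] * c + gates["i"] * c_tilde          # update cell memory
    h = gates["o"] * math.tanh(c)                      # new hidden state
    return h, c

# Stand-in for CNN outputs: one scalar feature per slit-lamp frame
# (a real model would emit a high-dimensional feature vector per frame).
frame_features = [0.2, 0.5, 0.9]
W = {"i": (1.0, 0.5, 0.0), "f": (1.0, 0.5, 1.0),
     "o": (1.0, 0.5, 0.0), "c": (1.0, 0.5, 0.0)}
h = c = 0.0
for x in frame_features:            # aggregate the sequence, frame by frame
    h, c = lstm_step(x, h, c, W)
progression_score = sigmoid(h)      # final prediction from the last state
print(round(progression_score, 3))
```

The point of the recurrence is that the final score depends on the whole ordered sequence of frames, not on any single image in isolation.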
Collapse
Affiliation(s)
- Jiewei Jiang
- School of Computer Science and Technology, Xidian University, Xi’an, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi’an, China
- School of Software, Xidian University, Xi’an, China
| | - Lin Liu
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Shuai Wang
- School of Software, Xidian University, Xi’an, China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haoqing Yang
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Fuqiang Yuan
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Deying Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, Xi’an, China
- School of Software, Xidian University, Xi’an, China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Changzun Xi
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jiangtao Cui
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi’an, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|
29
|
Jiang J, Liu X, Zhang K, Long E, Wang L, Li W, Liu L, Wang S, Zhu M, Cui J, Liu Z, Lin Z, Li X, Chen J, Cao Q, Li J, Wu X, Wang D, Wang J, Lin H. Automatic diagnosis of imbalanced ophthalmic images using a cost-sensitive deep convolutional neural network. Biomed Eng Online 2017; 16:132. [PMID: 29157240 PMCID: PMC5697161 DOI: 10.1186/s12938-017-0420-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Accepted: 11/07/2017] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Ocular images play an essential role in ophthalmological diagnosis. An imbalanced dataset is an inevitable issue in automated ocular disease diagnosis; the scarcity of positive samples tends to result in the misdiagnosis of severe patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological datasets is therefore crucial. METHODS In this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation. The localized zones are then fed into the CS-ResCNN to extract high-level features for subsequent use in automatic diagnosis. Second, the impact of the cost factors on the CS-ResCNN is further analyzed using a grid-search procedure to verify that our proposed system is robust and efficient. RESULTS Qualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms other conventional approaches and offers exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%), and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared with the native CNN method. CONCLUSION Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical applications.
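The cost-sensitive idea behind the CS-ResCNN, weighting errors on the rare positive class more heavily in the training loss, can be sketched with a weighted cross-entropy; the cost values below are illustrative assumptions, not the grid-searched factors from the paper:

```python
import math

def cost_sensitive_ce(prob_pos, label, cost_pos=5.0, cost_neg=1.0):
    """Weighted cross-entropy: misclassifying a (rare) positive case
    is penalized more heavily than misclassifying a negative one."""
    eps = 1e-12  # numerical guard against log(0)
    if label == 1:
        return -cost_pos * math.log(prob_pos + eps)
    return -cost_neg * math.log(1.0 - prob_pos + eps)

# Same amount of probability mass on the wrong side (0.3) in both cases:
loss_missed_pos = cost_sensitive_ce(0.3, 1)   # rare disease case missed
loss_false_alarm = cost_sensitive_ce(0.7, 0)  # healthy eye flagged

print(round(loss_missed_pos, 3), round(loss_false_alarm, 3))
```

Because the positive-class cost factor scales the gradient for the minority class, training is pushed toward higher sensitivity, which is the improvement the paper reports over the cost-blind CNN.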
Collapse
Affiliation(s)
- Jiewei Jiang
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
- School of Software, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
- School of Software, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Wangting Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Lin Liu
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Shuai Wang
- School of Software, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi’an, 710071 China
| | - Jiangtao Cui
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Xiaoyan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Qianzhong Cao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Jing Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Jinghui Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| |
Collapse
|
30
|
Abstract
Classification problems from different domains vary in complexity, size, and the imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is constant interest in developing novel, robust, and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with a deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models on all the considered datasets, reducing the F1-score error by between 100% and 2.25% relative to the best-performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.
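The core OmniGA ingredient, a genetic algorithm searching over decision-node parameters, can be sketched on a one-node "tree" with a single split threshold; this toy GA illustrates the technique only and is not the OmniGA implementation:

```python
import random

def fitness(threshold, data):
    """Accuracy of a one-node 'tree': predict class 1 iff x >= threshold."""
    return sum((x >= threshold) == y for x, y in data) / len(data)

def tiny_ga(data, pop_size=20, generations=30, seed=0):
    """Minimal genetic algorithm over split thresholds: truncation
    selection, blend crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, data), reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0.0, 0.05)  # crossover + mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=lambda t: fitness(t, data))

# Toy separable data: label is 1 iff the feature is >= 0.5.
data = [(x / 100.0, int(x / 100.0 >= 0.5)) for x in range(100)]
best = tiny_ga(data)
print(fitness(best, data))
```

OmniGA applies the same evolutionary loop, in parallel, to far richer node models (omnivariate splits combined with deep and ensemble learners) rather than a single scalar threshold.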
Collapse
Affiliation(s)
- Arturo Magana-Mora
- King Abdullah University of Science and Technology (KAUST), Computational Bioscience Research Center, Thuwal, 23955-6900, Saudi Arabia
| | - Vladimir B Bajic
- King Abdullah University of Science and Technology (KAUST), Computational Bioscience Research Center, Thuwal, 23955-6900, Saudi Arabia.
| |
Collapse
|