201
Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. [PMID: 34415271] [PMCID: PMC10637344] [DOI: 10.1097/wno.0000000000001358]
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC)) were used for evaluation. RESULTS During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department; if successful, this could be beneficial, because current practice patterns and training trends predict a shortage of neuro-ophthalmologists, and of ophthalmologists in general, in the near future.
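The performance measures quoted above (sensitivity, specificity, AUC-ROC) have simple closed forms; as a reminder of what is being reported, here is a minimal pure-Python sketch on toy data (illustrative only, not the study's code or data):

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc_roc(labels, scores):
    """AUC via the rank statistic: the probability that a randomly chosen
    abnormal case scores higher than a randomly chosen normal one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = abnormal disc, 0 = normal disc
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]
sens, spec = sensitivity_specificity(labels, preds)
auc = auc_roc(labels, scores)
```

Thresholding the scores trades sensitivity against specificity, which is why the abstract reports the AUC alongside a single operating point.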
Affiliation(s)
- T Y Alvin Liu
- Department of Ophthalmology (TYAL, NRM), Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering (JW), Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare (HZ, MU), Johns Hopkins University, Baltimore, Maryland; Department of Radiology (PHY, FKH), Johns Hopkins University, Baltimore, Maryland; Singapore Eye Research Institute (DSWT), Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore ; Department of Ophthalmology (PSS), University of Colorado School of Medicine, Aurora, Colorado; and Department of Ophthalmology (DM), Byers Eye Institute, Stanford University, Palo Alto, California
202
Foo LL, Ng WY, Lim GYS, Tan TE, Ang M, Ting DSW. Artificial intelligence in myopia: current and future trends. Curr Opin Ophthalmol 2021; 32:413-424. [PMID: 34310401] [DOI: 10.1097/icu.0000000000000791]
Abstract
PURPOSE OF REVIEW Myopia is one of the leading causes of visual impairment, with a projected increase in prevalence globally. One potential approach to address myopia and its complications is early detection and treatment. However, current healthcare systems may not be able to cope with the growing burden. Digital technological solutions such as artificial intelligence (AI) have emerged as a potential adjunct for myopia management. RECENT FINDINGS There are currently four significant domains of AI in myopia: machine learning (ML), deep learning (DL), genetics, and natural language processing (NLP). ML has been demonstrated to be a useful adjunct for myopia prediction and for biometry in cataract surgery in highly myopic individuals. DL techniques, particularly convolutional neural networks, have been applied to various image-related diagnostic and predictive solutions. Applications of AI in genomics and NLP appear to be at a nascent stage. SUMMARY Current AI research is mainly focused on disease classification and prediction in myopia. Through greater collaborative research, we envision AI will play an increasingly critical role in big data analysis by aggregating a greater variety of parameters, including genomic and environmental factors. This may enable the development of generalizable adjunctive DL systems that could help realize predictive and individualized precision medicine for myopic patients.
Affiliation(s)
- Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Wei Yan Ng
- Singapore National Eye Centre, Singapore Eye Research Institute
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Tien-En Tan
- Singapore National Eye Centre, Singapore Eye Research Institute
- Marcus Ang
- Singapore National Eye Centre, Singapore Eye Research Institute
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
203
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. [PMID: 34324454] [PMCID: PMC10276657] [DOI: 10.1097/icu.0000000000000794]
Abstract
PURPOSE OF REVIEW The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information, and the low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to introduce how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GAN-produced images. RECENT FINDINGS Image synthesis is the most relevant function of GANs for the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate GAN outputs. SUMMARY Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation with as-yet unclear relevance for ophthalmology.
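For readers new to the framework surveyed here: a GAN trains a generator against a discriminator under a minimax objective, with the discriminator maximizing log D(x) + log(1 - D(G(z))) and the generator pushing D(G(z)) toward 1. A toy numeric sketch of the two loss terms, using hypothetical discriminator outputs rather than a real network:

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy form of the GAN discriminator objective:
    -[log D(x) + log(1 - D(G(z)))], each term averaged over its batch."""
    return (-sum(math.log(p) for p in d_real) / len(d_real)
            - sum(math.log(1 - p) for p in d_fake) / len(d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: -log D(G(z)), averaged over the batch."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)

# Hypothetical discriminator outputs on real and synthesized fundus images
d_real = [0.9, 0.8]   # D should push these toward 1
d_fake = [0.3, 0.1]   # D should push these toward 0; G pushes them toward 1
d_l = discriminator_loss(d_real, d_fake)
g_l = generator_loss(d_fake)
```

The large generator loss here reflects a discriminator that easily spots the fakes; training alternates the two updates until neither side can improve.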
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore
- Gilbert Lim
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Wei Yan Ng
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Pearse A. Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Leopold Schmetterer
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE)
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Department of Clinical Pharmacology
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Tien Yin Wong
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
204
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682] [PMCID: PMC8437147] [DOI: 10.3389/fmed.2021.710329]
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We report a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology in ophthalmology.
Affiliation(s)
- Raffaele Nuzzi
- Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy
205
Nikolaidou A, Tsaousis KT. Teleophthalmology and Artificial Intelligence As Game Changers in Ophthalmic Care After the COVID-19 Pandemic. Cureus 2021; 13:e16392. [PMID: 34408945] [PMCID: PMC8363234] [DOI: 10.7759/cureus.16392]
Abstract
The current COVID-19 pandemic has created a sudden demand for telemedicine due to quarantine and travel restrictions. The exponential increase in the use of telemedicine is expected to affect ophthalmology drastically. The aim of this review is to discuss the utility, effectiveness, and challenges of new teleophthalmology tools for eyecare delivery, as well as their implementation and possible facilitation with artificial intelligence. We used the terms "teleophthalmology," "telemedicine and COVID-19," "retinal diseases and telemedicine," "virtual ophthalmology," "cost effectiveness of teleophthalmology," "pediatric teleophthalmology," "artificial intelligence and ophthalmology," "glaucoma and teleophthalmology," and "teleophthalmology limitations" in the PubMed database and selected articles published between 2015 and 2020. The initial search returned 321 relevant articles. After meticulous screening, 103 published manuscripts were included and used as our references. Teleophthalmology is showing great potential for the future of ophthalmological care, benefiting both patients and ophthalmologists in times of pandemics. The spectrum of eye diseases that could benefit from teleophthalmology is wide, including mostly retinal diseases such as diabetic retinopathy, retinopathy of prematurity, and age-related macular degeneration, but also glaucoma and anterior segment conditions. Simultaneously, artificial intelligence provides ways to implement teleophthalmology more easily and with better outcomes, contributing a significant change factor for ophthalmology practice after the COVID-19 pandemic.
Affiliation(s)
- Anna Nikolaidou
- Ophthalmology, Aristotle University of Thessaloniki, Thessaloniki, GRC
206
Cen LP, Ji J, Lin JW, Ju ST, Lin HJ, Li TP, Wang Y, Yang JF, Liu YF, Tan S, Tan L, Li D, Wang Y, Zheng D, Xiong Y, Wu H, Jiang J, Wu Z, Huang D, Shi T, Chen B, Yang J, Zhang X, Luo L, Huang C, Zhang G, Huang Y, Ng TK, Chen H, Chen W, Pang CP, Zhang M. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nat Commun 2021; 12:4828. [PMID: 34376678] [PMCID: PMC8355164] [DOI: 10.1038/s41467-021-25138-w]
Abstract
Retinal fundus diseases can lead to irreversible visual impairment without timely diagnosis and appropriate treatment. Single-disease deep learning algorithms have been developed for the detection of diabetic retinopathy, age-related macular degeneration, and glaucoma. Here, we developed a deep learning platform (DLP) capable of detecting multiple common referable fundus diseases and conditions (39 classes) by using 249,620 fundus images marked with 275,543 labels from heterogeneous sources. Our DLP achieved a frequency-weighted average F1 score of 0.923, sensitivity of 0.978, specificity of 0.996, and area under the receiver operating characteristic curve (AUC) of 0.9984 for multi-label classification in the primary test dataset, reaching the average level of retina specialists. External multihospital testing, public-dataset testing, and a tele-reading application also showed high performance for the detection of multiple retinal diseases and conditions. These results indicate that our DLP can be applied for retinal fundus disease triage, especially in remote areas around the world.
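The frequency-weighted average F1 reported above weights each class's F1 by its number of true instances, so common conditions dominate the average; a minimal sketch with made-up per-class counts (not the paper's data):

```python
def f1(tp, fp, fn):
    """Per-class F1 = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def weighted_f1(per_class):
    """Frequency-weighted average F1: each class's F1 weighted by its
    number of true instances (tp + fn), i.e. its label frequency."""
    total = sum(tp + fn for tp, fp, fn in per_class)
    return sum(f1(tp, fp, fn) * (tp + fn) for tp, fp, fn in per_class) / total

# Toy 3-class multi-label counts: (tp, fp, fn) per class
counts = [(90, 5, 10), (40, 10, 10), (8, 2, 2)]
score = weighted_f1(counts)
```

In multi-label settings each image contributes independently to every class's counts, which is why the 249,620 images can carry 275,543 labels.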
Affiliation(s)
- Ling-Ping Cen, Jian-Wei Lin, Si-Tong Ju, Hong-Jie Lin, Tai-Ping Li, Yun Wang, Jian-Feng Yang, Yu-Fen Liu, Shaoying Tan, Li Tan, Dongjie Li, Yifan Wang, Dezhi Zheng, Yongqun Xiong, Hanfu Wu, Jingjing Jiang, Zhenggen Wu, Dingguo Huang, Tingkun Shi, Binyao Chen, Jianling Yang, Xiaoling Zhang, Li Luo, Chukai Huang, Guihua Zhang, Yuqiang Huang, Haoyu Chen, Weiqi Chen, Mingzhi Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jie Ji
- Network & Information Centre, Shantou University, Shantou, Guangdong, China; Shantou University Medical College, Shantou, Guangdong, China; XuanShi Med Tech (Shanghai) Company Limited, Shanghai, China
- Tsz Kin Ng
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China; Shantou University Medical College, Shantou, Guangdong, China; Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
- Chi Pui Pang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China; Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
207
Accuracy of Deep Learning Algorithms for the Diagnosis of Retinopathy of Prematurity by Fundus Images: A Systematic Review and Meta-Analysis. J Ophthalmol 2021; 2021:8883946. [PMID: 34394982] [PMCID: PMC8363465] [DOI: 10.1155/2021/8883946]
Abstract
Background Retinopathy of prematurity (ROP) occurs in preterm infants and may contribute to blindness. Deep learning (DL) models have been used for ophthalmologic diagnoses. We performed a systematic review and meta-analysis of published evidence to summarize and evaluate the diagnostic accuracy of DL algorithms for ROP by fundus images. Methods We searched PubMed, EMBASE, Web of Science, and the Institute of Electrical and Electronics Engineers Xplore Digital Library on June 13, 2021, for studies using a DL algorithm to distinguish individuals with ROP of different grades, which provided accuracy measurements. The pooled sensitivity and specificity values and the area under the curve (AUC) of summary receiver operating characteristic (SROC) curves summarized overall test performance. The performances in validation and test datasets were assessed together and separately. Subgroup analyses were conducted by ROP definition and grade. Threshold and nonthreshold effects were tested to assess biases and evaluate accuracy factors associated with DL models. Results Nine studies with fifteen classifiers were included in our meta-analysis. A total of 521,586 objects were applied to DL models. For the combined validation and test datasets in each study, the pooled sensitivity and specificity were 0.953 (95% confidence interval (CI): 0.946-0.959) and 0.975 (0.973-0.977), respectively, and the AUC was 0.984 (0.978-0.989). For the validation dataset and test dataset, the AUC was 0.977 (0.968-0.986) and 0.987 (0.982-0.992), respectively. In the subgroup analyses of ROP vs. normal and differentiation of two ROP grades, the AUC was 0.990 (0.944-0.994) and 0.982 (0.964-0.999), respectively. Conclusions Our study shows that DL models can play an essential role in detecting and grading ROP with high sensitivity, specificity, and repeatability. The application of a DL-based automated system may improve ROP screening and diagnosis in the future.
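Pooling per-study sensitivities as in this meta-analysis is commonly done on the logit scale; the sketch below is a simplified fixed-effect inverse-variance version with illustrative counts, not the bivariate/SROC model such reviews actually fit:

```python
import math

def pooled_logit(estimates):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    Each item is (events, total), e.g. (true positives, diseased) for sensitivity."""
    num = den = 0.0
    for events, total in estimates:
        p = events / total
        logit = math.log(p / (1 - p))
        var = 1.0 / events + 1.0 / (total - events)  # approx. variance of the logit
        w = 1.0 / var
        num += w * logit
        den += w
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Illustrative per-study (true positives, diseased) counts
sens = pooled_logit([(95, 100), (180, 190), (45, 50)])
```

The logit transform keeps the pooled estimate inside (0, 1) and gives larger studies, whose logits have smaller variance, proportionally more weight.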
208
Nazir T, Nawaz M, Rashid J, Mahum R, Masood M, Mehmood A, Ali F, Kim J, Kwon HY, Hussain A. Detection of Diabetic Eye Disease from Retinal Images Using a Deep Learning Based CenterNet Model. Sensors 2021; 21:5283. [PMID: 34450729] [PMCID: PMC8398326] [DOI: 10.3390/s21165283]
Abstract
Diabetic retinopathy (DR) is an eye disease that alters the blood vessels of a person suffering from diabetes. Diabetic macular edema (DME) occurs when DR affects the macula, causing fluid accumulation there. Current screening systems require experts to manually analyze images to recognize diseases. However, due to the challenging nature of the screening method and the lack of trained human resources, devising effective screening-oriented treatment is an expensive task. Automated systems attempt to cope with these challenges, but existing methods do not generalize well to multiple diseases and real-world scenarios. To solve these issues, we propose a new method comprising two main steps. The first involves dataset preparation and feature extraction; the second improves a custom deep learning-based CenterNet model trained for eye disease classification. Initially, we generate annotations for suspected samples to locate the precise region of interest, while the other part of the proposed solution trains the CenterNet model over the annotated images. Specifically, we use DenseNet-100 as the feature extractor, on which the one-stage detector, CenterNet, is employed to localize and classify disease lesions. We evaluated our method on the challenging APTOS-2019 and IDRiD datasets and attained average accuracies of 97.93% and 98.10%, respectively. We also performed cross-dataset validation with the benchmark EyePACS and DIARETDB1 datasets. Both qualitative and quantitative results demonstrate that our proposed approach outperforms state-of-the-art methods owing to the more effective localization power of CenterNet, as it can easily recognize small lesions and cope with overfitting to the training data. Our proposed framework is proficient at correctly locating and classifying disease lesions. In comparison to existing DR and DME classification approaches, our method can extract representative key points from low-intensity and noisy images and accurately classify them. Hence, our approach can play an important role in the automated detection and recognition of DR and DME lesions.
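CenterNet, used above as the one-stage detector, represents each lesion by a peak in a predicted center heatmap and decodes detections as local maxima over that map. A minimal sketch of the decoding step on a toy array (not the authors' implementation, which also regresses box sizes and offsets):

```python
def decode_peaks(heatmap, threshold=0.5):
    """Return (row, col, score) for cells that exceed `threshold` and are
    strict local maxima over their 3x3 neighbourhood, the CenterNet
    equivalent of non-maximum suppression."""
    h, w = len(heatmap), len(heatmap[0])
    peaks = []
    for r in range(h):
        for c in range(w):
            s = heatmap[r][c]
            if s < threshold:
                continue
            neighbours = [heatmap[rr][cc]
                          for rr in range(max(0, r - 1), min(h, r + 2))
                          for cc in range(max(0, c - 1), min(w, c + 2))
                          if (rr, cc) != (r, c)]
            if all(s > n for n in neighbours):
                peaks.append((r, c, s))
    return peaks

# Toy lesion-probability heatmap with two clear peaks
hm = [[0.1, 0.2, 0.1, 0.0],
      [0.2, 0.9, 0.3, 0.0],
      [0.1, 0.3, 0.1, 0.7],
      [0.0, 0.0, 0.2, 0.3]]
peaks = decode_peaks(hm)
```

Because small lesions still produce sharp heatmap peaks, this center-point formulation copes better with tiny findings than anchor-based detectors, which is the localization advantage the abstract refers to.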
Affiliation(s)
- Tahira Nazir, Marriam Nawaz, Rabbia Mahum, Momina Masood, Awais Mehmood, Farooq Ali
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
- Junaid Rashid, Jungeun Kim
- Department of Computer Science and Engineering, Kongju National University, Gongju 31080, Chungcheongnam-do, Korea
- Hyuk-Yoon Kwon
- Research Center for Electrical and Information Technology, Department of Industrial Engineering, Seoul National University of Science and Technology, Seoul 01811, Korea
- Amir Hussain
- Centre of AI and Data Science, Edinburgh Napier University, Edinburgh EH11 4DY, UK
209
Repka MX. A Revision of the International Classification of Retinopathy of Prematurity. Ophthalmology 2021; 128:1381-1383. [PMID: 34332760] [DOI: 10.1016/j.ophtha.2021.07.014]
210
Petibon Y, Fahey F, Cao X, Levin Z, Sexton-Stallone B, Falone A, Zukotynski K, Kwatra N, Lim R, Bar-Sever Z, Chemli Y, Treves ST, Fakhri GE, Ouyang J. Detecting lumbar lesions in 99mTc-MDP SPECT by deep learning: Comparison with physicians. Med Phys 2021; 48:4249-4261. [PMID: 34101855] [DOI: 10.1002/mp.15033]
Abstract
PURPOSE 99mTc-MDP single-photon emission computed tomography (SPECT) is an established tool for diagnosing lumbar stress, a common cause of low back pain (LBP) in pediatric patients. However, detection of small stress lesions is complicated by the low quality of SPECT, leading to significant interreader variability. The study objectives were to develop an approach based on a deep convolutional neural network (CNN) for detecting lumbar lesions in 99mTc-MDP scans and to compare its performance to that of physicians in a localization receiver operating characteristic (LROC) study. METHODS Sixty-five lesion-absent (LA) 99mTc-MDP studies performed in pediatric patients for evaluating LBP were retrospectively identified. Projections for an artificial focal lesion were acquired separately by imaging a 99mTc capillary tube at multiple distances from the collimator. An approach was developed to automatically insert lesions into LA scans to obtain realistic lesion-present (LP) 99mTc-MDP images while ensuring knowledge of the ground truth. A deep CNN was trained using 2.5D views extracted from the LP and LA 99mTc-MDP image sets. During testing, the CNN was applied in a sliding-window fashion to compute a 3D "heatmap" reporting the probability of a lesion being present at each lumbar location. The algorithm was evaluated using cross-validation on a 99mTc-MDP test dataset that was also read by five physicians in an LROC study. LP images in the test set were obtained by incorporating lesions at sites selected by a physician based on the clinical likelihood of injury in this population. RESULTS The deep learning (DL) system slightly outperformed human observers, achieving an area under the LROC curve (AUCLROC) of 0.830 (95% confidence interval [CI]: [0.758, 0.924]) compared with 0.785 (95% CI: [0.738, 0.830]) for physicians. The AUCLROC of the DL system was higher than that of two readers (difference in AUCLROC [ΔAUCLROC] = 0.049 and 0.053) who participated in the study and slightly lower than that of two other readers (ΔAUCLROC = -0.006 and -0.012). Another reader outperformed DL by a more substantial margin (ΔAUCLROC = -0.053). CONCLUSION The DL system provides performance comparable or superior to that of physicians in localizing small 99mTc-MDP-positive lumbar lesions.
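The sliding-window inference described above (scoring every candidate location and assembling the outputs into a probability "heatmap") can be sketched generically; the mean-intensity scorer below is a hypothetical stand-in for the trained CNN, and the example is 2D rather than the paper's 3D volumes:

```python
def sliding_window_heatmap(image, window, score_fn):
    """Slide a window over a 2D image and record score_fn(patch) at each
    valid position, mimicking per-location lesion-probability inference."""
    h, w = len(image), len(image[0])
    wh, ww = window
    heat = [[0.0] * (w - ww + 1) for _ in range(h - wh + 1)]
    for r in range(h - wh + 1):
        for c in range(w - ww + 1):
            patch = [row[c:c + ww] for row in image[r:r + wh]]
            heat[r][c] = score_fn(patch)
    return heat

def mean_score(patch):
    """Stand-in scorer: mean intensity of the patch (a trained CNN would go here)."""
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals)

# Toy slice with a bright 2x2 "lesion" in the middle
slice_ = [[0, 0, 0, 0],
          [0, 8, 8, 0],
          [0, 8, 8, 0],
          [0, 0, 0, 0]]
heat = sliding_window_heatmap(slice_, (2, 2), mean_score)
```

The heatmap's maximum then marks the most suspicious location, which is what an LROC analysis compares against the known insertion site.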
Affiliation(s)
- Yoann Petibon
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Frederic Fahey
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
- Xinhua Cao
- Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
- Zakhar Levin
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA
- Briana Sexton-Stallone
- Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
- Anthony Falone
- Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
- Katherine Zukotynski
- Departments of Medicine and Radiology, McMaster University, Hamilton, Ontario, Canada
- Neha Kwatra
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
- Ruth Lim
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Zvi Bar-Sever
- Institute of Nuclear Medicine, Schneider Children's Medical Center of Israel, Petah Tikva, Israel
- Yanis Chemli
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA
- S Ted Treves
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Nuclear Medicine and Molecular Imaging, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Georges El Fakhri
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Jinsong Ouyang
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
Collapse
|
211
|
Agrawal R, Kulkarni S, Walambe R, Kotecha K. Assistive Framework for Automatic Detection of All the Zones in Retinopathy of Prematurity Using Deep Learning. J Digit Imaging 2021; 34:932-947. [PMID: 34240273 DOI: 10.1007/s10278-021-00477-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 05/06/2021] [Accepted: 05/21/2021] [Indexed: 11/30/2022] Open
Abstract
Retinopathy of prematurity (ROP) is a potentially blinding disorder seen in low-birth-weight preterm infants. In India, the burden of ROP is high, with nearly 200,000 premature infants at risk. Early detection through screening, followed by treatment, can prevent this blindness. The automatic screening systems developed so far can detect "severe ROP" or "plus disease," but this information does not help in scheduling follow-up. Identifying the vascularized retinal zones and detecting the ROP stage are essential for deciding on follow-up or discharge from screening. To the best of the authors' knowledge, there is no automatic system to assist with these crucial decisions. The low contrast of images, incompletely developed vessels, the macular structure, and the lack of public data sets are a few of the challenges in creating such a system. In this paper, a novel method using an ensemble of a "U-Network" and the "Circle Hough Transform" is developed to detect zones I, II, and III from retinal images in which the macula is not developed. The model is generic and was trained on mixed images of different sizes. It detects zones in images of variable sizes captured by two different imaging systems with an accuracy of 98%. All images of the test set (including the low-quality images) were considered. Training took only 14 min, and a single image was tested in 30 ms. The present study can help medical experts interpret retinal vascular status correctly and reduce subjective variation in diagnosis.
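The zone labels this system predicts have a geometric definition: zone I is the circle centered on the optic disc with radius twice the disc-to-fovea distance (the ICROP convention), with zones II and III lying farther out. A minimal classifier over that geometry is sketched below; the zone II outer cutoff (`zone2_factor`) is purely illustrative, since clinically zone II is bounded by the nasal ora serrata, which the paper localizes via the Circle Hough Transform rather than a fixed multiple.

```python
import math

def rop_zone(disc, fovea, point, zone2_factor=3.0):
    """Classify `point` into ROP zone I/II/III relative to the optic disc.
    Zone I: radius twice the disc-fovea distance (ICROP definition).
    Zone II outer radius: `zone2_factor` disc-fovea distances (an
    illustrative stand-in for the nasal ora serrata boundary)."""
    dfd = math.dist(disc, fovea)
    r = math.dist(disc, point)
    if r <= 2 * dfd:
        return "I"
    if r <= zone2_factor * dfd:
        return "II"
    return "III"

disc, fovea = (0.0, 0.0), (1.0, 0.0)
print(rop_zone(disc, fovea, (0.5, 0.0)))   # well inside zone I
print(rop_zone(disc, fovea, (2.5, 0.0)))   # beyond zone I, inside zone II
print(rop_zone(disc, fovea, (4.0, 0.0)))   # zone III
```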
Affiliation(s)
- Ranjana Agrawal
- School of Computer Engineering and Technology, Dr. Vishwanath Karad MIT World Peace University, Pune, India; Symbiosis Institute of Technology, Symbiosis International (Deemed) University, Pune, India
- Rahee Walambe
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International (Deemed) University, Pune, India
- Ketan Kotecha
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International (Deemed) University, Pune, India
|
212
|
Campbell JP, Kim SJ, Brown JM, Ostmo S, Chan RVP, Kalpathy-Cramer J, Chiang MF. Evaluation of a Deep Learning-Derived Quantitative Retinopathy of Prematurity Severity Scale. Ophthalmology 2021; 128:1070-1076. [PMID: 33121959 PMCID: PMC8076329 DOI: 10.1016/j.ophtha.2020.10.025] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 09/30/2020] [Accepted: 10/20/2020] [Indexed: 11/20/2022] Open
Abstract
PURPOSE To evaluate the clinical usefulness of a quantitative deep learning-derived vascular severity score for retinopathy of prematurity (ROP) by assessing its correlation with clinical ROP diagnosis and by measuring clinician agreement in applying a novel scale. DESIGN Analysis of an existing database of posterior pole fundus images and corresponding ophthalmoscopic examinations using 2 methods of assigning a quantitative scale to vascular severity. PARTICIPANTS Images were from clinical examinations of patients in the Imaging and Informatics in ROP Consortium. Four ophthalmologists and 1 study coordinator evaluated vascular severity on a scale from 1 to 9. METHODS A quantitative vascular severity score (1-9) was applied to each image using a deep learning algorithm. A database of 499 images was developed for assessment of interobserver agreement. MAIN OUTCOME MEASURES Association of deep learning-derived vascular severity scores with the clinical assessment of zone (I, II, or III), stage (0, 1, 2, or 3), and extent of stage 3 (<3 clock hours, 3-6 clock hours, and >6 clock hours), evaluated using multivariate linear regression; and weighted κ values and Pearson correlation coefficients for interobserver agreement on the 1-to-9 vascular severity scale. RESULTS For deep learning analysis, a total of 6344 clinical examinations were analyzed. A higher deep learning-derived vascular severity score was associated with more posterior disease, higher disease stage, and greater extent of stage 3 disease (P < 0.001 for all). For a given ROP stage, the vascular severity score was higher in zone I than in zones II or III (P < 0.001). Multivariate regression found that zone, stage, and extent all were associated independently with the severity score (P < 0.001 for all). For interobserver agreement, the mean ± standard deviation weighted κ value was 0.67 ± 0.06, and the Pearson correlation coefficient ± standard deviation was 0.88 ± 0.04 for the 1-to-9 vascular severity scale.
CONCLUSIONS A vascular severity scale for ROP seems feasible for clinical adoption; corresponds with zone, stage, extent of stage 3, and plus disease; and facilitates the use of objective technology such as deep learning to improve the consistency of ROP diagnosis.
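The interobserver analysis above rests on the quadratic weighted κ for ordinal 1-to-9 scores. A minimal numpy implementation (not the authors' code; `scikit-learn`'s `cohen_kappa_score(..., weights="quadratic")` computes the same quantity) is:

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_levels=9):
    """Quadratic weighted kappa between two raters' scores on 1..n_levels."""
    a = np.asarray(a) - 1
    b = np.asarray(b) - 1
    obs = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # Expected (chance) agreement from the marginal rating distributions.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    idx = np.arange(n_levels)
    # Quadratic penalty: disagreeing by k levels costs k^2 (normalized).
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

print(quadratic_weighted_kappa([2, 3, 5, 5, 8], [2, 4, 5, 6, 8]))
```

The quadratic weighting is what makes the statistic suitable for an ordinal severity scale: a one-level disagreement is penalized far less than a seven-level one, unlike the unweighted κ.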
Affiliation(s)
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Sang Jin Kim
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- James M Brown
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Massachusetts General Hospital and Brigham and Women's Hospital Center for Clinical Data Science, Boston, Massachusetts
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
|
213
|
Zheng C, Koh V, Bian F, Li L, Xie X, Wang Z, Yang J, Chew PTK, Zhang M. Semi-supervised generative adversarial networks for closed-angle detection on anterior segment optical coherence tomography images: an empirical study with a small training dataset. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:1073. [PMID: 34422985 PMCID: PMC8339863 DOI: 10.21037/atm-20-7436] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Accepted: 03/17/2021] [Indexed: 02/05/2023]
Abstract
BACKGROUND Semi-supervised learning algorithms can leverage an unlabeled dataset when labeling is limited or expensive to obtain. In the current study, we developed and evaluated a semi-supervised generative adversarial network (GAN) model that detects closed-angle on anterior segment optical coherence tomography (AS-OCT) images using a small labeled dataset. METHODS In this cross-sectional study, a semi-supervised GAN model for automatic closed-angle detection was trained on a small labeled dataset and a large unlabeled dataset collected from the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong (JSIEC). Closed-angle was defined as iris-trabecular contact beyond the scleral spur on AS-OCT images. We further developed two supervised deep learning (DL) models, trained on the same labeled dataset and on the whole dataset, respectively. The performance of the semi-supervised GAN model and the supervised DL models was compared on two independent testing datasets, from JSIEC (515 images) and from the Department of Ophthalmology, National University Health System (84 images). Diagnostic performance was assessed with evaluation metrics including accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). RESULTS For closed-angle detection, using clinician grading of AS-OCT imaging as the reference standard, the semi-supervised GAN model showed performance comparable with that of the supervised DL model trained on the whole dataset, with AUCs of 0.97 (95% CI, 0.96-0.99) and 0.98 (95% CI, 0.94-1.00) versus 0.97 (95% CI, 0.96-0.99) and 0.97 (95% CI, 0.94-1.00). When trained on the same small labeled dataset, the semi-supervised GAN performed at least as well as, if not better than, the supervised DL model [AUCs of 0.90 (95% CI, 0.84-0.96) and 0.92 (95% CI, 0.86-0.97)].
CONCLUSIONS The semi-supervised GAN method achieves diagnostic performance at least as good as that of a supervised DL model when trained on small labeled datasets. Further development of semi-supervised learning methods could be useful in clinical and research settings. TRIAL REGISTRATION NUMBER ChiCTR2000037892.
Affiliation(s)
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Victor Koh
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Fang Bian
- Department of Ophthalmology, Deyang People’s Hospital, Deyang, China
- Luo Li
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
- Xiaolin Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
- Zilei Wang
- Shanghai Children’s Hospital, Shanghai, China
- Jianlong Yang
- Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, China
- Paul Tec Kuan Chew
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
|
214
|
Deep Learning and Transfer Learning for Optic Disc Laterality Detection: Implications for Machine Learning in Neuro-Ophthalmology. J Neuroophthalmol 2021; 40:178-184. [PMID: 31453913 DOI: 10.1097/wno.0000000000000827] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
BACKGROUND Deep learning (DL) has demonstrated human expert levels of performance for medical image classification in a wide array of medical fields, including ophthalmology. In this article, we present the results of our DL system designed to determine optic disc laterality, right eye vs left eye, in the presence of both normal and abnormal optic discs. METHODS Using transfer learning, we modified the ResNet-152 deep convolutional neural network (DCNN), pretrained on ImageNet, to determine the optic disc laterality. After a 5-fold cross-validation, we generated receiver operating characteristic curves and corresponding area under the curve (AUC) values to evaluate performance. The data set consisted of 576 color fundus photographs (51% right and 49% left). Both 30° photographs centered on the optic disc (63%) and photographs with varying degree of optic disc centration and/or wider field of view (37%) were included. Both normal (27%) and abnormal (73%) optic discs were included. Various neuro-ophthalmological diseases were represented, such as, but not limited to, atrophy, anterior ischemic optic neuropathy, hypoplasia, and papilledema. RESULTS Using 5-fold cross-validation (70% training; 10% validation; 20% testing), our DCNN for classifying right vs left optic disc achieved an average AUC of 0.999 (±0.002) with optimal threshold values, yielding an average accuracy of 98.78% (±1.52%), sensitivity of 98.60% (±1.72%), and specificity of 98.97% (±1.38%). When tested against a separate data set for external validation, our 5-fold cross-validation model achieved the following average performance: AUC 0.996 (±0.005), accuracy 97.2% (±2.0%), sensitivity 96.4% (±4.3%), and specificity 98.0% (±2.2%). CONCLUSIONS Small data sets can be used to develop high-performing DL systems for semantic labeling of neuro-ophthalmology images, specifically in distinguishing between right and left optic discs, even in the presence of neuro-ophthalmological pathologies. 
Although this may seem like an elementary task, this study demonstrates the power of transfer learning and provides an example of a DCNN that can help curate large medical image databases for machine-learning purposes and facilitate ophthalmologist workflow by automatically labeling images according to laterality.
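The transfer-learning recipe used here (a pretrained ResNet-152 backbone with only the final classification layer retrained) amounts to fitting a small head on top of frozen features. The numpy caricature below makes that split explicit; the random projection stands in for the pretrained backbone, the task is synthetic, and every name is illustrative rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for the
# pretrained ResNet-152 feature extractor (illustrative only).
W_frozen = rng.normal(size=(64, 8)) * 0.1
features = lambda x: np.tanh(x @ W_frozen)

# Synthetic right/left "laterality" task, constructed so the label is
# predictable from the frozen features and the head alone can solve it.
X = rng.normal(size=(200, 64))
F = features(X)
v_true = rng.normal(size=8)
y = (F @ v_true > 0).astype(float)

# Transfer step: train only a logistic-regression head; the backbone
# weights are never updated.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y                       # dLoss/dlogit for cross-entropy
    w -= 0.5 * F.T @ grad / len(y)
    b -= 0.5 * grad.mean()

acc = (((F @ w + b) > 0) == (y == 1)).mean()
print(f"frozen-backbone training accuracy: {acc:.2f}")
```

The design point the paper leverages is that the frozen features already encode generic visual structure, so only a low-capacity head must be estimated, which is why a data set of a few hundred photographs suffices.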
|
215
|
Valikodath NG, Cole E, Ting DSW, Campbell JP, Pasquale LR, Chiang MF, Chan RVP. Impact of Artificial Intelligence on Medical Education in Ophthalmology. Transl Vis Sci Technol 2021; 10:14. [PMID: 34125146 PMCID: PMC8212436 DOI: 10.1167/tvst.10.7.14] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
Clinical care in ophthalmology is rapidly evolving as artificial intelligence (AI) algorithms are being developed. The medical community and national and federal regulatory bodies are recognizing the importance of adapting to AI. However, there is a gap in physicians' understanding of AI and of its implications for clinical care, and there are limited resources and established programs focused on AI in medical education in ophthalmology. Physicians are essential to the application of AI in a clinical context. An AI curriculum in ophthalmology can help provide physicians with the fund of knowledge and skills needed to integrate AI into their practice. In this paper, we provide general recommendations for an AI curriculum for medical students, residents, and fellows in ophthalmology.
Affiliation(s)
- Nita G Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- Emily Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai Hospital, New York, NY, USA
- Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
|
216
|
de Figueiredo LA, Dias JVP, Polati M, Carricondo PC, Debert I. Strabismus and Artificial Intelligence App: Optimizing Diagnostic and Accuracy. Transl Vis Sci Technol 2021; 10:22. [PMID: 34137838 PMCID: PMC8212438 DOI: 10.1167/tvst.10.7.22] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose Clinical evaluation of eye versions plays an important role in the diagnosis of special strabismus. Despite their importance, versions are not standardized in clinical practice because their assessment is subjective. Assuming that objectivity confers accuracy, this research aims to create an artificial intelligence app that can classify eye versions in the nine positions of gaze. Methods We analyzed photos of 110 strabismus patients from the outpatient clinic of a tertiary hospital in nine gazes. For each photo, the gaze was identified, and the corresponding version was rated by the same examiner during patient evaluation. Results The images were standardized using the OpenCV library in Python, so that the patient's eyes were located and sent to a multilabel model built with the Keras framework regardless of photo orientation. The model was then trained for each combination of the following groupings: eye (left, right), gaze (1 to 9), and version (-4 to 4). ResNet50 was used as the neural network architecture, and data augmentation was applied. For quick inference via web browser, the Streamlit app framework was employed; for use on mobile devices, the finished model was exported through the TensorFlow Lite converter. Conclusions The results showed that the mobile app might be applied to complement evaluation of ocular motility based on objective classification of ocular versions. However, further exploratory research and validation are required. Translational Relevance Apart from the traditional clinical practice method, professionals will be able to use an easy-to-apply support app to increase diagnostic accuracy.
Affiliation(s)
- Mariza Polati
- Department of Strabismus, Hospital das Clínicas, University of Sao Paulo, Brazil
- Iara Debert
- Department of Strabismus, Hospital das Clínicas, University of Sao Paulo, Brazil
|
217
|
Ye Y, Sun WW, Xu RX, Selmic LE, Sun M. Intraoperative assessment of canine soft tissue sarcoma by deep learning enhanced optical coherence tomography. Vet Comp Oncol 2021; 19:624-631. [PMID: 34173314 DOI: 10.1111/vco.12747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Revised: 06/02/2021] [Accepted: 06/02/2021] [Indexed: 11/27/2022]
Abstract
Soft tissue sarcoma (STS) is a locally aggressive and infiltrative tumour in dogs. Surgical resection is the treatment of choice for local tumour control. Currently, post-operative pathology is performed for surgical margin assessment. Spectral-domain optical coherence tomography (OCT) has recently been evaluated for its value in surgical margin assessment for some tumour types in dogs. The purpose of this study was to develop an automatic diagnosis system that can assist clinicians in real time with OCT image interpretation of tissues at surgical margins. We utilized a ResNet-50 network to classify healthy and cancerous tissues. A patch-based approach was adopted to achieve accurate classification with limited training data (80 cancer images, 80 normal images) and a validation set (20 cancer images, 20 normal images). The proposed method achieved an average accuracy of 97.1% with an excellent sensitivity of 94.3% on the validation set; the quadratic weighted κ was 0.94 for the STS diagnosis. On an independent test data set of 20 OCT images (10 cancer images, 10 normal images), the proposed method correctly differentiated all the STS images. Furthermore, we propose a diagnostic curve, which can be evaluated in real time to assist clinicians in detecting the specific location of a lesion. In short, the proposed method is accurate, operates in real time and is non-invasive, and so could be helpful for future surgical guidance.
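The patch-based approach described above (classify fixed-size patches, then aggregate patch votes into an image-level call) can be sketched as follows. The patch classifier, patch size, stride, and voting threshold here are stand-ins, not the paper's values.

```python
import numpy as np

def extract_patches(img, size=64, stride=32):
    """Tile a 2D image into overlapping square patches."""
    h, w = img.shape
    return np.array([img[y:y + size, x:x + size]
                     for y in range(0, h - size + 1, stride)
                     for x in range(0, w - size + 1, stride)])

def classify_image(img, patch_clf, size=64, stride=32, threshold=0.5):
    """Image-level decision: flag the image as cancerous when the fraction
    of positive patch votes reaches `threshold`."""
    patches = extract_patches(img, size, stride)
    votes = np.array([patch_clf(p) for p in patches])
    return bool(votes.mean() >= threshold), votes

# Stand-in patch classifier: bright patches are "cancer" (illustrative;
# the paper uses a trained ResNet-50 here).
toy_clf = lambda p: int(p.mean() > 0.5)

img = np.ones((256, 256))               # uniformly "bright" test image
label, votes = classify_image(img, toy_clf)
print(label, len(votes))
```

Aggregating overlapping patch votes is also what makes the paper's per-location "diagnostic curve" possible: the same votes, indexed by position rather than pooled, localize the suspicious region.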
Affiliation(s)
- Yu Ye
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui, China
- Weihong William Sun
- Department of Biomedical Engineering, The Ohio State University, Columbus, Ohio, USA
- Ronald X Xu
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui, China; Department of Biomedical Engineering, The Ohio State University, Columbus, Ohio, USA
- Laura E Selmic
- Department of Veterinary Clinical Sciences, The Ohio State University, Columbus, Ohio, USA
- Mingzhai Sun
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui, China
|
218
|
Aoyama Y, Maruko I, Kawano T, Yokoyama T, Ogawa Y, Maruko R, Iida T. Diagnosis of central serous chorioretinopathy by deep learning analysis of en face images of choroidal vasculature: A pilot study. PLoS One 2021; 16:e0244469. [PMID: 34143775 PMCID: PMC8213187 DOI: 10.1371/journal.pone.0244469] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 05/12/2021] [Indexed: 12/12/2022] Open
Abstract
Purpose To diagnose central serous chorioretinopathy (CSC) by deep learning (DL) analysis of en face images of the choroidal vasculature obtained by optical coherence tomography (OCT) and to analyze the regions of interest used by the DL from heatmaps. Methods One hundred eyes were studied: 53 eyes with CSC and 47 normal eyes. Volume scans of a 12×12-mm square were obtained at the same time as the OCT angiographic (OCTA) scans (Plex Elite 9000 Swept-Source OCT, Zeiss). High-quality en face images of the choroidal vasculature were created for the analyses from a segmentation slab at one-half of the subfoveal choroidal thickness. The 100 en face images were divided into 80 for training and 20 for validation: the images were split into five groups of 20 eyes each, and for each group the model was trained on the remaining 80 eyes and validated on the held-out 20 eyes, with the correct answer rate calculated per group. The Neural Network Console (NNC) developed by Sony and the Keras framework with the TensorFlow backend developed by Google were used as the software for the classification, with 16 convolutional neural network layers. The active region of the heatmap based on the features extracted by DL was also evaluated as a percentage, using gradient-weighted class activation mapping (Grad-CAM) implemented in Keras. Results The mean accuracy rate of the validation was 95% for NNC and 88% for Keras. This difference was not significant (P > 0.1). The mean active region in the heatmap image was 12.5% in CSC eyes, significantly lower than the 79.8% in normal eyes (P < 0.01). Conclusions CSC can be automatically diagnosed by DL with high accuracy from en face images of the choroidal vasculature, even with different programs, different convolutional layer structures, and small data sets. Heatmap analyses showed that the DL focused on the area occupied by the choroidal vessels and their uniformity. We conclude that DL can help in the diagnosis of CSC.
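The "active region" percentages reported above can be derived from a Grad-CAM map by normalizing it and thresholding; a minimal sketch follows. The normalization scheme and the 0.5 threshold are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def active_region_pct(heatmap, threshold=0.5):
    """Percentage of pixels whose min-max-normalized activation exceeds
    `threshold` in a class-activation (Grad-CAM-style) map."""
    h = heatmap - heatmap.min()
    if h.max() > 0:
        h = h / h.max()                 # scale activations into [0, 1]
    return 100.0 * (h > threshold).mean()

# Toy map: one hot quadrant on a 10x10 grid -> 25% of pixels are active.
hm = np.zeros((10, 10))
hm[:5, :5] = 1.0
print(active_region_pct(hm))
```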
Affiliation(s)
- Yukihiro Aoyama
- Department of Ophthalmology, Tokyo Women's Medical University, Shinjuku, Tokyo, Japan
- Ichiro Maruko
- Department of Ophthalmology, Tokyo Women's Medical University, Shinjuku, Tokyo, Japan
- Taizo Kawano
- Department of Ophthalmology, Tokyo Women's Medical University, Shinjuku, Tokyo, Japan
- Tatsuro Yokoyama
- Department of Ophthalmology, Tokyo Women's Medical University, Shinjuku, Tokyo, Japan
- Yuki Ogawa
- Department of Ophthalmology, Tokyo Women's Medical University, Shinjuku, Tokyo, Japan
- Ruka Maruko
- Department of Ophthalmology, Tokyo Women's Medical University, Shinjuku, Tokyo, Japan
- Tomohiro Iida
- Department of Ophthalmology, Tokyo Women's Medical University, Shinjuku, Tokyo, Japan
|
219
|
Hassan S, Dhali M, Zaman F, Tanveer M. Big data and predictive analytics in healthcare in Bangladesh: regulatory challenges. Heliyon 2021; 7:e07179. [PMID: 34141936 PMCID: PMC8188364 DOI: 10.1016/j.heliyon.2021.e07179] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 03/20/2021] [Accepted: 05/27/2021] [Indexed: 12/23/2022] Open
Abstract
Big data analytics and artificial intelligence are revolutionizing the global healthcare industry. As the world accumulates unfathomable volumes of data and health technology grows more and more critical to the advancement of medicine, policymakers and regulators are faced with tough challenges around data security and data privacy. This paper reviews existing regulatory frameworks for artificial intelligence-based medical devices and health data privacy in Bangladesh. The study is legal research employing a comparative approach, in which data is collected from primary and secondary legal materials and filtered based on Bangladesh's policies relating to medical data privacy and medical device regulation. These policies are then compared with benchmark policies of the European Union and the USA to test the adequacy of Bangladesh's present regulatory framework and identify the gaps in current regulation. The study highlights the gaps in policy and regulation in Bangladesh that are hampering the widespread adoption of big data analytics and artificial intelligence in the industry. Despite the vast benefits that big data would bring to Bangladesh's healthcare industry, the country lacks the data governance and legal framework necessary to gain consumer trust and move forward. Policymakers and regulators must work collaboratively with clinicians, patients and industry to adopt a new regulatory framework that harnesses the potential of big data but ensures adequate privacy and security of personal data. The article offers valuable insight to regulators, academicians, researchers and legal practitioners regarding the present regulatory loopholes in Bangladesh that stand in the way of exploiting the promise of big data in the medical field. The study concludes by recommending that future research on privacy in relation to artificial intelligence-based medical devices incorporate patients' perspectives through quantitative analysis.
Affiliation(s)
- Shafiqul Hassan
- College of Law, Prince Sultan University, Prince Nasser Bin Farhan St, Salah Ad Din, Riyadh 12435, Saudi Arabia
- Mohsin Dhali
- College of Law, Prince Sultan University, Prince Nasser Bin Farhan St, Salah Ad Din, Riyadh 12435, Saudi Arabia
- Fazluz Zaman
- Department of Business and Law, Federation University Australia, 154-158 Sussex St, Sydney NSW 2000, Australia
- Muhammad Tanveer
- Prince Sultan University, Prince Nasser Bin Farhan St, Salah Ad Din, Riyadh 12435, Saudi Arabia
|
220
|
Vasseneix C, Najjar RP, Xu X, Tang Z, Loo JL, Singhal S, Tow S, Milea L, Ting DSW, Liu Y, Wong TY, Newman NJ, Biousse V, Milea D. Accuracy of a Deep Learning System for Classification of Papilledema Severity on Ocular Fundus Photographs. Neurology 2021; 97:e369-e377. [PMID: 34011570 DOI: 10.1212/wnl.0000000000012226] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 04/19/2021] [Indexed: 11/15/2022] Open
Abstract
OBJECTIVE To evaluate the performance of a deep learning system (DLS) in classifying the severity of papilledema associated with increased intracranial pressure on standard retinal fundus photographs. METHODS A DLS was trained to automatically classify papilledema severity in 965 patients (2,103 mydriatic fundus photographs), representing a multiethnic cohort of patients with confirmed elevated intracranial pressure. Training was performed on 1,052 photographs with mild/moderate papilledema (MP) and 1,051 photographs with severe papilledema (SP) classified by a panel of experts. The performance of the DLS and that of 3 independent neuro-ophthalmologists were tested in 111 patients (214 photographs, 92 with MP and 122 with SP) by calculating the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Kappa agreement scores between the DLS and each of the 3 graders and among the 3 graders were calculated. RESULTS The DLS successfully discriminated between photographs of MP and SP, with an AUC of 0.93 (95% confidence interval [CI] 0.89-0.96) and an accuracy, sensitivity, and specificity of 87.9%, 91.8%, and 86.2%, respectively. This performance was comparable to that of the 3 neuro-ophthalmologists (84.1%, 91.8%, and 73.9%; p = 0.19, p = 1, and p = 0.09, respectively). Misclassification by the DLS was observed mainly for moderate papilledema (Frisén grade 3). The agreement score between the DLS and the neuro-ophthalmologists' evaluations was 0.62 (95% CI 0.57-0.68), whereas the intergrader agreement among the 3 neuro-ophthalmologists was 0.54 (95% CI 0.47-0.62). CONCLUSIONS Our DLS accurately classified the severity of papilledema on an independent set of mydriatic fundus photographs, achieving performance comparable to that of independent neuro-ophthalmologists.
CLASSIFICATION OF EVIDENCE This study provides Class II evidence that a DLS using mydriatic retinal fundus photographs accurately classified the severity of papilledema in patients with a diagnosis of increased intracranial pressure.
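The headline metrics in studies like this one (AUC, sensitivity, specificity) can be computed directly from raw classifier scores. A rank-based AUC sketch in numpy (not the study's code; the AUC equals the probability that a random positive outscores a random negative, ties counting half) is:

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC for binary labels 0/1."""
    s = np.asarray(scores, float)
    y = np.asarray(labels)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def sens_spec(scores, labels, thresh):
    """Sensitivity and specificity at a fixed operating threshold."""
    s, y = np.asarray(scores, float), np.asarray(labels)
    sens = (s[y == 1] >= thresh).mean()
    spec = (s[y == 0] < thresh).mean()
    return sens, spec

print(roc_auc([0.9, 0.4, 0.8, 0.35], [1, 1, 0, 0]))
print(sens_spec([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0], 0.5))
```

Reporting both the threshold-free AUC and a thresholded sensitivity/specificity pair, as this abstract does, separates the model's ranking quality from the choice of operating point.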
Affiliation(s)
- Caroline Vasseneix
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
- Raymond P Najjar
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Xinxing Xu
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Zhiqun Tang
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Jing Liang Loo
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Shweta Singhal
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Sharon Tow
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Leonard Milea
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark.
| | - Daniel Shu Wei Ting
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Yong Liu
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Tien Y Wong
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Nancy J Newman
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Valerie Biousse
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark
| | - Dan Milea
- From the Singapore Eye Research Institute (C.V., R.P.N., Z.T., J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); Duke-NUS Medical School (R.P.N., J.L.L., S.S., S.T., T.Y.W., D.M.); Institute of High Performance Computing (X.X., Y.L.), Agency for Science, Technology and Research (A*STAR); Singapore National Eye Centre (J.L.L., S.S., S.T., D.S.W.T., T.Y.W., D.M.); University of Berkeley (L.M.), CA; Departments of Ophthalmology and Neurology (N.J.N., V.B.), Emory University School of Medicine, Atlanta, GA; and Copenhagen University Hospital (D.M.), Denmark.
221
Gupta K, Reddy S. Heart, Eye, and Artificial Intelligence: A Review. Cardiol Res 2021; 12:132-139. [PMID: 34046105 PMCID: PMC8139752 DOI: 10.14740/cr1179] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Accepted: 11/12/2020] [Indexed: 12/30/2022] Open
Abstract
Heart disease continues to be the leading cause of death in the USA. Deep learning-based artificial intelligence (AI) methods have become increasingly common in studying the various factors involved in cardiovascular disease. The usage of retinal scanning techniques to diagnose retinal diseases, such as diabetic retinopathy, age-related macular degeneration, glaucoma and others, using fundus photographs and optical coherence tomography angiography (OCTA) has been extensively documented. Researchers are now looking to combine the power of AI with the non-invasive ease of retinal scanning to examine the workings of the heart and predict changes in the macrovasculature based on microvascular features and function. In this review, we summarize the current state of the field in using retinal imaging to diagnose cardiovascular issues and other diseases.
Affiliation(s)
- Kush Gupta: Kasturba Medical College, Mangalore, India
222
Dutt S, Sivaraman A, Savoy F, Rajalakshmi R. Insights into the growing popularity of artificial intelligence in ophthalmology. Indian J Ophthalmol 2021; 68:1339-1346. [PMID: 32587159 PMCID: PMC7574057 DOI: 10.4103/ijo.ijo_1754_19] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
Artificial intelligence (AI) in healthcare is the use of computer algorithms to analyse complex medical data, detect associations, and provide diagnostic support outputs. AI and deep learning (DL) find obvious applications in fields like ophthalmology, where huge amounts of image-based data need to be analysed and the outcomes related to image recognition are reasonably well defined. AI and DL have found important roles in the early screening and detection of conditions such as diabetic retinopathy (DR), age-related macular degeneration (ARMD), retinopathy of prematurity (ROP), glaucoma, and other ocular disorders, and appear promising, with the advantages of high screening accuracy, consistency, and scalability. AI algorithms nevertheless require skilled manpower: trained optometrists/ophthalmologists (annotators) must provide accurate ground truth for the training images. The diagnoses made by AI algorithms are mechanical, and some amount of human intervention remains necessary for further interpretation. This review was conducted after tracing the history of AI in ophthalmology across multiple research databases and aims to summarise the journey of AI in ophthalmology so far, with close attention to the most crucial studies conducted. It further aims to highlight the potential impact of AI in ophthalmology, its pitfalls, and how to use it optimally for the maximum benefit of ophthalmologists, healthcare systems, and patients alike.
Affiliation(s)
- Sreetama Dutt: Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Anand Sivaraman: Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Florian Savoy: Department of Artificial Intelligence, Medios Technologies, Singapore
- Ramachandran Rajalakshmi: Department of Ophthalmology, Dr. Mohan's Diabetes Specialities Centre, Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
223
Wang J, Ji J, Zhang M, Lin JW, Zhang G, Gong W, Cen LP, Lu Y, Huang X, Huang D, Li T, Ng TK, Pang CP. Automated Explainable Multidimensional Deep Learning Platform of Retinal Images for Retinopathy of Prematurity Screening. JAMA Netw Open 2021; 4:e218758. [PMID: 33950206 PMCID: PMC8100867 DOI: 10.1001/jamanetworkopen.2021.8758] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 02/17/2021] [Indexed: 02/05/2023] Open
Abstract
Importance A retinopathy of prematurity (ROP) diagnosis currently relies on indirect ophthalmoscopy assessed by experienced ophthalmologists. A deep learning algorithm based on retinal images may facilitate early detection and timely treatment of ROP to improve visual outcomes. Objective To develop a retinal image-based, multidimensional, automated, deep learning platform for ROP screening and validate its performance accuracy. Design, Setting, and Participants A total of 14 108 eyes of 8652 preterm infants who received ROP screening from 4 centers from November 4, 2010, to November 14, 2019, were included, and a total of 52 249 retinal images were randomly split into training, validation, and test sets. Four main dimensional independent classifiers were developed, including image quality, any stage of ROP, intraocular hemorrhage, and preplus/plus disease. Referral-warranted ROP was automatically generated by integrating the results of 4 classifiers at the image, eye, and patient levels. DeepSHAP, a method based on DeepLIFT and Shapley values (solution concepts in cooperative game theory), was adopted as the heat map technology to explain the predictions. The performance of the platform was further validated as compared with that of the experienced ROP experts. Data were analyzed from February 12, 2020, to June 24, 2020. Exposure A deep learning algorithm. Main Outcomes and Measures The performance of each classifier included true negative, false positive, false negative, true positive, F1 score, sensitivity, specificity, receiver operating characteristic, area under curve (AUC), and Cohen unweighted κ. Results A total of 14 108 eyes of 8652 preterm infants (mean [SD] gestational age, 32.9 [3.1] weeks; 4818 boys [60.4%] of 7973 with known sex) received ROP screening. 
All classifiers achieved an F1 score of 0.718 to 0.981, a sensitivity of 0.918 to 0.982, a specificity of 0.949 to 0.992, and an AUC of 0.983 to 0.998, whereas the referral system achieved an F1 score of 0.898 to 0.956, a sensitivity of 0.981 to 0.986, a specificity of 0.939 to 0.974, and an AUC of 0.9901 to 0.9956. Fine-grained and class-discriminative heat maps were generated by DeepSHAP in real time. The platform achieved a Cohen unweighted κ of 0.86 to 0.98, compared with a Cohen κ of 0.93 to 0.98 by the ROP experts. Conclusions and Relevance In this diagnostic study, an automated ROP screening platform was able to identify and classify multidimensional pathologic lesions in retinal images. This platform may be able to assist routine ROP screening in general and children's hospitals.
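The F1 score, sensitivity, and specificity quoted per classifier all derive from the four confusion-matrix counts (true positive, false positive, false negative, true negative) that the study reports as primary outcomes. A minimal sketch with invented counts, not the study's data:

```python
# Illustrative only: deriving F1, sensitivity, and specificity from the four
# confusion-matrix counts of a binary classifier.

def classifier_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)       # also called recall
    specificity = tn / (tn + fp)
    # F1 is the harmonic mean of precision and sensitivity.
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"f1": f1, "sensitivity": sensitivity, "specificity": specificity}

# Hypothetical counts for an "any-stage ROP" image-level classifier.
m = classifier_metrics(tp=90, fp=10, fn=10, tn=890)
```

The study's referral output then combines such classifier decisions at the image, eye, and patient levels.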
Affiliation(s)
- Ji Wang: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Jie Ji: Network and Information Center, Shantou University, Shantou, Guangdong, China; XuanShi Med Tech (Shanghai) Company Limited, Shanghai, China
- Mingzhi Zhang: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Jian-Wei Lin: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Guihua Zhang: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Weifen Gong: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Ling-Ping Cen: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Yamei Lu: Department of Ophthalmology, The Sixth Affiliated Hospital of Guangzhou Medical University, Qingyuan People’s Hospital, Qingyuan, Guangdong, China
- Xuelin Huang: Department of Ophthalmology, Guangdong Women and Children Hospital, Guangzhou, Guangdong, China
- Dingguo Huang: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Taiping Li: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Tsz Kin Ng: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China; Shantou University Medical College, Shantou, Guangdong, China; Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Chi Pui Pang: Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China; Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
224
Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakomoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog Retin Eye Res 2021; 82:100900. [PMID: 32898686 PMCID: PMC7474840 DOI: 10.1016/j.preteyeres.2020.100900] [Citation(s) in RCA: 261] [Impact Index Per Article: 65.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Revised: 08/25/2020] [Accepted: 08/31/2020] [Indexed: 12/29/2022]
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using tele-health supported by digital innovations. These digital innovations include artificial intelligence (AI), 5th generation (5G) telecommunication networks and the Internet of Things (IoT), creating an inter-dependent ecosystem offering opportunities to develop new models of eye care addressing the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Tele-health and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world have initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally has been significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of some of these telehealth and digital innovations during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
Affiliation(s)
- Ji-Peng Olivia Li: Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Hanruo Liu: Beijing Tongren Hospital; Capital Medical University; Beijing Institute of Ophthalmology; Beijing, China
- Darren S J Ting: Academic Ophthalmology, University of Nottingham, United Kingdom
- Sohee Jeon: Keye Eye Center, Seoul, Republic of Korea
- Judy E Kim: Medical College of Wisconsin, Milwaukee, WI, USA
- Dawn A Sim: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Peter B M Thomas: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Haotian Lin: Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, China
- Youxin Chen: Peking Union Medical College Hospital, Beijing, China
- Taiji Sakomoto: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Japan
- Dennis S C Lam: C-MER Dennis Lam Eye Center, C-Mer International Eye Care Group Limited, Hong Kong; International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Louis R Pasquale: Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, USA
- Tien Y Wong: Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
- Linda A Lam: USC Roski Eye Institute, University of Southern California (USC) Keck School of Medicine, Los Angeles, CA, USA
- Daniel S W Ting: Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
225
Ramessur R, Raja L, Kilduff CLS, Kang S, Li JPO, Thomas PBM, Sim DA. Impact and Challenges of Integrating Artificial Intelligence and Telemedicine into Clinical Ophthalmology. Asia Pac J Ophthalmol (Phila) 2021; 10:317-327. [PMID: 34383722 DOI: 10.1097/apo.0000000000000406] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
ABSTRACT Aging populations and the worsening burden of chronic, treatable disease are increasingly creating a global shortfall in ophthalmic care provision. Remote and automated systems carry the promise to expand the scale and potential of health care interventions, and to reduce strain on health care services through safe, personalized, efficient, and cost-effective services. However, significant challenges remain. Forward planning in service design is paramount to safeguard patient safety, trust in digital services, data privacy, medico-legal implications, and digital exclusion. We explore the impact and challenges facing patients and clinicians in integrating AI and telemedicine into ophthalmic care, and how these may influence its direction.
Affiliation(s)
- Rishi Ramessur: Royal Free Hospital, Royal Free London NHS Foundation Trust, London, United Kingdom
- Laxmi Raja: Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Caroline L S Kilduff: Central Middlesex Hospital, London North West University Healthcare NHS Trust, London, United Kingdom
- Swan Kang: Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Ji-Peng Olivia Li: Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Peter B M Thomas: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Dawn A Sim: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
226
Tseng RMWW, Gunasekeran DV, Tan SSH, Rim TH, Lum E, Tan GSW, Wong TY, Tham YC. Considerations for Artificial Intelligence Real-World Implementation in Ophthalmology: Providers' and Patients' Perspectives. Asia Pac J Ophthalmol (Phila) 2021; 10:299-306. [PMID: 34383721 DOI: 10.1097/apo.0000000000000400] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
ABSTRACT Artificial Intelligence (AI), in particular deep learning, has made waves in the health care industry, with several prominent examples shown in ophthalmology. Despite the burgeoning reports on the development of new AI algorithms for detection and management of various eye diseases, few have reached the stage of regulatory approval for real-world implementation. To better enable real-world translation of AI systems, it is important to understand the demands, needs, and concerns of both health care professionals and patients, as providers and recipients of clinical care are impacted by these solutions. This review outlines the advantages and concerns of incorporating AI in ophthalmology care delivery, from both the providers' and patients' perspectives, and the key enablers for seamless transition to real-world implementation.
Affiliation(s)
- Dinesh Visva Gunasekeran: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Yong Loo Lin School of Medicine, National University of Singapore (NUS), Singapore
- Tyler Hyungtaek Rim: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
- Gavin S W Tan: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
- Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Center, Singapore; Duke-NUS Medical School, Singapore
227
Dong L, Yang Q, Zhang RH, Wei WB. Artificial intelligence for the detection of age-related macular degeneration in color fundus photographs: A systematic review and meta-analysis. EClinicalMedicine 2021; 35:100875. [PMID: 34027334 PMCID: PMC8129891 DOI: 10.1016/j.eclinm.2021.100875] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 04/14/2021] [Accepted: 04/15/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Age-related macular degeneration (AMD) is one of the leading causes of vision loss in the elderly population. The application of artificial intelligence (AI) provides convenience for the diagnosis of AMD. This systematic review and meta-analysis aimed to quantify the performance of AI in detecting AMD in fundus photographs. METHODS We searched PubMed, Embase, Web of Science and the Cochrane Library before December 31st, 2020 for studies reporting the application of AI in detecting AMD in color fundus photographs. Then, we pooled the data for analysis. PROSPERO registration number: CRD42020197532. FINDINGS 19 studies were finally selected for systematic review and 13 of them were included in the quantitative synthesis. All studies adopted human graders as reference standard. The pooled area under the receiver operating characteristic curve (AUROC) was 0.983 (95% confidence interval (CI):0.979-0.987). The pooled sensitivity, specificity, and diagnostic odds ratio (DOR) were 0.88 (95% CI:0.88-0.88), 0.90 (95% CI:0.90-0.91), and 275.27 (95% CI:158.43-478.27), respectively. Threshold analysis was performed and a potential threshold effect was detected among the studies (Spearman correlation coefficient: -0.600, P = 0.030), which was the main cause for the heterogeneity. For studies applying convolutional neural networks in the Age-Related Eye Disease Study database, the pooled AUROC, sensitivity, specificity, and DOR were 0.983 (95% CI:0.978-0.988), 0.88 (95% CI:0.88-0.88), 0.91 (95% CI:0.91-0.91), and 273.14 (95% CI:130.79-570.43), respectively. INTERPRETATION Our data indicated that AI was able to detect AMD in color fundus photographs. The application of AI-based automatic tools is beneficial for the diagnosis of AMD. FUNDING Capital Health Research and Development of Special (2020-1-2052).
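For a single study, the diagnostic odds ratio (DOR) pooled above is a one-number summary relating sensitivity and specificity: it is the ratio of the positive to the negative likelihood ratio. (The review's pooled DOR is estimated across studies, not computed from the pooled sensitivity and specificity, which is why 275.27 does not follow from 0.88 and 0.90.) A minimal illustrative sketch:

```python
# Illustrative only: the diagnostic odds ratio of a binary test, expressed
# via its likelihood ratios. DOR = LR+ / LR- = (sens * spec) / ((1 - sens) * (1 - spec)).

def diagnostic_odds_ratio(sensitivity, specificity):
    positive_lr = sensitivity / (1 - specificity)   # LR+
    negative_lr = (1 - sensitivity) / specificity   # LR-
    return positive_lr / negative_lr

# A hypothetical single study with sensitivity 0.88 and specificity 0.90.
dor = diagnostic_odds_ratio(0.88, 0.90)
```

Higher DOR indicates better discrimination; a DOR of 1 means the test is uninformative.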
228
Bressler NM. JAMA Ophthalmology-The Year in Review, 2020: Bringing Focus to Randomized Clinical Trials. JAMA Ophthalmol 2021; 139:499-500. [PMID: 33764363 DOI: 10.1001/jamaophthalmol.2021.0272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Affiliation(s)
- Neil M Bressler: Editor, JAMA Ophthalmology; Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
229
Rim TH, Lee AY, Ting DS, Teo KYC, Yang HS, Kim H, Lee G, Teo ZL, Teo Wei Jun A, Takahashi K, Yoo TK, Kim SE, Yanagi Y, Cheng CY, Kim SS, Wong TY, Cheung CMG. Computer-aided detection and abnormality score for the outer retinal layer in optical coherence tomography. Br J Ophthalmol 2021; 106:1301-1307. [PMID: 33875452 DOI: 10.1136/bjophthalmol-2020-317817] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 02/20/2021] [Accepted: 03/17/2021] [Indexed: 01/20/2023]
Abstract
BACKGROUND To develop computer-aided detection (CADe) of outer retinal layer (ORL) abnormalities in the retinal pigmented epithelium, interdigitation zone and ellipsoid zone via optical coherence tomography (OCT). METHODS In this retrospective study, healthy participants with normal ORL, and patients with abnormality of the ORL, including choroidal neovascularisation (CNV) or retinitis pigmentosa (RP), were included. First, an automatic segmentation deep learning (DL) algorithm, CADe, was developed for the three outer retinal layers using 120 handcraft masks of the ORL. This automatic segmentation algorithm generated 4000 segmentations, which included 2000 images with normal ORL and 2000 (1000 CNV and 1000 RP) images with focal or wide defects in the ORL. Second, based on the automatically generated segmentation images, a binary classifier (normal vs abnormal) was developed. Results were evaluated by area under the receiver operating characteristic curve (AUC). RESULTS The DL algorithm achieved an AUC of 0.984 (95% CI 0.976 to 0.993) for individual image evaluation in the internal test set of 797 images. In addition, performance analysis of a publicly available external test set (n=968) yielded an AUC of 0.957 (95% CI 0.944 to 0.970), and a second clinical external test set (n=1124) yielded an AUC of 0.978 (95% CI 0.970 to 0.986). Moreover, the CADe highlighted normal parts of the ORL well and omitted highlights in the abnormal ORLs of CNV and RP. CONCLUSION The CADe can use OCT images to segment the ORL and differentiate between normal and abnormal ORL. The CADe classifier also performs visualisation and may aid future physician diagnosis and clinical applications.
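The AUC used to evaluate the binary normal-vs-abnormal classifier has a useful rank interpretation: it equals the probability that a randomly chosen abnormal image receives a higher score than a randomly chosen normal one (the Mann-Whitney U statistic). A minimal sketch with invented scores, not the paper's data:

```python
# Illustrative only: AUC computed directly from raw classifier scores via the
# Mann-Whitney pairwise-comparison interpretation (ties count as half a win).

def auc(scores_abnormal, scores_normal):
    wins = 0.0
    for a in scores_abnormal:
        for n in scores_normal:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(scores_abnormal) * len(scores_normal))

a_scores = [0.9, 0.8, 0.7, 0.4]   # hypothetical outputs for abnormal-ORL images
n_scores = [0.6, 0.3, 0.2, 0.1]   # hypothetical outputs for normal-ORL images
estimated_auc = auc(a_scores, n_scores)
```

An AUC of 1.0 would mean every abnormal image outscores every normal one; 0.5 is chance level.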
Affiliation(s)
- Tyler Hyungtaek Rim: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Aaron Yuntai Lee: Department of Ophthalmology, University of Washington School of Medicine, Seattle, Washington, USA
- Daniel S Ting: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Kelvin Yi Chong Teo: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Hee Seung Yang: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Zhen Ling Teo: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Alvin Teo Wei Jun: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Kengo Takahashi: Department of Ophthalmology, Asahikawa Medical University, Hokkaido, Japan
- Tea Keun Yoo: Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Seoul, Korea (the Republic of)
- Sung Eun Kim: Department of Ophthalmology, CHA Bundang Medical Center, CHA University, Seongnam, South Korea
- Yasuo Yanagi: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Department of Ophthalmology, Asahikawa Medical University, Hokkaido, Japan
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Sung Soo Kim: Department of Ophthalmology, Institute of Vision Research, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Chui Ming Gemmy Cheung: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| |
Collapse
|
230
|
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217 PMCID: PMC8027892 DOI: 10.1038/s41746-021-00438-z] [Citation(s) in RCA: 295] [Impact Index Per Article: 73.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 12/19/2022] Open
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
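The pooled estimates this review reports come from a random-effects meta-analysis. As a sketch of what that pooling does, here is a minimal pure-Python version of the standard DerSimonian-Laird estimator (illustrative effect sizes and variances, not the review's data):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect estimates under a DerSimonian-Laird
    random-effects model; returns (pooled effect, 95% CI)."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fe) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study estimates (e.g. logit-transformed accuracies).
pooled, ci = dersimonian_laird([2.6, 2.1, 2.9], [0.04, 0.09, 0.06])
print(round(pooled, 3), [round(x, 3) for x in ci])
```

The between-study variance term tau² is what widens the confidence interval when heterogeneity is high, as the review notes it was.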
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, UK
- Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, UK
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Dominic King
- Institute of Global Health Innovation, Imperial College London, London, UK
- Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK
- Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, UK
|
231
|
Beers A, Brown J, Chang K, Hoebel K, Patel J, Ly KI, Tolaney SM, Brastianos P, Rosen B, Gerstner ER, Kalpathy-Cramer J. DeepNeuro: an open-source deep learning toolbox for neuroimaging. Neuroinformatics 2021; 19:127-140. [PMID: 32578020 DOI: 10.1007/s12021-020-09477-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Translating deep learning research from theory into clinical practice has unique challenges, specifically in the field of neuroimaging. In this paper, we present DeepNeuro, a Python-based deep learning framework that puts deep neural networks for neuroimaging into practical usage with a minimum of friction during implementation. We show how this framework can be used to design deep learning pipelines that can load and preprocess data, design and train various neural network architectures, and evaluate and visualize the results of trained networks on evaluation data. We present a way of reproducibly packaging data pre- and postprocessing functions common in the neuroimaging community, which facilitates consistent performance of networks across variable users, institutions, and scanners. We show how deep learning pipelines created with DeepNeuro can be concisely packaged into shareable Docker and Singularity containers with user-friendly command-line interfaces.
Affiliation(s)
- Andrew Beers
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- James Brown
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Katharina Hoebel
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Jay Patel
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- K Ina Ly
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Division of Neuro-Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Sara M Tolaney
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Priscilla Brastianos
- Division of Neuro-Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Bruce Rosen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Elizabeth R Gerstner
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Division of Neuro-Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
|
232
|
Kumar R, Khan FU, Sharma A, Aziz IB, Poddar NK. Recent Applications of Artificial Intelligence in detection of Gastrointestinal, Hepatic and Pancreatic Diseases. Curr Med Chem 2021; 29:66-85. [PMID: 33820515 DOI: 10.2174/0929867328666210405114938] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 02/25/2021] [Accepted: 03/06/2021] [Indexed: 11/22/2022]
Abstract
There has been substantial progress in artificial intelligence (AI) algorithms and their applications in the medical sciences over the last two decades. AI-assisted programs have already been established for remote health monitoring using sensors and smartphones. A variety of AI-based prediction models are available for gastrointestinal inflammatory and non-malignant diseases and bowel bleeding using wireless capsule endoscopy, for hepatitis-associated fibrosis using electronic medical records, and for pancreatic carcinoma using endoscopic ultrasound. AI-based models may be of immense help to healthcare professionals in identification, analysis, and decision support, using endoscopic images to establish prognosis and assess risk in a patient's treatment based on multiple factors. However, sufficient randomized clinical trials are warranted to establish the efficacy of AI-assisted and non-AI-based treatments before such techniques are approved by medical regulatory authorities. In this article, available AI approaches and AI-based prediction models for detecting gastrointestinal, hepatic, and pancreatic diseases are reviewed, and the limitations of AI techniques in disease prognosis, risk assessment, and decision support are discussed.
Affiliation(s)
- Rajnish Kumar
- Amity Institute of Biotechnology, Amity University Uttar Pradesh Lucknow Campus, Uttar Pradesh, India
- Farhat Ullah Khan
- Computer and Information Sciences Department, Universiti Teknologi Petronas, 32610, Seri Iskander, Perak, Malaysia
- Anju Sharma
- Department of Applied Science, Indian Institute of Information Technology, Allahabad, Uttar Pradesh, India
- Izzatdin Ba Aziz
- Computer and Information Sciences Department, Universiti Teknologi Petronas, 32610, Seri Iskander, Perak, Malaysia
|
233
|
Zheng C, Bian F, Li L, Xie X, Liu H, Liang J, Chen X, Wang Z, Qiao T, Yang J, Zhang M. Assessment of Generative Adversarial Networks for Synthetic Anterior Segment Optical Coherence Tomography Images in Closed-Angle Detection. Transl Vis Sci Technol 2021; 10:34. [PMID: 34004012 PMCID: PMC8088224 DOI: 10.1167/tvst.10.4.34] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Accepted: 03/08/2021] [Indexed: 02/05/2023] Open
Abstract
PURPOSE To develop generative adversarial networks (GANs) that synthesize realistic anterior segment optical coherence tomography (AS-OCT) images and evaluate deep learning (DL) models that are trained on real and synthetic datasets for detecting angle closure. METHODS The GAN architecture was adopted and trained on the dataset with AS-OCT images collected from the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, synthesizing open- and closed-angle AS-OCT images. A visual Turing test with two glaucoma specialists was performed to assess the image quality of real and synthetic images. DL models, trained on either real or synthetic datasets, were developed. Using the clinicians' grading of the AS-OCT images as the reference standard, we compared the diagnostic performance of open-angle vs. closed-angle detection of DL models and the AS-OCT parameter, defined as a trabecular-iris space area 750 µm anterior to the scleral spur (TISA750), in a small independent validation dataset. RESULTS The GAN training included 28,643 AS-OCT anterior chamber angle (ACA) images. The real and synthetic datasets for DL model training have an equal distribution of open- and closed-angle images (all with 10,000 images each). The independent validation dataset included 238 open-angle and 243 closed-angle AS-OCT ACA images. The image quality of real versus synthetic AS-OCT images was similar, as assessed by the two glaucoma specialists, except for the scleral spur visibility. For the independent validation dataset, both DL models achieved higher areas under the curve compared with TISA750. Two DL models had areas under the curve of 0.97 (95% confidence interval, 0.96-0.99) and 0.94 (95% confidence interval, 0.92-0.96). CONCLUSIONS The GAN synthetic AS-OCT images appeared to be of good quality, according to the glaucoma specialists. The DL models, trained on all-synthetic AS-OCT images, can achieve high diagnostic performance. 
TRANSLATIONAL RELEVANCE The GANs can generate realistic AS-OCT images, which can also be used to train DL models.
Affiliation(s)
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Fang Bian
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology, Deyang People's Hospital, Sichuan, China
- Luo Li
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Xiaolin Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Hui Liu
- Aier School of Ophthalmology, Central South University, Changsha, Hunan, China
- Jianheng Liang
- Aier School of Ophthalmology, Central South University, Changsha, Hunan, China
- Xu Chen
- Aier School of Ophthalmology, Central South University, Changsha, Hunan, China
- Department of Ophthalmology, Shanghai Aier Eye Hospital, Shanghai, China
- Zilei Wang
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Tong Qiao
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Jianlong Yang
- Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
|
234
|
A deep learning framework for the detection of Plus disease in retinal fundus images of preterm infants. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.02.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
235
|
Li B, Chen H, Zhang B, Yuan M, Jin X, Lei B, Xu J, Gu W, Wong DCS, He X, Wang H, Ding D, Li X, Chen Y, Yu W. Development and evaluation of a deep learning model for the detection of multiple fundus diseases based on colour fundus photography. Br J Ophthalmol 2021; 106:1079-1086. [PMID: 33785508 DOI: 10.1136/bjophthalmol-2020-316290] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 01/24/2021] [Accepted: 02/16/2021] [Indexed: 12/24/2022]
Abstract
AIM To explore and evaluate an appropriate deep learning system (DLS) for the detection of 12 major fundus diseases using colour fundus photography. METHODS Diagnostic performance of a DLS was tested on the detection of normal fundus and 12 major fundus diseases including referable diabetic retinopathy, pathologic myopic retinal degeneration, retinal vein occlusion, retinitis pigmentosa, retinal detachment, wet and dry age-related macular degeneration, epiretinal membrane, macular hole, possible glaucomatous optic neuropathy, papilledema and optic nerve atrophy. The DLS was developed with 56 738 images and tested with 8176 images from one internal test set and two external test sets. A comparison with human doctors was also conducted. RESULTS The area under the receiver operating characteristic curves of the DLS on the internal test set and the two external test sets were 0.950 (95% CI 0.942 to 0.957) to 0.996 (95% CI 0.994 to 0.998), 0.931 (95% CI 0.923 to 0.939) to 1.000 (95% CI 0.999 to 1.000) and 0.934 (95% CI 0.929 to 0.938) to 1.000 (95% CI 0.999 to 1.000), with sensitivities of 80.4% (95% CI 79.1% to 81.6%) to 97.3% (95% CI 96.7% to 97.8%), 64.6% (95% CI 63.0% to 66.1%) to 100% (95% CI 100% to 100%) and 68.0% (95% CI 67.1% to 68.9%) to 100% (95% CI 100% to 100%), respectively, and specificities of 89.7% (95% CI 88.8% to 90.7%) to 98.1% (95% CI 97.7% to 98.6%), 78.7% (95% CI 77.4% to 80.0%) to 99.6% (95% CI 99.4% to 99.8%) and 88.1% (95% CI 87.4% to 88.7%) to 98.7% (95% CI 98.5% to 99.0%), respectively. When compared with human doctors, the DLS obtained a higher diagnostic sensitivity but lower specificity. CONCLUSION The proposed DLS is effective in diagnosing normal fundus and 12 major fundus diseases, and thus has much potential for fundus disease screening in the real world.
Affiliation(s)
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Bilei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Mingzhen Yuan
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xuemin Jin
- Department of Ophthalmology, Zhengzhou University First Affiliated Hospital, Zhengzhou, Henan, China
- Bo Lei
- Clinical Research Center, Henan Eye Institute, Henan Eye Hospital, Clinical Research Center, Henan Provincial People's Hospital, Zhengzhou, Henan, China
- Jie Xu
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wei Gu
- Department of Ophthalmology, Beijing Aier Intech Eye Hospital, Beijing, China
- Xixi He
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
- Hao Wang
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
- Dayong Ding
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
- Xirong Li
- Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
|
236
|
Automated assessment of the substantia nigra on susceptibility map-weighted imaging using deep convolutional neural networks for diagnosis of Idiopathic Parkinson's disease. Parkinsonism Relat Disord 2021; 85:84-90. [PMID: 33761389 DOI: 10.1016/j.parkreldis.2021.03.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Revised: 01/27/2021] [Accepted: 03/08/2021] [Indexed: 11/23/2022]
Abstract
OBJECTIVES Despite its use in determining nigrostriatal degeneration, the lack of a consistent interpretation of nigrosome 1 susceptibility map-weighted imaging (SMwI) limits its generalized applicability. We aimed to implement and evaluate a diagnostic algorithm based on convolutional neural networks for interpreting nigrosome 1 SMwI for determining nigrostriatal degeneration in idiopathic Parkinson's disease (IPD). METHODS In this retrospective study, we enrolled 267 IPD patients and 160 control subjects (125 patients with drug-induced parkinsonism and 35 healthy subjects) at our institute, and 24 IPD patients and 27 control subjects at three other institutes, on approval of the local institutional review boards. Dopamine transporter imaging served as the reference standard for the presence or absence of abnormalities of nigrosome 1 on SMwI. Diagnostic performance was compared between visual assessment by an experienced neuroradiologist and the developed deep learning-based diagnostic algorithm in both internal and external datasets using a bootstrapping method with 10000 re-samples by the "pROC" package of R (version 1.16.2). RESULTS The area under the receiver operating characteristics curve (AUC) (95% confidence interval [CI]) per participant by the bootstrap method was not significantly different between visual assessment and the deep learning-based algorithm (internal validation, 0.9622 [0.8912-1.0000] versus 0.9534 [0.8779-0.9956], P = .1511; external validation, 0.9367 [0.8843-0.9802] versus 0.9208 [0.8634-0.9693], P = .6267), indicative of a comparable performance to visual assessment. CONCLUSIONS Our deep learning-based algorithm for assessing abnormalities of nigrosome 1 on SMwI was found to have a comparable performance to that of an experienced neuroradiologist.
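The per-participant confidence intervals above come from bootstrap resampling (10000 re-samples via R's pROC). As a rough illustration of the resampling idea only — not the authors' analysis, and simpler than pROC's AUC comparison test — here is a generic percentile bootstrap over a hypothetical per-participant accuracy:

```python
import random

def bootstrap_ci(data, stat, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute
    the statistic, and take the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-participant correctness flags (1 = algorithm agreed
# with the dopamine-transporter reference standard).
correct = [1] * 38 + [0] * 4
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(correct, mean, n_boot=2000)
print(round(mean(correct), 3), (round(lo, 3), round(hi, 3)))
```

The same resampling loop works for AUC or any other statistic: only the `stat` callable changes.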
|
237
|
Alam M, Hallak JA. AI-automated referral for patients with visual impairment. Lancet Digit Health 2021; 3:e2-e3. [PMID: 33735064 DOI: 10.1016/s2589-7500(20)30286-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 11/19/2020] [Indexed: 10/22/2022]
Affiliation(s)
- Minhaj Alam
- Department of Ophthalmology and Visual Sciences at the University of Illinois at Chicago, Chicago, IL 60612, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Joelle A Hallak
- Department of Ophthalmology and Visual Sciences at the University of Illinois at Chicago, Chicago, IL 60612, USA
|
238
|
Benet D, Pellicer-Valero OJ. Artificial Intelligence: the unstoppable revolution in ophthalmology. Surv Ophthalmol 2021; 67:252-270. [PMID: 33741420 DOI: 10.1016/j.survophthal.2021.03.003] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Revised: 01/31/2021] [Accepted: 03/08/2021] [Indexed: 12/18/2022]
Abstract
Artificial Intelligence (AI) is an unstoppable force that is starting to permeate all aspects of our society as part of the revolution being brought into our lives (and into medicine) by the digital era, and accelerated by the current COVID-19 pandemic. As the population ages and developing countries move forward, AI-based systems may be a key asset in streamlining the screening, staging, and treatment planning of sight-threatening eye conditions, offloading the most tedious tasks from the experts, allowing for a greater population coverage, and bringing the best possible care to every patient. This paper presents a review of the state of the art of AI in the field of ophthalmology, focusing on the strengths and weaknesses of current systems, and defining the vision that will enable us to advance scientifically in this digital era. It starts with a thorough yet accessible introduction to the algorithms underlying all modern AI applications. Then, a critical review of the main AI applications in ophthalmology is presented, including Diabetic Retinopathy, Age-Related Macular Degeneration, Retinopathy of Prematurity, Glaucoma, and other AI-related topics such as image enhancement. The review finishes with a brief discussion on the opportunities and challenges that the future of this field might hold.
Affiliation(s)
- Oscar J Pellicer-Valero
- Intelligent Data Analysis Laboratory, Department of Electronic Engineering, ETSE (Engineering School), Universitat de València (UV), Valencia, Spain
|
239
|
Key factors in a rigorous longitudinal image-based assessment of retinopathy of prematurity. Sci Rep 2021; 11:5369. [PMID: 33686091 PMCID: PMC7940603 DOI: 10.1038/s41598-021-84723-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2020] [Accepted: 02/15/2021] [Indexed: 12/18/2022] Open
Abstract
To describe a database of longitudinally graded telemedicine retinal images to be used as a comparator for future studies assessing grader recall bias and ability to detect typical progression (e.g. International Classification of Retinopathy of Prematurity (ICROP) stages) as well as incremental changes in retinopathy of prematurity (ROP). The cohort comprised retinal images from 84 eyes of 42 patients who were sequentially screened for ROP over 6 consecutive weeks in a telemedicine program and then followed to vascular maturation or treatment, and then disease stabilization. De-identified retinal images across the 6 weekly exams (2520 total images) were graded by an ROP expert based on whether ROP had improved, worsened, or stayed the same compared to the prior week’s images, corresponding to an overall clinical “gestalt” score. Subsequently, we examined which parameters might have influenced the examiner’s ability to detect longitudinal change; images were graded by the same ROP expert by image view (central, inferior, nasal, superior, temporal) and by retinal components (vascular tortuosity, vascular dilation, stage, hemorrhage, vessel growth), again determining if each particular retinal component or ROP in each image view had improved, worsened, or stayed the same compared to the prior week’s images. Agreement between gestalt scores and view, component, and component-by-view scores was assessed using percent agreement, absolute agreement, and Cohen’s weighted kappa statistic to determine if any of the hypothesized image features correlated with the ability to predict ROP disease trajectory in patients. The central view showed substantial agreement with gestalt scores (κ = 0.63), with moderate agreement in the remaining views. Of retinal components, vascular tortuosity showed the most overall agreement with gestalt (κ = 0.42–0.61), with only slight to fair agreement for all other components.
This is a well-defined ROP database graded by one expert in a real-world setting in a masked fashion that correlated with the actual (remote in time) exams and known outcomes. This provides a foundation for subsequent study of telemedicine’s ability to longitudinally assess ROP disease trajectory, as well as for potential artificial intelligence approaches to retinal image grading, in order to expand patient access to timely, accurate ROP screening.
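Agreement between the gestalt and per-view scores above is measured with Cohen's weighted kappa. A minimal pure-Python sketch of the linearly weighted statistic, with made-up three-level ratings (not the study's data) for illustration:

```python
def weighted_kappa(r1, r2, categories):
    """Cohen's linearly weighted kappa for two raters over ordered
    categories; assumes the raters' marginals imply some expected
    disagreement (exp > 0)."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)

    def w(i, j):  # linear disagreement weight, 0 on the diagonal
        return abs(i - j) / (k - 1)

    # Observed vs chance-expected weighted disagreement.
    obs = sum(w(idx[a], idx[b]) for a, b in zip(r1, r2)) / n
    p1 = [sum(1 for a in r1 if a == c) / n for c in categories]
    p2 = [sum(1 for b in r2 if b == c) / n for c in categories]
    exp = sum(p1[i] * p2[j] * w(i, j) for i in range(k) for j in range(k))
    return 1.0 - obs / exp

# Hypothetical weekly change grades: expert "gestalt" vs central-view scores.
cats = ["worsened", "unchanged", "improved"]
gestalt = ["improved", "unchanged", "unchanged", "worsened", "improved", "unchanged"]
central = ["improved", "unchanged", "improved", "worsened", "improved", "unchanged"]
print(round(weighted_kappa(gestalt, central, cats), 3))
```

The linear weights make near-miss disagreements (e.g. "unchanged" vs "improved") count less than opposite calls, which is why weighted kappa suits ordinal change grades like these.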
|
240
|
Campbell JP, Singh P, Redd TK, Brown JM, Shah PK, Subramanian P, Rajan R, Valikodath N, Cole E, Ostmo S, Chan RVP, Venkatapathy N, Chiang MF, Kalpathy-Cramer J. Applications of Artificial Intelligence for Retinopathy of Prematurity Screening. Pediatrics 2021; 147:e2020016618. [PMID: 33637645 PMCID: PMC7924138 DOI: 10.1542/peds.2020-016618] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/29/2020] [Indexed: 11/24/2022] Open
Abstract
OBJECTIVES Childhood blindness from retinopathy of prematurity (ROP) is increasing as a result of improvements in neonatal care worldwide. We evaluate the effectiveness of artificial intelligence (AI)-based screening in an Indian ROP telemedicine program and whether differences in ROP severity between neonatal care units (NCUs) identified by using AI are related to differences in oxygen-titrating capability. METHODS External validation study of an existing AI-based quantitative severity scale for ROP on a data set of images from the Retinopathy of Prematurity Eradication Save Our Sight ROP telemedicine program in India. All images were assigned an ROP severity score (1-9) by using the Imaging and Informatics in Retinopathy of Prematurity Deep Learning system. We calculated the area under the receiver operating characteristic curve and sensitivity and specificity for treatment-requiring retinopathy of prematurity. Using multivariable linear regression, we evaluated the mean and median ROP severity in each NCU as a function of mean birth weight, gestational age, and the presence of oxygen blenders and pulse oxygenation monitors. RESULTS The area under the receiver operating characteristic curve for detection of treatment-requiring retinopathy of prematurity was 0.98, with 100% sensitivity and 78% specificity. We found higher median (interquartile range) ROP severity in NCUs without oxygen blenders and pulse oxygenation monitors, most apparent in bigger infants (>1500 g and 31 weeks' gestation: 2.7 [2.5-3.0] vs 3.1 [2.4-3.8]; P = .007, with adjustment for birth weight and gestational age). CONCLUSIONS Integration of AI into ROP screening programs may lead to improved access to care for secondary prevention of ROP and may facilitate assessment of disease epidemiology and NCU resources.
Affiliation(s)
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute and
- Contributed equally as co-first authors
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
- Contributed equally as co-first authors
- Travis K Redd
- Department of Ophthalmology, Casey Eye Institute and
- James M Brown
- Department of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Parag K Shah
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Prema Subramanian
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
- Renu Rajan
- Department of Retina and Vitreous, Aravind Eye Hospital, Madurai, India; and
- Nita Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary and University of Illinois at Chicago, Chicago, Illinois
- Emily Cole
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute and
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary and University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute and
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
|
241
|
Bao Y, Ming WK, Mou ZW, Kong QH, Li A, Yuan TF, Mi XS. Current Application of Digital Diagnosing Systems for Retinopathy of Prematurity. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105871. [PMID: 33309305 DOI: 10.1016/j.cmpb.2020.105871] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Accepted: 11/18/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Retinopathy of prematurity (ROP), a proliferative vascular eye disease, is one of the leading causes of blindness in childhood and prevails in premature infants with low birth weight. The recent progress in digital image analysis offers novel strategies for ROP diagnosis. This paper provides a comprehensive review of the development of digital diagnosing systems for ROP for software researchers. It may also be adopted as a guide for ophthalmologists in selecting the most suitable diagnostic software in the clinical setting, particularly for remote ophthalmic support. METHODS We review the latest literature concerning the application of digital diagnosing systems for ROP. The diagnosing systems are analyzed and categorized. Articles published between 1998 and 2020 were screened using the search engines PubMed and Google Scholar. RESULTS Telemedicine is a method of remote image interpretation that can provide medical service to remote regions, yet requires training of local operators. On the basis of image collection in telemedicine, computer-based image analytical systems for ROP were later developed. So far, the aforementioned systems have mainly been developed by virtue of classic machine learning, deep learning (DL) and multiple instance learning. During the past two decades, various computer-aided systems for ROP based on classic machine learning (e.g. RISA, ROPtool, CAIER) became available and have achieved satisfactory performance. Further, automated systems for ROP diagnosis based on DL have been developed for clinical applications and exhibit high accuracy. Moreover, multiple instance learning is another method to establish an automated system for ROP detection besides DL, which, however, warrants further investigation in the future. CONCLUSION At present, the incorporation of computer-based image analysis with telemedicine potentially enables the detection, supervision and timely treatment of ROP for preterm babies.
Affiliation(s)
- Yuekun Bao
- Department of Ophthalmology, the First Affiliated Hospital of Jinan University, Guangzhou, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Wai-Kit Ming
- Clinical Medicine, International School, Jinan University, Guangzhou, China
- Zhi-Wei Mou
- Department of Rehabilitation, the First Affiliated Hospital of Jinan University, Guangzhou, China
- Qi-Hang Kong
- Department of Ophthalmology, the First Affiliated Hospital of Jinan University, Guangzhou, China
- Ang Li
- Guangdong - Hong Kong - Macau Institute of CNS Regeneration, Joint International Research Laboratory of CNS Regeneration Ministry of Education, Jinan University, Guangzhou, China; Bioland Laboratory (Guangzhou Regenerative Medicine and Health Guangdong Laboratory), Guangzhou, China
- Ti-Fei Yuan
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xue-Song Mi
- Department of Ophthalmology, the First Affiliated Hospital of Jinan University, Guangzhou, China; Changsha Academician Expert Workstation, Aier Eye Hospital Group, Changsha, China
|
242
|
Panda BB, Thakur S, Mohapatra S, Parida S. Artificial intelligence in ophthalmology: A new era is beginning. Artif Intell Med Imaging 2021; 2:5-12. [DOI: 10.35711/aimi.v2.i1.5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Revised: 12/31/2020] [Accepted: 02/12/2021] [Indexed: 02/06/2023] Open
Affiliation(s)
- Bijnya Birajita Panda
- Department of Ophthalmology, S.C.B Medical College and Hospital, Cuttack 753007, Odisha, India
- Subhodeep Thakur
- Department of Ophthalmology, S.C.B Medical College and Hospital, Cuttack 753007, Odisha, India
- Sumita Mohapatra
- Department of Ophthalmology, S.C.B Medical College and Hospital, Cuttack 753007, Odisha, India
- Subhabrata Parida
- Department of Ophthalmology, S.C.B Medical College and Hospital, Cuttack 753007, Odisha, India
|
243
|
Oke I, VanderVeen D. Machine Learning Applications in Pediatric Ophthalmology. Semin Ophthalmol 2021; 36:210-217. [PMID: 33641598 DOI: 10.1080/08820538.2021.1890151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Purpose: To describe emerging applications of machine learning (ML) in pediatric ophthalmology with an emphasis on the diagnosis and treatment of disorders affecting visual development. Methods: Literature review of studies applying ML algorithms to problems in pediatric ophthalmology. Results: At present, the ML literature emphasizes applications in retinopathy of prematurity. However, there are increasing efforts to apply ML techniques in the diagnosis of amblyogenic conditions such as pediatric cataracts, strabismus, and high refractive error. Conclusions: A greater understanding of the principles governing ML will enable pediatric eye care providers to apply the methodology to unexplored challenges within the subspecialty.
Affiliation(s)
- Isdin Oke
- Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Deborah VanderVeen
- Department of Ophthalmology, Boston Children's Hospital, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
|
244
|
Gong D, Kras A, Miller JB. Application of Deep Learning for Diagnosing, Classifying, and Treating Age-Related Macular Degeneration. Semin Ophthalmol 2021; 36:198-204. [PMID: 33617390 DOI: 10.1080/08820538.2021.1889617] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Age-related macular degeneration (AMD) affects nearly 200 million people and is the third leading cause of irreversible vision loss worldwide. Deep learning, a branch of artificial intelligence in which image recognition is learned from pre-existing datasets, creates an opportunity for more accurate and efficient diagnosis, classification, and treatment of AMD at both the individual and population levels. Current algorithms based on fundus photography and optical coherence tomography imaging have already achieved diagnostic accuracy comparable to that of human graders. This accuracy can be increased further when deep learning algorithms are applied simultaneously to multiple diagnostic imaging modalities. Combined with advances in telemedicine and imaging technology, deep learning can enable larger populations of patients to be screened than would otherwise be possible and allow ophthalmologists to focus on the patients who need treatment, thus reducing the number of patients with significant visual impairment from AMD.
Affiliation(s)
- Dan Gong
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Ashley Kras
- Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
- John B Miller
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA; Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
|
245
|
Ali D, Frimpong S. DeepHaul: a deep learning and reinforcement learning-based smart automation framework for dump trucks. PROGRESS IN ARTIFICIAL INTELLIGENCE 2021. [DOI: 10.1007/s13748-021-00233-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
246
|
Xi IL, Wu J, Guan J, Zhang PJ, Horii SC, Soulen MC, Zhang Z, Bai HX. Deep learning for differentiation of benign and malignant solid liver lesions on ultrasonography. Abdom Radiol (NY) 2021; 46:534-543. [PMID: 32681268 DOI: 10.1007/s00261-020-02564-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
PURPOSE The ability to reliably distinguish benign from malignant solid liver lesions on ultrasonography can increase access, decrease costs, and help to better triage patients for biopsy. In this study, we used deep learning to differentiate benign from malignant focal solid liver lesions based on their ultrasound appearance. METHODS Non-cystic liver lesions with a definite diagnosis by histopathology or MRI were included. Among the 596 patients who met the inclusion criteria, there were 911 images of individual liver lesions, of which 535 were malignant and 376 were benign. Our training set contained 660 lesions, augmented dynamically during training for a total of 330,000 images; our test set contained 79 images. A neural network with the ResNet50 architecture was fine-tuned from weights pre-trained on ImageNet. The accuracy of the final model was compared with expert interpretation. Two separate datasets were used in training and evaluation: one with all lesions and one with lesions deemed to be of uncertain diagnosis based on the Code Abdomen rating system. RESULTS Our model trained on the complete set of all lesions achieved a test accuracy of 0.84 (95% CI 0.74-0.90) compared to expert 1 with a test accuracy of 0.80 (95% CI 0.70-0.87) and expert 2 with a test accuracy of 0.73 (95% CI 0.63-0.82). Our model trained on the uncertain set of lesions achieved a test accuracy of 0.79 (95% CI 0.69-0.87) compared to expert 1 with a test accuracy of 0.70 (95% CI 0.59-0.78) and expert 2 with a test accuracy of 0.66 (95% CI 0.55-0.75). On the uncertain dataset, compared to all experts averaged, the model had higher test accuracy (0.79 vs. 0.68, p = 0.025). CONCLUSION The deep learning algorithm proposed in the current study improves differentiation of benign from malignant solid liver lesions on ultrasound and performs comparably to expert radiologists. Deep learning tools can potentially be used to improve the accuracy and efficiency of clinical workflows.
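The abstracts in this listing repeatedly report accuracy, sensitivity, and specificity with 95% confidence intervals. As an illustrative sketch only, not the authors' analysis, these quantities can be computed from a 2x2 confusion matrix with a normal-approximation (Wald) interval; all counts below are hypothetical:

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 79-image test set (illustrative only).
sens, spec, acc = confusion_metrics(tp=40, fp=8, tn=26, fn=5)
point, lo, hi = proportion_ci(66, 79)  # 66 of 79 images classified correctly
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
print(f"accuracy 95% CI: {point:.2f} ({lo:.2f}-{hi:.2f})")
```

Note that published CIs are often exact (Clopper-Pearson) rather than Wald intervals, so small discrepancies against reported ranges are expected.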
Affiliation(s)
- Ianto Lin Xi
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA
- Jing Wu
- Department of Radiology, The Second Xiangya Hospital, Central South University, No. 139 Middle Renmin Road, Changsha, 410011, Hunan, China
- Jing Guan
- Department of Radiology, The Second Xiangya Hospital, Central South University, No. 139 Middle Renmin Road, Changsha, 410011, Hunan, China
- Paul J Zhang
- Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA
- Steven C Horii
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA
- Michael C Soulen
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA
- Zishu Zhang
- Department of Radiology, The Second Xiangya Hospital, Central South University, No. 139 Middle Renmin Road, Changsha, 410011, Hunan, China
- Harrison X Bai
- Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, 02903, USA
|
247
|
Abstract
Digital retinal imaging is at the core of a revolution that is continually improving the screening, diagnosis, documentation, monitoring, and treatment of infant retinal diseases. Historically, imaging the retina of infants had been limited and difficult to obtain. Recent advances in photographic instrumentation have significantly improved the ability to obtain high quality multimodal images of the infant retina. These include color fundus photography with different camera angles, ultrasonography, fundus fluorescein angiography, optical coherence tomography, and optical coherence tomography angiography. We provide a summary of the current literature on retinal imaging in infants and highlight areas where further research is required.
|
248
|
Coyner AS, Chen J, Campbell JP, Ostmo S, Singh P, Kalpathy-Cramer J, Chiang MF. Diagnosability of Synthetic Retinal Fundus Images for Plus Disease Detection in Retinopathy of Prematurity. AMIA ... ANNUAL SYMPOSIUM PROCEEDINGS. AMIA SYMPOSIUM 2021; 2020:329-337. [PMID: 33936405 PMCID: PMC8075515] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Advances in generative adversarial networks have allowed for engineering of highly-realistic images. Many studies have applied these techniques to medical images. However, evaluation of generated medical images often relies upon image quality and reconstruction metrics, and subjective evaluation by laypersons. This is acceptable for generation of images depicting everyday objects, but not for medical images, where there may be subtle features experts rely upon for diagnosis. We implemented the pix2pix generative adversarial network for retinal fundus image generation, and evaluated the ability of experts to identify generated images as such and to form accurate diagnoses of plus disease in retinopathy of prematurity. We found that, while experts could discern between real and generated images, the diagnoses between image sets were similar. By directly evaluating and confirming physicians' abilities to diagnose generated retinal fundus images, this work supports conclusions that generated images may be viable for dataset augmentation and physician training.
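The study's design compares diagnoses formed on real versus generated images. A minimal sketch, assuming one would measure agreement between the two aligned sets of grades with Cohen's kappa (the grade labels and data below are hypothetical, not from the paper):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two aligned label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of positions where the labels match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independent marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical plus-disease grades by one expert on paired real vs.
# generated fundus images (illustrative only).
real = ["plus", "normal", "pre-plus", "normal", "plus", "normal"]
gen = ["plus", "normal", "pre-plus", "pre-plus", "plus", "normal"]
print(f"kappa = {cohens_kappa(real, gen):.2f}")
```

Values near 1 would indicate that diagnoses on generated images closely track those on real images; the abstract does not state which agreement statistic the authors actually used.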
Affiliation(s)
- Jimmy Chen
- Ophthalmology, Oregon Health & Science University, Portland, OR, United States
- J Peter Campbell
- Ophthalmology, Oregon Health & Science University, Portland, OR, United States
- Susan Ostmo
- Ophthalmology, Oregon Health & Science University, Portland, OR, United States
- Praveer Singh
- Radiology, MGH/Harvard Medical School, Charlestown, MA, United States
- MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Jayashree Kalpathy-Cramer
- Radiology, MGH/Harvard Medical School, Charlestown, MA, United States
- MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Michael F Chiang
- Medical Informatics & Clinical Epidemiology
- Ophthalmology, Oregon Health & Science University, Portland, OR, United States
|
249
|
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824 DOI: 10.1016/j.media.2021.101971] [Citation(s) in RCA: 99] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 01/12/2021] [Indexed: 02/06/2023]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Owing to its powerful performance, deep learning is becoming increasingly popular in related applications, such as lesion segmentation, biomarker segmentation, disease diagnosis, and image synthesis. A review summarizing the recent developments in deep learning for fundus images is therefore timely. In this review, we introduce 143 application papers organized in a carefully designed hierarchy. Moreover, 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are identified and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly released datasets at https://github.com/nkicsl/Fundus_Review to keep pace with the rapid development of this field.
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
- Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
- Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
|
250
|
Arima M, Fujii Y, Sonoda KH. Translational Research in Retinopathy of Prematurity: From Bedside to Bench and Back Again. J Clin Med 2021; 10:331. [PMID: 33477419 PMCID: PMC7830975 DOI: 10.3390/jcm10020331] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/09/2021] [Accepted: 01/15/2021] [Indexed: 12/11/2022] Open
Abstract
Retinopathy of prematurity (ROP), a vascular proliferative disease affecting preterm infants, is a leading cause of childhood blindness. Various studies have investigated the pathogenesis of ROP. Clinical experience indicates that oxygen levels are strongly correlated with ROP development, which led to the development of oxygen-induced retinopathy (OIR) as an animal model of ROP. OIR has been used extensively to investigate the molecular mechanisms underlying ROP and to evaluate the efficacy of new drug candidates. Large clinical trials have demonstrated the efficacy of anti-vascular endothelial growth factor (VEGF) agents in treating ROP, and anti-VEGF therapy is now becoming the first-line treatment worldwide. Anti-VEGF therapy has advantages over conventional treatments, including being minimally invasive with a low risk of refractive error. However, long-term safety concerns and the risk of late recurrence limit this treatment. There remains an unmet medical need for novel ROP therapies that are safe and minimally invasive. The recent progress in biotechnology has contributed greatly to translational research. In this review, we outline how basic ROP research has evolved alongside clinical experience and the subsequent emergence of new drugs. We discuss previous and ongoing trials and present the candidate molecules expected to become novel targets.
Affiliation(s)
- Mitsuru Arima
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan
- Center for Clinical and Translational Research, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 8128582, Japan
- Yuya Fujii
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan
- Koh-Hei Sonoda
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, Fukuoka 8128582, Japan
|