251
Lin C, Song X, Li L, Li Y, Jiang M, Sun R, Zhou H, Fan X. Detection of active and inactive phases of thyroid-associated ophthalmopathy using deep convolutional neural network. BMC Ophthalmol 2021;21:39. PMID: 33446163; PMCID: PMC7807896; DOI: 10.1186/s12886-020-01783-5.
Abstract
Background This study aimed to establish a deep learning system for detecting the active and inactive phases of thyroid-associated ophthalmopathy (TAO) using magnetic resonance imaging (MRI). Such a system could provide faster, more accurate, and more objective assessments across populations. Methods A total of 160 MRI images of patients with TAO, who visited the Ophthalmology Clinic of the Ninth People's Hospital, were retrospectively obtained for this study. Of these, 80% were used for training and validation, and 20% were used for testing. The deep learning system, based on a deep convolutional neural network, was built to distinguish patients in the active phase from those in the inactive phase. Accuracy, precision, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve were analyzed. In addition, a visualization method was applied to explain the operation of the networks. Results Network A was derived from the Visual Geometry Group (VGG) network. Its accuracy, specificity, and sensitivity were 0.863±0.055, 0.896±0.042, and 0.750±0.136, respectively. Because vanishing gradients recurred during the training of network A, we incorporated components of the Residual Neural Network (ResNet) to build network B. After this modification, network B improved the sensitivity (0.821±0.021) while maintaining good accuracy (0.855±0.018) and specificity (0.865±0.021). Conclusions The deep convolutional neural network could automatically detect the activity of TAO from MRI images with strong robustness, less subjective judgment, and less measurement error. This system could standardize the diagnostic process and speed up treatment decision-making for TAO.
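The vanishing-gradient problem and the residual-connection fix mentioned in this abstract can be illustrated with a toy calculation (not taken from the paper): backpropagated gradients through a deep chain of layers are products of local derivatives, so factors below 1 shrink the gradient exponentially, while a residual block y = x + f(x) contributes the factor 1 + f'(x) and preserves an identity path.

```python
def plain_chain_gradient(local_deriv, depth):
    """Gradient magnitude through `depth` stacked layers (plain chain rule)."""
    g = 1.0
    for _ in range(depth):
        g *= local_deriv            # product of local derivatives
    return g

def residual_chain_gradient(local_deriv, depth):
    """Same chain, but each layer is wrapped in a residual connection."""
    g = 1.0
    for _ in range(depth):
        g *= 1.0 + local_deriv      # identity path adds 1 to each factor
    return g

vanished = plain_chain_gradient(0.5, 30)     # shrinks toward zero
preserved = residual_chain_gradient(0.5, 30)  # stays far from zero
```

The numbers (30 layers, local derivative 0.5) are arbitrary; the point is only that the plain product collapses while the residual product does not.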
Affiliation(s)
- Chenyi Lin, Xuefei Song, Lunhao Li, Yinwei Li, Mengda Jiang, Rou Sun, Huifang Zhou, Xianqun Fan: Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, 639 Zhi Zao Ju Road, Shanghai, 200011, China; Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
252
Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, Liu Y, Topol E, Dean J, Socher R. Deep learning-enabled medical computer vision. NPJ Digit Med 2021;4:5. PMID: 33420381; PMCID: PMC7794558; DOI: 10.1038/s41746-020-00376-2.
Abstract
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles that remain for real-world clinical deployment of these technologies.
Affiliation(s)
- Nikhil Naik, Ali Madani: Salesforce AI Research, San Francisco, CA, USA
- Yun Liu, Jeff Dean: Google Research, Mountain View, CA, USA
- Eric Topol: Scripps Research Translational Institute, La Jolla, CA, USA
253
Zheng C, Yao Q, Lu J, Xie X, Lin S, Wang Z, Wang S, Fan Z, Qiao T. Detection of Referable Horizontal Strabismus in Children's Primary Gaze Photographs Using Deep Learning. Transl Vis Sci Technol 2021;10:33. PMID: 33532144; PMCID: PMC7846951; DOI: 10.1167/tvst.10.1.33.
Abstract
PURPOSE This study implements and demonstrates a deep learning (DL) approach for screening referable horizontal strabismus based on primary gaze photographs, using clinical assessments as a reference. The purpose was to develop and evaluate DL algorithms that screen for referable horizontal strabismus in children's primary gaze photographs. METHODS DL algorithms were developed and trained using primary gaze photographs, collected at two tertiary hospitals, of children with primary horizontal strabismus who underwent surgery as well as orthotropic children who underwent routine refractive tests. A total of 7026 images (3829 non-strabismus images from 3021 orthotropic [healthy] subjects and 3197 strabismus images from 2772 subjects) were used to develop the DL algorithms. The DL model was evaluated by 5-fold cross-validation and tested on an independent validation data set of 277 images. Diagnostic performance was assessed by calculating the accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). RESULTS Using 5-fold cross-validation during training, the average AUC of the DL models was approximately 0.99. In the external validation data set, the DL algorithm achieved an AUC of 0.99 with a sensitivity of 94.0% and a specificity of 99.3%. The DL algorithm's performance (accuracy 0.95) in diagnosing referable horizontal strabismus was better than that of the resident ophthalmologists (accuracy 0.81 to 0.85). CONCLUSIONS We developed and evaluated a DL model that automatically identifies referable horizontal strabismus from primary gaze photographs. Its diagnostic performance is comparable to or better than that of ophthalmologists. TRANSLATIONAL RELEVANCE DL methods that automate the detection of referable horizontal strabismus can facilitate clinical assessment and screening for children at risk of strabismus.
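The diagnostic measures reported throughout these abstracts (sensitivity, specificity, AUC) can all be computed directly from labels and model scores. A minimal standard-library sketch, using made-up data for illustration only:

```python
def confusion_counts(y_true, y_pred):
    """Binary confusion-matrix counts (label 1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity(tp, fn):
    return tp / (tp + fn)          # true-positive rate

def specificity(tn, fp):
    return tn / (tn + fp)          # true-negative rate

def auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative (the Wilcoxon/Mann-Whitney formulation)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels and model scores (not data from any study here):
y_true = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
y_pred = [1 if s >= 0.45 else 0 for s in scores]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
```

The threshold 0.45 is arbitrary; AUC is threshold-free, while sensitivity and specificity depend on the chosen cutoff.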
Affiliation(s)
- Ce Zheng: Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China; Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiaotong University, Shanghai, China
- Qian Yao: Department of Ophthalmology, Zhaoqing Gaoyao People's Hospital, Zhaoqing, Guangdong, China
- Jiewei Lu, Zhun Fan: Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Xiaolin Xie, Shibin Lin: Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Zilei Wang, Siyin Wang, Tong Qiao: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiaotong University, Shanghai, China
254
Artificial Intelligence in Pediatrics. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_316-1.
255
Pandey B, Kumar Pandey D, Pratap Mishra B, Rhmann W. A comprehensive survey of deep learning in the field of medical imaging and medical natural language processing: Challenges and research directions. Journal of King Saud University - Computer and Information Sciences 2021. DOI: 10.1016/j.jksuci.2021.01.007.
256
Alrassi J, Katsufrakis PJ, Chandran L. Technology Can Augment, but Not Replace, Critical Human Skills Needed for Patient Care. Acad Med 2021;96:37-43. PMID: 32910005; DOI: 10.1097/acm.0000000000003733.
Abstract
The practice of medicine is changing rapidly as a consequence of electronic health record adoption, new technologies for patient care, disruptive innovations that break down professional hierarchies, and evolving societal norms. Collectively, these have modified the physician's role as the gatekeeper of health care, increased shift-based care, and amplified interprofessional team-based care. Technological innovations present opportunities as well as challenges. Artificial intelligence, which has great potential, has already transformed some tasks, particularly those involving image interpretation. Ubiquitous access to information via the Internet by physicians and patients alike presents benefits as well as drawbacks: patients and providers have ready access to virtually all of human knowledge, but some websites are contaminated with misinformation and many people have difficulty differentiating between solid, evidence-based data and untruths. The role of the future physician will shift as complexity in health care increases and as artificial intelligence and other technologies advance. These technological advances demand new skills of physicians; memory and knowledge accumulation will diminish in importance while information management skills become more important. In parallel, medical educators must enhance their teaching and assessment of critical human skills (e.g., clear communication, empathy) in the delivery of patient care. The authors emphasize the enduring role of critical human skills in safe and effective patient care even as medical practice is increasingly guided by artificial intelligence and related technology, and they suggest new and longitudinal ways of assessing essential noncognitive skills to meet the demands of the future. The authors envision practical and achievable benefits accruing to patients and providers if practitioners leverage technological advancements to facilitate the development of their critical human skills.
Affiliation(s)
- James Alrassi: resident physician, Department of Otolaryngology-Head and Neck Surgery, State University of New York Downstate Health Sciences University, Brooklyn, New York; ORCID: https://orcid.org/0000-0003-4851-1697
- Peter J Katsufrakis: president and chief executive officer, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-9077-9190
- Latha Chandran: executive dean and founding chair, Department of Medical Education, University of Miami Miller School of Medicine, Miami, Florida; ORCID: https://orcid.org/0000-0002-7538-4331
257
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_200-1.
258
Toliušis R, Kurasova O, Bernatavičienė J. Semantic Segmentation of Eye Fundus Images Using Convolutional Neural Networks. Informacijos Mokslai 2020. DOI: 10.15388/im.2020.90.53.
Abstract
The article reviews problems of eye fundus image analysis and the semantic segmentation algorithms used to delineate retinal vessels and the optic disc. Various diseases, such as glaucoma, hypertension, diabetic retinopathy, and macular degeneration, can be diagnosed from changes and anomalies of the vessels and optic disc. Convolutional neural networks, especially the U-Net architecture, are well suited for semantic segmentation, and a number of U-Net modifications that deliver excellent results have recently been developed.
259
Convolutional Neural Networks with Transfer Learning for Recognition of COVID-19: A Comparative Study of Different Approaches. AI 2020. DOI: 10.3390/ai1040034.
Abstract
In this work, we propose and analyze four approaches to judge the ability of convolutional neural networks (CNNs) to effectively and efficiently transfer image representations learned on the ImageNet dataset to the task of recognizing COVID-19. For this purpose, we use VGG16, ResNetV2, InceptionResNetV2, DenseNet121, and MobileNetV2 CNN models pre-trained on the ImageNet dataset to extract features from X-ray images of COVID and non-COVID patients. Our simulation studies reveal that these pre-trained models differ in their ability to transfer image representations. We find that, in the approaches we propose, performance in detecting COVID-19 is better when either ResNetV2 or DenseNet121 is used to extract features. One important finding of our study is that using principal component analysis for feature selection improves efficiency. The approach using the fusion of features outperforms all the others, achieving an accuracy of 0.94 on a three-class classification problem. This work will be useful not only for COVID-19 detection but also for any domain with small datasets.
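The feature-fusion idea in the abstract above (extracting features with several frozen pre-trained networks and concatenating them before classification) can be sketched with hypothetical stand-in extractors. The functions below are illustrative placeholders only; in the study the extractors would be pre-trained CNNs such as ResNetV2 and DenseNet121, each producing a feature vector from its penultimate layer:

```python
# Hypothetical stand-ins for two frozen, pre-trained feature extractors.
# Real code would pass each image through a CNN and keep the penultimate
# layer's activations; here we use crude hand-made statistics instead.
def extractor_a(pixels):
    return [sum(pixels) / len(pixels), max(pixels)]

def extractor_b(pixels):
    return [min(pixels), sum(1 for v in pixels if v > 0.5)]

def fused_features(pixels):
    """Fusion by concatenation: one long vector for the final classifier."""
    return extractor_a(pixels) + extractor_b(pixels)

image = [0.1, 0.7, 0.9, 0.3]   # toy 4-pixel "image"
vec = fused_features(image)    # 4-dimensional fused feature vector
```

In the fused approach each image yields the concatenation of all extractors' outputs, and a downstream classifier (and optionally PCA, as the abstract notes) operates on that combined vector.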
260
Vilela MA, Amaral CE, Ferreira MAT. Retinal vascular tortuosity: Mechanisms and measurements. Eur J Ophthalmol 2020;31:1497-1506. PMID: 33307777; DOI: 10.1177/1120672120979907.
Abstract
Retinal vessel tortuosity has been used in the diagnosis and management of various clinical situations. Nevertheless, there are many gaps and points of divergence in its basic concepts, measurement standards and tools, reliable normative data, and clinical applications. In this review we discuss the triggering causes of retinal vessel tortuosity, the resources used to assess and quantify it, and current limitations.
Affiliation(s)
- Manuel AP Vilela, Carlos EV Amaral: Federal University of Health Sciences of Porto Alegre, Porto Alegre, Brazil
261
Ludwig CA, Perera C, Myung D, Greven MA, Smith SJ, Chang RT, Leng T. Automatic Identification of Referral-Warranted Diabetic Retinopathy Using Deep Learning on Mobile Phone Images. Transl Vis Sci Technol 2020;9:60. PMID: 33294301; PMCID: PMC7718806; DOI: 10.1167/tvst.9.2.60.
Abstract
Purpose To evaluate the performance of a deep learning algorithm in the detection of referral-warranted diabetic retinopathy (RDR) on low-resolution fundus images acquired with a smartphone and indirect ophthalmoscope lens adapter. Methods An automated deep learning algorithm trained on 92,364 traditional fundus camera images was tested on a dataset of smartphone fundus images from 103 eyes acquired from two previously published studies. Images were extracted as screenshots from live video clips filmed at 1080p resolution during fundus examinations performed with a commercially available lens adapter. Each image was graded twice by a board-certified ophthalmologist and compared to the output of the algorithm, which classified each image as having RDR (moderate nonproliferative DR or worse) or no RDR. Results In spite of the presence of multiple artifacts (lens glare, lens particulates/smudging, user hands over the objective lens) and low-resolution images acquired by users with various levels of medical training, the algorithm achieved a 0.89 (95% confidence interval [CI] 0.83-0.95) area under the curve with an 89% sensitivity (95% CI 81%-100%) and 83% specificity (95% CI 77%-89%) for detecting RDR on mobile phone-acquired fundus photos. Conclusions The fully data-driven artificial intelligence-based grading algorithm herein can be used to screen fundus photos taken with mobile devices and identify with high reliability which cases should be referred to an ophthalmologist for further evaluation and treatment. Translational Relevance The implementation of this algorithm on a global basis could drastically reduce the rate of vision loss attributed to DR.
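Confidence intervals like the ones quoted above are commonly obtained by bootstrap resampling of the test set (the paper does not state its CI method, so this is a generic sketch). A minimal standard-library version on invented labels:

```python
import random

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for accuracy: resample cases with replacement,
    recompute the metric each time, and take the alpha/2 and 1-alpha/2
    percentiles of the resulting distribution."""
    rng = random.Random(seed)       # fixed seed for reproducibility
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(accuracy([y_true[i] for i in idx],
                              [y_pred[i] for i in idx]))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical test-set labels and predictions (not the study's data):
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
lo, hi = bootstrap_ci(y_true, y_pred)
```

The same resampling loop works for sensitivity, specificity, or AUC by swapping in the corresponding metric function.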
Affiliation(s)
- Cassie A. Ludwig: Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA, USA; Retina Service, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Chandrashan Perera, Robert T. Chang, Theodore Leng: Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA, USA
- David Myung: Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA, USA; Department of Chemical Engineering, Stanford University, Stanford, CA, USA; VA Palo Alto Health Care System, Palo Alto, CA, USA
- Margaret A. Greven: Department of Ophthalmology, Wake Forest Baptist Health, Winston Salem, NC, USA
- Stephen J. Smith: Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA, USA; VA Palo Alto Health Care System, Palo Alto, CA, USA
262
Chang K, Beers AL, Brink L, Patel JB, Singh P, Arun NT, Hoebel KV, Gaw N, Shah M, Pisano ED, Tilkin M, Coombs LP, Dreyer KJ, Allen B, Agarwal S, Kalpathy-Cramer J. Multi-Institutional Assessment and Crowdsourcing Evaluation of Deep Learning for Automated Classification of Breast Density. J Am Coll Radiol 2020;17:1653-1662. PMID: 32592660; PMCID: PMC10757768; DOI: 10.1016/j.jacr.2020.05.015.
Abstract
OBJECTIVE We developed deep learning algorithms to automatically assess BI-RADS breast density. METHODS Using a large multi-institution cohort of 108,230 digital screening mammograms from the Digital Mammographic Imaging Screening Trial, we investigated the effect of data, model, and training parameters on overall model performance, and obtained a crowdsourced evaluation from attendees of the ACR 2019 Annual Meeting. RESULTS Our best-performing algorithm achieved good agreement with radiologists who were qualified interpreters of mammograms, with a four-class κ of 0.667. When training was performed with randomly sampled images from the data set rather than with an equal number of images from each density category, the model predictions were biased away from low-prevalence categories such as extremely dense breasts. The net result was an increase in sensitivity and a decrease in specificity for predicting dense breasts with equal-class sampling compared with random sampling. We also found that model performance degrades when evaluated on digital mammography data formats that differ from the training format, emphasizing the importance of multi-institutional training sets. Lastly, we showed that crowdsourced annotations, including those from attendees who routinely read mammograms, had higher agreement with our algorithm than with the original interpreting radiologists. CONCLUSION We demonstrated parameters that can influence model performance and showed how crowdsourcing can be used for evaluation. This study was performed in tandem with the development of the ACR AI-LAB, a platform for democratizing artificial intelligence.
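The four-class κ quoted above is Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance. A small standard-library sketch with hypothetical ratings (the category labels below are invented for illustration):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement).
    Works for any number of categories."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: sum over categories of the product of marginal rates.
    p_chance = sum(counts_a[k] * counts_b[k]
                   for k in set(counts_a) | set(counts_b)) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical four-category density calls from a model and a radiologist:
model =       ["a", "b", "b", "c", "d", "c", "b", "a"]
radiologist = ["a", "b", "c", "c", "d", "c", "b", "b"]
kappa = cohen_kappa(model, radiologist)
```

Here observed agreement is 6/8 = 0.75 while chance agreement from the marginals is 18/64, giving κ = 15/23 ≈ 0.652, noticeably lower than the raw agreement.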
Affiliation(s)
- Ken Chang, Andrew L Beers, Jay B Patel, Praveer Singh, Nishanth T Arun, Katharina V Hoebel, Nathan Gaw: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Laura Brink, Meesam Shah: American College of Radiology, Reston, Virginia
- Etta D Pisano: Chief Research Officer, ACR, Reston, Virginia; Professor in Residence, Beth Israel Lahey/Harvard Medical School, Boston, Massachusetts
- Mike Tilkin: Chief Information Officer and EVP for Technology, ACR, Reston, Virginia
- Keith J Dreyer: Chief Data Science Officer and Chief Imaging Information Officer, Massachusetts General Hospital and Brigham and Women's Hospital (MGH & BWH); Chief Executive, MGH & BWH Center for Clinical Data Science; Vice Chairman of Radiology - Informatics, MGH & BWH, Boston, Massachusetts; Associate Professor of Radiology, Harvard Medical School, Boston, Massachusetts; Chief Science Officer, ACR Data Science Institute, Reston, Virginia
- Bibb Allen: Chief Medical Officer, ACR Data Science Institute, Reston, Virginia; Secretary General, International Society of Radiology; Partner, Grandview Medical Center, Birmingham, Alabama
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital; Scientific Director, CCDS; Director, QTIM Lab and the Center for Machine Learning; Associate Professor of Radiology, MGH/Harvard Medical School, Boston, Massachusetts
263
Chandra A, Romano MR, Ting DS, Chao DL. Implementing the new normal in ophthalmology care beyond COVID-19. Eur J Ophthalmol 2020;31:321-327. PMID: 33225734; DOI: 10.1177/1120672120975331.
Abstract
The COVID-19 pandemic has altered the clinical landscape immeasurably. The need for physical distancing requires rethinking how we deliver ophthalmic care. Within healthcare, we will need to focus our resources on the five T's: utilising technology, multidisciplinary clinical teams with wide professional talents must work efficiently to reduce patient contact time, and regular testing will allow us to reduce the risk further. We must also acknowledge the explosion of different modalities for training our future ophthalmologists, and the global challenges and advantages these bring. Finally, we must not forget the psychological impact this pandemic will have on ophthalmologists and ancillary staff, and we need robust mechanisms for support.
Affiliation(s)
- Aman Chandra: Southend University Hospital NHS Foundation Trust, Essex, UK; Anglia Ruskin University, Essex, UK
- Mario R Romano: Department of Biomedical Science, Humanitas University, Milan, Italy
- Daniel SW Ting: Singapore National Eye Center, Duke-NUS Medical School, Singapore
- Daniel L Chao: Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, USA
264
Seery CW, Betesh S, Guo S, Zarbin MA, Bhagat N, Wagner RS. Update on the Use of Anti-VEGF Drugs in the Treatment of Retinopathy of Prematurity. J Pediatr Ophthalmol Strabismus 2020;57:351-362. PMID: 33211892; DOI: 10.3928/01913913-20200824-02.
Abstract
Retinopathy of prematurity (ROP) is one of the many significant consequences of premature birth and remains one of the leading causes of visual impairment in infants. Originally, cryotherapy was used to prevent the complications of vitreous hemorrhage and retinal detachment. Subsequently, laser photocoagulation, which is at least as effective and possibly safer than cryoretinopexy, was adopted as the primary treatment for type 1 ROP (stage 2 or 3 disease in zone II with plus disease or any stage disease in zone I with plus disease or stage 3 disease in zone I without plus disease). Laser therapy has been proven effective, and has a degree of permanence that is yet to be matched by alternative treatments, but can be associated with significant ocular side effects such as myopia. Treatment of type 1 ROP with anti-vascular endothelial growth factor (VEGF) agents seems to have fewer ocular side effects than laser ablation of the retina, particularly if used to treat type 1 ROP in zone I. However, ROP recurrence is a real threat after anti-VEGF therapy and long-term systemic side effects of this therapy remain under evaluation. This review focuses on the ophthalmic and systemic benefits and risks of anti-VEGF therapies for ROP as compared to retinal photocoagulation. Anti-VEGF therapies have dramatically altered the management of ROP and have also been shown to be beneficial with regard to the visual prognosis of patients with ROP, but patients so treated require frequent short- and long-term follow-up to detect and manage potential complications associated with this form of treatment. Such information also will allow clinicians to characterize the efficacy, side effect profile, and utility of intravitreal anti-VEGF agents for this condition. Prospective studies are needed to identify the optimum anti-VEGF drug and dose. [J Pediatr Ophthalmol Strabismus. 2020;57(6):351-362.].
265
Swain J, VerMilyea MT, Meseguer M, Ezcurra D. AI in the treatment of fertility: key considerations. J Assist Reprod Genet 2020;37:2817-2824. PMID: 32989510; PMCID: PMC7642046; DOI: 10.1007/s10815-020-01950-z.
Abstract
Artificial intelligence (AI) has been proposed as a potential tool to help address many existing problems related to empirical or subjective assessments of clinical and embryological decision points during the treatment of infertility. AI technologies are reviewed and potential areas for implementing algorithms are discussed, highlighting the importance of following a proper path for algorithm development and validation, including regulatory requirements, and the need for ecosystems containing enough quality data to generate them. A consensus group of fertility experts concluded that, if properly developed, AI algorithms may help practitioners around the globe standardize, automate, and improve IVF outcomes for the benefit of patients. Collaboration between AI developers and healthcare professionals is required to make this happen.
Affiliation(s)
- Marcos Meseguer: Instituto Valenciano de Infertilidad (IVI) Valencia, INCLIVA-Universidad de Valencia, Valencia, Spain
- Diego Ezcurra: EMD Serono, One Technology Place, Rockland, MA 02370, USA
266
AI papers in ophthalmology made simple. Eye (Lond) 2020;34:1947-1949. DOI: 10.1038/s41433-020-0929-6.
|
267
|
Choi RY, Brown JM, Kalpathy-Cramer J, Chan RVP, Ostmo S, Chiang MF, Campbell JP. Variability in Plus Disease Identified Using a Deep Learning-Based Retinopathy of Prematurity Severity Scale. Ophthalmol Retina 2020; 4:1016-1021. [PMID: 32380115 PMCID: PMC7867469 DOI: 10.1016/j.oret.2020.04.022] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 04/21/2020] [Accepted: 04/24/2020] [Indexed: 11/19/2022]
Abstract
PURPOSE Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide, but clinical diagnosis is subjective, which leads to treatment differences. Our goal was to determine objective differences in the diagnosis of plus disease between clinicians using an automated ROP vascular severity score. DESIGN This retrospective cohort study used data from the Imaging and Informatics in ROP Consortium, which comprises 8 tertiary care centers in North America. Fundus photographs of all infants undergoing ROP screening examinations between July 1, 2011, and December 31, 2016, were obtained. PARTICIPANTS Infants meeting ROP screening criteria who were diagnosed with plus disease and treatment initiated by an examining physician based on ophthalmoscopic examination results. METHODS An ROP severity score (1-9) was generated for each image using a deep learning (DL) algorithm. MAIN OUTCOME MEASURES The mean, median, and range of ROP vascular severity scores overall and for each examiner when the diagnosis of plus disease was made. RESULTS A total of 5255 clinical examinations in 871 babies were analyzed. Of these, 168 eyes were diagnosed with plus disease by 11 different examiners and were included in the study. The mean ± standard deviation vascular severity score for patients diagnosed with plus disease was 7.4 ± 1.9, the median was 8.5 (interquartile range, 5.8-8.9), and the range was 1.1 to 9.0. Some examiners showed within-examiner variability in the level of vascular severity they diagnosed as plus disease, and 1 examiner routinely diagnosed plus disease in patients with less severe disease than the other examiners (P < 0.01). CONCLUSIONS We observed variability both between and within examiners in the diagnosis of plus disease using DL. Prospective evaluation of clinical trial data using an objective measurement of vascular severity may help to better define the minimum level of vascular severity necessary for the diagnosis of plus disease, and how other clinical features such as zone, stage, and extent of peripheral disease ought to be incorporated into treatment decisions.
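As an illustrative aside, the per-examiner summaries this study reports (mean, median, and range of vascular severity scores at the time plus disease was diagnosed) are simple descriptive statistics. The sketch below uses only the Python standard library; the examiner names and scores are hypothetical, not data from the paper:

```python
from statistics import mean, median, stdev

def summarize(scores):
    """Descriptive statistics for one examiner's severity scores (scale 1-9)."""
    s = sorted(scores)
    return {"mean": mean(s), "sd": stdev(s), "median": median(s),
            "range": (s[0], s[-1])}

# Hypothetical scores for two examiners; examiner_B diagnoses plus disease
# at lower vascular severity than examiner_A.
by_examiner = {
    "examiner_A": [8.2, 8.7, 7.9, 8.5, 8.8],
    "examiner_B": [4.1, 5.0, 4.6, 5.3, 4.8],
}
summaries = {name: summarize(s) for name, s in by_examiner.items()}
```

Comparing such summaries across examiners is one way the between-examiner variability described above can be made visible.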
Affiliation(s)
- Rene Y Choi
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- James M Brown
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon.
|
268
|
Hartnett ME. Retinopathy of Prematurity: Evolving Treatment With Anti-Vascular Endothelial Growth Factor. Am J Ophthalmol 2020; 218:208-213. [PMID: 32450064 DOI: 10.1016/j.ajo.2020.05.025] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2020] [Revised: 05/15/2020] [Accepted: 05/16/2020] [Indexed: 12/21/2022]
Abstract
PURPOSE To discuss the evolution of retinopathy of prematurity since its first description as retrolental fibroplasia in the United States, including the changes in the understanding of pathophysiology; methods of diagnosis; destructive, anti-vascular endothelial growth factor (anti-VEGF), and supportive treatments; and differences in retinopathy of prematurity manifestations worldwide. The overall goal is to clarify retinopathy of prematurity as currently understood and to formulate questions to optimize future care. STUDY DESIGN Literature review and synthesis. METHODS Critical review and consideration of the literature, including historical articles and those regarding pathophysiologic risk factors, retinopathy of prematurity worldwide, and basic and clinical science, particularly regarding anti-VEGF mechanisms and agents tested in clinical trials. RESULTS Retinopathy of prematurity has evolved from affecting infants approximately 2 months premature to affecting extremely premature infants. Worldwide, retinopathy of prematurity differs and, in emerging countries, has features similar to those experienced in the United States when retinopathy of prematurity first manifested. Treatments have evolved from destruction of the peripheral avascular retina, which inhibits angiogenic stimuli, to anti-VEGF agents, which inhibit pathologic angiogenesis but also extend normal intraretinal angiogenesis by ordering the development of intraretinal vessels. Clinical trial evidence is accruing, with the goal of developing less destructive treatments that optimize vision and protect the retina and infant. CONCLUSIONS The goals for retinopathy of prematurity are to optimize prenatal and perinatal care, improve diagnostic acumen worldwide, and refine treatment strategies, including with anti-VEGF agents, to inhibit intravitreal angiogenesis and facilitate vascularization of the previously avascular retina, while supporting neural and vascular development of the premature infant and retina.
|
269
|
Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med 2020; 3:118. [PMID: 32984550 PMCID: PMC7486909 DOI: 10.1038/s41746-020-00324-0] [Citation(s) in RCA: 466] [Impact Index Per Article: 93.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2020] [Accepted: 08/13/2020] [Indexed: 02/07/2023] Open
Abstract
At the beginning of the artificial intelligence (AI)/machine learning (ML) era, expectations are high, and experts foresee that AI/ML shows potential for diagnosing, managing, and treating a wide variety of medical conditions. However, the obstacles to implementation of AI/ML in daily clinical practice are numerous, especially regarding the regulation of these technologies. Therefore, we provide an insight into the currently available AI/ML-based medical devices and algorithms that have been approved by the US Food and Drug Administration (FDA). We aimed to raise awareness of the importance of regulatory bodies clearly stating whether a medical device is AI/ML based or not. Cross-checking and validating all approvals, we identified 64 AI/ML-based, FDA-approved medical devices and algorithms. Of those, only 29 (45%) mentioned any AI/ML-related expressions in the official FDA announcement. The majority (85.9%) were approved by the FDA with a 510(k) clearance, while 8 (12.5%) received de novo pathway clearance and one (1.6%) premarket approval (PMA) clearance. Most of these technologies, notably 30 (46.9%), 16 (25.0%), and 10 (15.6%), were developed for the fields of Radiology, Cardiology, and Internal Medicine/General Practice, respectively. We have launched the first comprehensive and open access database of strictly AI/ML-based medical technologies that have been approved by the FDA. The database will be constantly updated.
Affiliation(s)
- Stan Benjamens
- Department of Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Bertalan Meskó
- The Medical Futurist Institute, Budapest, Hungary
- Department of Behavioural Sciences, Semmelweis University, Budapest, Hungary
|
270
|
Rim TH, Lee AY, Ting DS, Teo K, Betzler BK, Teo ZL, Yoo TK, Lee G, Kim Y, Lin AC, Kim SE, Tham YC, Kim SS, Cheng CY, Wong TY, Cheung CMG. Detection of features associated with neovascular age-related macular degeneration in ethnically distinct data sets by an optical coherence tomography-trained deep learning algorithm. Br J Ophthalmol 2020; 105:1133-1139. [PMID: 32907811 DOI: 10.1136/bjophthalmol-2020-316984] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 07/04/2020] [Accepted: 07/28/2020] [Indexed: 12/31/2022]
Abstract
BACKGROUND The ability of deep learning (DL) algorithms to identify eyes with neovascular age-related macular degeneration (nAMD) from optical coherence tomography (OCT) scans has been previously established. We here evaluate the ability of a DL model, showing excellent performance on a Korean data set, to generalise to an American data set despite ethnic differences. In addition, expert graders were surveyed to verify whether the DL model was appropriately identifying lesions indicative of nAMD on the OCT scans. METHODS Model development data set: 12 247 OCT scans from South Korea; external validation data set: 91 509 OCT scans from Washington, USA. In both data sets, normal eyes or eyes with nAMD were included. After internal testing, the algorithm was sent to the University of Washington, USA, for external validation. The area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC) were calculated. For model explanation, saliency maps were generated using Guided Grad-CAM. RESULTS On external validation, AUC and AUPRC remained high at 0.952 (95% CI 0.942 to 0.962) and 0.891 (95% CI 0.875 to 0.908) at the individual level. Saliency maps showed that in normal OCT scans, the fovea was the main area of interest; in nAMD OCT scans, the appropriate pathological features were areas of model interest. A survey of 10 retina specialists confirmed this. CONCLUSION Our DL algorithm exhibited high performance for nAMD identification in a Korean population, and generalised well to an ethnically distinct, American population. The model correctly focused on the differences within the macular area to extract features associated with nAMD.
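The headline metric here, AUC, has a direct empirical reading: the probability that a randomly chosen diseased eye receives a higher score than a randomly chosen normal eye. A minimal pure-Python sketch of that definition (the labels and scores below are illustrative, not from the study):

```python
def auc(labels, scores):
    """Empirical AUC via its rank interpretation (Mann-Whitney statistic):
    the fraction of positive/negative pairs in which the positive scores
    higher, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking (every positive above every negative) gives AUC = 1.0;
# each mis-ranked pair lowers it.
perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.4, 0.1])
```

In practice a library routine would be used, but the pairwise definition is what numbers like 0.952 summarize.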
Affiliation(s)
- Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Daniel S Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Kelvin Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Zhen Ling Teo
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, Korea (the Republic of)
- Andrew C Lin
- Department of Ophthalmology, NYU Langone Health, New York University School of Medicine, New York, New York, USA
- Seong Eun Kim
- Department of Ophthalmology, CHA Bundang Medical Center, CHA University, Seongnam, South Korea
- Yih Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Sung Soo Kim
- Department of Ophthalmology, Yonsei University College of Medicine, Severance Hospital, Institute of Vision Research, Seoul, South Korea
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Chui Ming Gemmy Cheung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
|
271
|
Deep Learning Models for Automated Diagnosis of Retinopathy of Prematurity in Preterm Infants. ELECTRONICS 2020. [DOI: 10.3390/electronics9091444] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Retinopathy of prematurity (ROP) is a disease that can cause blindness in premature infants. It is characterized by immature vascular growth of the retinal blood vessels. Early detection and treatment of ROP can significantly improve the visual acuity of high-risk patients, so early diagnosis is crucial in preventing visual impairment. However, several patients refrain from treatment owing to the lack of medical expertise in diagnosing the disease; this is especially problematic considering that the number of ROP cases is on the rise. To this end, we applied transfer learning to five deep neural network architectures for identifying ROP in preterm infants. Our results showed that the VGG19 model outperformed the other models in determining whether a preterm infant has ROP, with 96% accuracy, 96.6% sensitivity, and 95.2% specificity. We also classified the severity of the disease; the VGG19 model showed 98.82% accuracy in predicting severity, with a sensitivity and specificity of 100% and 98.41%, respectively. We performed 5-fold cross-validation on the datasets to validate the reliability of the VGG19 model and found that it exhibited high accuracy in predicting ROP. These findings could help promote the development of computer-aided diagnosis.
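The 5-fold cross-validation used to check the model's reliability follows a standard recipe: shuffle the data, partition it into five disjoint folds, and train five times with each fold held out once. A framework-free sketch of the index bookkeeping (the split logic only, not the VGG19 training itself):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Yield (train, test) index lists: k shuffled, disjoint test folds
    that together cover all n samples exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)           # reproducible shuffle
    folds = [idx[i::k] for i in range(k)]      # round-robin partition
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Metrics computed on each held-out fold are then averaged, which is how a single cross-validated accuracy figure is typically reported.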
|
272
|
|
273
|
Chew EY. Age-related Macular Degeneration: Nutrition, Genes and Deep Learning-The LXXVI Edward Jackson Memorial Lecture. Am J Ophthalmol 2020; 217:335-347. [PMID: 32574780 PMCID: PMC8324084 DOI: 10.1016/j.ajo.2020.05.042] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2020] [Revised: 05/26/2020] [Accepted: 05/28/2020] [Indexed: 12/23/2022]
Abstract
PURPOSE To evaluate the importance of nutritional supplements, dietary pattern, and genetic associations in age-related macular degeneration (AMD), and to discuss the technique of artificial intelligence/deep learning to potentially enhance research in detecting and classifying AMD. DESIGN Retrospective literature review. METHODS Review of both prospective and retrospective (post hoc) analyses of nutrition, genetic variants, and deep learning in AMD in the Age-Related Eye Disease Study (AREDS) and AREDS2. RESULTS In addition to demonstrating the beneficial effects of the AREDS and AREDS2 supplements of antioxidant vitamins and zinc (plus copper) for reducing the risk of progression to late AMD, these 2 studies also confirmed the importance of high adherence to the Mediterranean diet in reducing progression of AMD in persons with varying severity of disease. In persons with the protective genetic alleles of complement factor H (CFH), the Mediterranean diet had a further beneficial effect. However, despite the genetic association with AMD progression, prediction models found that genetic information added little to the high predictive value of baseline AMD severity for disease progression. The technique of deep learning, an arm of artificial intelligence, using color fundus photographs from AREDS/AREDS2 was superior in some cases and noninferior in others to clinical human grading (retinal specialists) and to the gold standard of certified reading center graders. CONCLUSIONS Counseling individuals affected with AMD regarding the use of the AREDS2 supplements and the beneficial association of the Mediterranean diet is an important public health message. Although genetic testing is important in research, it is not recommended for prediction of disease or to guide therapies and/or dietary interventions in AMD. Techniques in deep learning hold great promise, but further prospective research is required to validate their use to improve accuracy and sensitivity/specificity in clinical research and the medical management of patients with AMD.
Affiliation(s)
- Emily Y Chew
- Clinical Trials Branch, Division of Epidemiology and Clinical Applications, National Eye Institute/National Institutes of Health, Bethesda, Maryland, USA.
|
274
|
Azad R, Gilbert C, Gangwe AB, Zhao P, Wu WC, Sarbajna P, Vinekar A. Retinopathy of Prematurity: How to Prevent the Third Epidemics in Developing Countries. Asia Pac J Ophthalmol (Phila) 2020; 9:440-448. [PMID: 32925293 DOI: 10.1097/apo.0000000000000313] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Retinopathy of prematurity (ROP) is a vasoproliferative disease affecting preterm infants and a leading cause of avoidable childhood blindness worldwide. The world is currently experiencing the third epidemic of ROP, in which the majority of cases are from middle-income countries. Over 40% of the world's premature infants were born in India, China, Bangladesh, Pakistan, and Indonesia. Together with other neighboring nations, this region has unique challenges in ROP management, including heavier and more mature infants developing severe ROP. Current strategies include the adoption of national screening guidelines, telemedicine, integration of vision rehabilitation, and software innovations in the form of artificial intelligence. This review provides an overview of some of these aspects.
Affiliation(s)
- Rajvardhan Azad
- Regional Institute of Ophthalmology, Indira Gandhi Institute of Medical Sciences, Patna, Bihar, India
- Claire Gilbert
- London School of Hygiene & Tropical Medicine, London, United Kingdom
- Peiquan Zhao
- Department of Ophthalmology, Xinhua Hospital, affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Wei-Chi Wu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
|
275
|
Antaki F, Bachour K, Kim TN, Qian CX. The Role of Telemedicine to Alleviate an Increasingly Burdened Healthcare System: Retinopathy of Prematurity. Ophthalmol Ther 2020; 9:449-464. [PMID: 32562242 PMCID: PMC7406614 DOI: 10.1007/s40123-020-00275-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Indexed: 12/23/2022] Open
Abstract
Telemedicine-based remote digital fundus imaging (RDFI-TM) offers a promising platform for the screening of retinopathy of prematurity. RDFI-TM addresses some of the challenges faced by ophthalmologists in examining this vulnerable population in both low- and high-income countries. In this review, we studied the evidence on the use of RDFI-TM and analyzed the practical framework for RDFI-TM systems. We assessed the novel technological advances that can be deployed within RDFI-TM systems including noncontact imaging systems, smartphone-based imaging tools, and deep learning algorithms.
Affiliation(s)
- Fares Antaki
- Department of Ophthalmology, Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, Université de Montréal, Montréal, QC, Canada
- Kenan Bachour
- Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada
- Tyson N Kim
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, USA
- Cynthia X Qian
- Department of Ophthalmology, Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, Université de Montréal, Montréal, QC, Canada.
|
276
|
Liu TYA, Farsiu S, Ting DS. Generative adversarial networks to predict treatment response for neovascular age-related macular degeneration: interesting, but is it useful? Br J Ophthalmol 2020; 104:1629-1630. [PMID: 32862129 DOI: 10.1136/bjophthalmol-2020-316300] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 07/13/2020] [Accepted: 07/16/2020] [Indexed: 12/23/2022]
Affiliation(s)
- T Y Alvin Liu
- Johns Hopkins Wilmer Eye Institute, Baltimore, Maryland, USA
- Sina Farsiu
- Ophthalmology, Duke University, Durham, North Carolina, USA
- Daniel S Ting
- Vitreo-Retinal Department, Singapore National Eye Center, Singapore, Singapore
|
277
|
Cho WK, Choi SH. Comparison of Convolutional Neural Network Models for Determination of Vocal Fold Normality in Laryngoscopic Images. J Voice 2020; 36:590-598. [PMID: 32873430 DOI: 10.1016/j.jvoice.2020.08.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Revised: 08/04/2020] [Accepted: 08/04/2020] [Indexed: 01/02/2023]
Abstract
OBJECTIVES Deep learning using convolutional neural networks (CNNs) is widely used in medical imaging research. This study was performed to investigate whether vocal fold normality in laryngoscopic images can be determined by CNN-based deep learning, to compare the accuracy of CNN models, and to explore the feasibility of applying deep learning to laryngoscopy. METHODS Laryngoscopy videos were screen-captured and each image was cropped to include the abducted vocal fold region. A total of 2216 images (899 normal, 1317 abnormal) were allocated to training, validation, and test sets. Augmentation of the training set was used to train a constructed six-layer CNN model (CNN6) as well as VGG16, Inception V3, and Xception models. Trained models were applied to the test set; for each model, receiver operating characteristic curves and cutoff values were obtained. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated. The best model was employed on video streams and localization of features was attempted using Grad-CAM. RESULTS All of the trained models showed a high area under the receiver operating characteristic curve, and the most discriminative cutoff levels of probability of normality were determined to be 35.6%, 61.8%, 13.5%, and 39.7% for the CNN6, VGG16, Inception V3, and Xception models, respectively. The accuracy of the CNN models in classifying normal and abnormal vocal folds in the test set was 82.3%, 99.7%, 99.1%, and 83.8%, respectively. CONCLUSION All four models showed acceptable diagnostic accuracy. The performance of VGG16 and Inception V3 was better than that of the simple CNN6 model and the recently published Xception model. Real-time classification with a combination of the VGG16 model, OpenCV, and Grad-CAM on a video stream showed the potential clinical applications of the deep learning model in laryngoscopy.
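A "most discriminative cutoff" for a probability-of-normality score is typically read off the ROC curve. One common criterion, assumed here purely for illustration (the paper may use a different rule), is Youden's J = sensitivity + specificity - 1, maximized over candidate thresholds:

```python
def confusion(labels, scores, cutoff):
    """(tp, fp, tn, fn) when scores >= cutoff are called positive."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= cutoff)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= cutoff)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < cutoff)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < cutoff)
    return tp, fp, tn, fn

def best_cutoff(labels, scores):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed score as a candidate cutoff."""
    best, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp, fp, tn, fn = confusion(labels, scores, c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best, best_j = c, j
    return best
```

Sensitivity, specificity, PPV, and NPV then follow directly from the confusion counts at the chosen cutoff.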
Affiliation(s)
- Won Ki Cho
- Departments of Otorhinolaryngology-Head and Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Seung-Ho Choi
- Departments of Otorhinolaryngology-Head and Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea.
|
278
|
Huang YP, Basanta H, Kang EYC, Chen KJ, Hwang YS, Lai CC, Campbell JP, Chiang MF, Chan RVP, Kusaka S, Fukushima Y, Wu WC. Automated detection of early-stage ROP using a deep convolutional neural network. Br J Ophthalmol 2020; 105:1099-1103. [PMID: 32830123 DOI: 10.1136/bjophthalmol-2020-316526] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 06/21/2020] [Accepted: 07/28/2020] [Indexed: 12/14/2022]
Abstract
BACKGROUND/AIM To automatically detect and classify the early stages of retinopathy of prematurity (ROP) using a deep convolutional neural network (CNN). METHODS This retrospective cross-sectional study was conducted in a referral medical centre in Taiwan. Only premature infants with no ROP, stage 1 ROP or stage 2 ROP were enrolled. Overall, 11 372 retinal fundus images were compiled and split into 10 235 images (90%) for training, 1137 (10%) for validation and 244 for testing. A deep CNN was implemented to classify images according to the ROP stage. Data were collected from December 17, 2013 to May 24, 2019 and analysed from December 2018 to January 2020. The metrics of sensitivity, specificity and area under the receiver operating characteristic curve were adopted to evaluate the performance of the algorithm relative to the reference standard diagnosis. RESULTS The model was trained using fivefold cross-validation, yielding an average accuracy of 99.93%±0.03 during training and 92.23%±1.39 during testing. The sensitivity and specificity scores of the model were 96.14%±0.87 and 95.95%±0.48, 91.82%±2.03 and 94.50%±0.71, and 89.81%±1.82 and 98.99%±0.40 when predicting no ROP versus ROP, stage 1 ROP versus no ROP and stage 2 ROP, and stage 2 ROP versus no ROP and stage 1 ROP, respectively. CONCLUSIONS The proposed system can accurately differentiate among ROP early stages and has the potential to help ophthalmologists classify ROP at an early stage.
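The stage-wise sensitivity/specificity pairs quoted above are one-vs-rest statistics; as a sketch, they fall out of a 3-class confusion matrix as follows (the matrix below is hypothetical, not the study's results):

```python
def one_vs_rest(cm, cls):
    """Sensitivity and specificity for class `cls` against all other
    classes, given a square confusion matrix cm[true_class][predicted]."""
    n = len(cm)
    tp = cm[cls][cls]
    fn = sum(cm[cls][p] for p in range(n) if p != cls)
    fp = sum(cm[t][cls] for t in range(n) if t != cls)
    tn = sum(cm[t][p] for t in range(n) for p in range(n)
             if t != cls and p != cls)
    return tp / (tp + fn), tn / (tn + fp)

# Rows/columns: 0 = no ROP, 1 = stage 1, 2 = stage 2 (hypothetical counts).
cm = [[90, 8, 2],
      [5, 40, 5],
      [1, 4, 45]]
stage2_sens, stage2_spec = one_vs_rest(cm, 2)
```

Repeating this for each class gives the three sensitivity/specificity pairs reported in the abstract.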
Affiliation(s)
- Yo-Ping Huang
- Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan; Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung, Taiwan
- Haobijam Basanta
- Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan
- Eugene Yu-Chuan Kang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Kuan-Jen Chen
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Yih-Shiou Hwang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chi-Chun Lai
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- John P Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon, USA
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon, USA
- Robison Vernon Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, Chicago, Illinois, USA
- Shunji Kusaka
- Department of Ophthalmology, Kindai University, Osaka, Japan
- Yoko Fukushima
- Department of Ophthalmology, Osaka University, Osaka, Japan
- Wei-Chi Wu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
|
279
|
Hallak JA, Azar DT. The AI Revolution and How to Prepare for It. Transl Vis Sci Technol 2020; 9:16. [PMID: 32818078 PMCID: PMC7395668 DOI: 10.1167/tvst.9.2.16] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Accepted: 01/09/2020] [Indexed: 01/03/2023] Open
Affiliation(s)
- Joelle A Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Dimitri T Azar
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA; Alphabet Verily Life Sciences, San Francisco, CA, USA.
|
280
|
Abstract
PURPOSE OF REVIEW The use of artificial intelligence (AI) in ophthalmology has increased dramatically. However, interpretation of these studies can be a daunting prospect for the ophthalmologist without a background in computer or data science. This review aims to share some practical considerations for the interpretation of AI studies in ophthalmology. RECENT FINDINGS It can be easy to get lost in the technical details of studies involving AI. Nevertheless, it is important for clinicians to remember that the fundamental questions in interpreting these studies remain unchanged: What does this study show, and how does this affect my patients? Guided by familiar principles such as study purpose, impact, validity, and generalizability, clinicians will find these studies more accessible. Although it may not be necessary for nondomain experts to understand the exact AI technical details, we explain some broad concepts in relation to AI technical architecture and dataset management. SUMMARY The expansion of AI into healthcare and ophthalmology is here to stay. AI systems have made the transition from bench to bedside and are already being applied to patient care. In this context, 'AI education' is crucial for ophthalmologists to be confident in the interpretation and translation of new developments in this field to their own clinical practice.
|
281
|
Moraru AD, Costin D, Moraru RL, Branisteanu DC. Artificial intelligence and deep learning in ophthalmology - present and future (Review). Exp Ther Med 2020; 20:3469-3473. [PMID: 32905155 PMCID: PMC7465350 DOI: 10.3892/etm.2020.9118] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 06/30/2020] [Indexed: 02/06/2023] Open
Abstract
Since its introduction in 1959, artificial intelligence technology has evolved rapidly and has benefited research, industry and medicine. Deep learning, as a process of artificial intelligence (AI), is used in ophthalmology for data analysis, segmentation, automated diagnosis and possible outcome prediction. The association of deep learning and optical coherence tomography (OCT) technologies has proven reliable for detecting retinal diseases and improving diagnostic performance for diseases of the eye's posterior segment. This review explored the possibility of implementing and using AI in establishing the diagnosis of retinal disorders. The benefits and limitations of AI in the field of retinal disease medical management were investigated by analyzing the most recent literature data. Furthermore, future trends of AI involvement in ophthalmology were analyzed, as AI will be part of decision-making regarding scientific investigation, diagnosis and therapeutic management.
Affiliation(s)
- Andreea Dana Moraru
- Department of Ophthalmology, 'Grigore T. Popa' University of Medicine and Pharmacy, 700115 Iaşi, Romania; Department of Ophthalmology, 'N. Oblu' Clinical Hospital, 700309 Iași, Romania
| | - Danut Costin
- Department of Ophthalmology, 'Grigore T. Popa' University of Medicine and Pharmacy, 700115 Iaşi, Romania; Department of Ophthalmology, 'N. Oblu' Clinical Hospital, 700309 Iași, Romania
| | - Radu Lucian Moraru
- Department of Otorhinolaryngology, Transmed Expert, 700011 Iaşi, Romania; 'Retina Center' Eye Clinic, 700126 Iaşi, Romania
| | - Daniel Constantin Branisteanu
- Department of Ophthalmology, 'Grigore T. Popa' University of Medicine and Pharmacy, 700115 Iaşi, Romania; 'Retina Center' Eye Clinic, 700126 Iași, Romania
| |
|
282
|
Rim TH, Soh ZD, Tham YC, Yang HHS, Lee G, Kim Y, Nusinovici S, Ting DSW, Wong TY, Cheng CY. Deep Learning for Automated Sorting of Retinal Photographs. Ophthalmol Retina 2020; 4:793-800. [PMID: 32362553 DOI: 10.1016/j.oret.2020.03.007] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Revised: 02/04/2020] [Accepted: 03/06/2020] [Indexed: 06/11/2023]
Abstract
PURPOSE Though the domain of big data and artificial intelligence in health care continues to evolve, there is a lack of systemic methods to improve data quality and streamline the preparation process. To address this, we aimed to develop an automated sorting system (RetiSort) that accurately labels the type and laterality of retinal photographs. DESIGN Cross-sectional study. PARTICIPANTS RetiSort was developed with retinal photographs from the Singapore Epidemiology of Eye Diseases (SEED) study. METHODS The development of RetiSort was composed of 3 steps: 2 deep-learning (DL) algorithms and 1 rule-based classifier. For step 1, a DL algorithm was developed to locate the optic disc, the "landmark feature." For step 2, based on the location of the optic disc derived from step 1, a rule-based classifier was developed to sort retinal photographs into 3 types: macular-centered, optic disc-centered, or related to other fields. Step 2 concurrently distinguished laterality (i.e., the left or right eye) of macular-centered photographs. For step 3, an additional DL algorithm was developed to differentiate the laterality of disc-centered photographs. Via the 3 steps, RetiSort sorted and labeled retinal images into (1) right macular-centered, (2) left macular-centered, (3) right optic disc-centered, (4) left optic disc-centered, and (5) images relating to other fields. Subsequently, the accuracy of RetiSort was evaluated on 5000 randomly selected retinal images from SEED as well as on 3 publicly available image databases (DIARETDB0, HEI-MED, and Drishti-GS). The main outcome measure was the accuracy for sorting of retinal photographs. RESULTS RetiSort mislabeled 48 out of 5000 retinal images from SEED, representing an overall accuracy of 99.0% (95% confidence interval [CI], 98.7-99.3). 
In external tests, RetiSort mislabeled 1, 0, and 2 images, respectively, from DIARETDB0, HEI-MED, and Drishti-GS, representing an accuracy of 99.2% (95% CI, 95.8-99.9), 100%, and 98.0% (95% CI, 93.1-99.8), respectively. Saliency maps consistently showed that the DL algorithm in step 3 required pixels in the central left lateral border and optic disc of optic disc-centered retinal photographs to differentiate the laterality. CONCLUSIONS RetiSort is a highly accurate automated sorting system. It can aid in data preparation and has practical applications in DL research that uses retinal photographs.
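The reported SEED accuracy (48 errors in 5000 images, 99.0%, 95% CI 98.7-99.3) can be reproduced with a standard binomial confidence interval. The sketch below uses the Wilson score interval; the abstract does not say which interval the authors computed, so this choice is an assumption, but it lands on the same range.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

# 48 of 5000 SEED images were mislabeled, i.e. 4952 were correct
lo, hi = wilson_ci(4952, 5000)
print(f"accuracy = {4952/5000:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The same helper applies to the smaller external test sets, where the wider intervals (e.g. 95.8-99.9 for DIARETDB0) reflect their limited size.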
Affiliation(s)
- Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Zhi Da Soh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | | | | | | | - Simon Nusinovici
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
| |
|
283
|
Graziani M, Andrearczyk V, Marchand-Maillet S, Müller H. Concept attribution: Explaining CNN decisions to physicians. Comput Biol Med 2020; 123:103865. [DOI: 10.1016/j.compbiomed.2020.103865] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 06/12/2020] [Accepted: 06/13/2020] [Indexed: 01/06/2023]
|
284
|
Kapoor S, Eldib A, Hiasat J, Scanga H, Tomasello J, Alabek M, Ament K, Arner D, Benson A, Berret K, Blaha B, Brinza M, Caterino R, Chauhan B, Churchfield W, Fulwylie C, Gruszewski J, Hrinak D, Johnston L, Meyer C, Nanda K, Newton T, Pomycala B, Runkel L, Sanchez K, Skellett S, Steigerwald J, Mitchell E, Pihlblad M, Luchansky C, Keim E, Yu J, Quinn P, Mittal A, Pitetti R, Patil-Chhablani P, Liasis A, Nischal KK. Developing a pediatric ophthalmology telemedicine program in the COVID-19 crisis. J AAPOS 2020; 24:204-208.e2. [PMID: 32890736 PMCID: PMC7467070 DOI: 10.1016/j.jaapos.2020.05.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/18/2020] [Revised: 05/29/2020] [Accepted: 05/30/2020] [Indexed: 01/10/2023]
Abstract
PURPOSE To describe our methodology for implementing synchronous telemedicine during the 2019 novel coronavirus (COVID-19) pandemic. METHODS A retrospective review of outpatient records at a single children's hospital from March 21 to April 10, 2020, was carried out to determine the outcome of already-scheduled face-to-face outpatient appointments. In the week leading up to March 21, all appointments in the study period were categorized as follows: (1) requiring an in-person visit, (2) face-to-face visit that could be postponed, and (3) consultation required but could be virtual. Teams of administrators, schedulers, and ophthalmic technicians used defined scripts and standardized emails to communicate the results of categorization to patients. Flowcharts were devised to schedule and implement telemedicine visits. Informational videos were made accessible on social media to prepare patients for the telemedicine experience. Simultaneously, our children's hospital launched a pediatric on-demand e-consult service, the data analytics of which could be used to determine how many visits were eye related. RESULTS A total of 237 virtual ophthalmology consult visits were offered during the study period: 212 were scheduled, and 206 were completed, of which 43 were with new patients and 163 with returning patients. Following the initial virtual visit, 21 patients required another virtual visit after an average of 4 weeks; 170 patients required in-person follow-up an average of 4.6 months after the initial virtual visit. None needed review within 72 hours. The pediatric on-demand service completed 290 visits, of which 25 had eye complaints. CONCLUSIONS With proper materials, technology, and staffing, a telemedicine strategy based on three patient categories can be rapidly implemented to provide continued patient care during pandemic conditions.
In our study cohort, the scheduled clinic e-visits had a low no-show rate (3%), and 8% of the on-demand virtual access for pediatric care was eye related.
Affiliation(s)
- Saloni Kapoor
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Amgad Eldib
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Jamila Hiasat
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Hannah Scanga
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | | | - Michelle Alabek
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Kellie Ament
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Debbi Arner
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Ashley Benson
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Kristine Berret
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Bianca Blaha
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Melissa Brinza
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Roxanne Caterino
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Baresh Chauhan
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | | | | | - Jessi Gruszewski
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Denise Hrinak
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Lori Johnston
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Cheryl Meyer
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Kaajal Nanda
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Teresa Newton
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Becci Pomycala
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Lauren Runkel
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | | | - Sarah Skellett
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Jess Steigerwald
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Ellen Mitchell
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Matthew Pihlblad
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Craig Luchansky
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Erin Keim
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Jenny Yu
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Patrick Quinn
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Anshul Mittal
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | - Raymond Pitetti
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
| | | | | | - Ken K Nischal
- UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania.
| |
|
285
|
Barrero-Castillero A, Corwin BK, VanderVeen DK, Wang JC. Workforce Shortage for Retinopathy of Prematurity Care and Emerging Role of Telehealth and Artificial Intelligence. Pediatr Clin North Am 2020; 67:725-733. [PMID: 32650869 DOI: 10.1016/j.pcl.2020.04.012] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Retinopathy of prematurity (ROP) is the leading cause of childhood blindness in very-low-birthweight and very preterm infants in the United States. With improved survival of smaller babies, more infants are at risk for ROP, yet there is an increasing shortage of providers to screen and treat ROP. Through a literature review of new and emerging technologies, screening criteria, and analysis of a national survey of pediatric ophthalmologists and retinal specialists, the authors found the shortage of ophthalmology workforce for ROP a serious and growing concern. When used appropriately, emerging technologies have the potential to mitigate gaps in the ROP workforce.
Affiliation(s)
- Alejandra Barrero-Castillero
- Division of Neonatology, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, Rose Building Room 308, Boston, MA 02215, USA; Division of Newborn Medicine, Boston Children's Hospital, Boston, MA, USA.
| | - Brian K Corwin
- Department of Radiology, Cleveland Clinic Foundation, Imaging Institute, 9500 Euclid Avenue - L10, Cleveland, OH 44195, USA
| | - Deborah K VanderVeen
- Department of Ophthalmology, Boston Children's Hospital, 300 Longwood Avenue, Fegan 4, Boston, MA 02115, USA
| | - Jason C Wang
- Center for Policy, Outcomes, and Prevention, Stanford University School of Medicine, 117 Encina Commons, Stanford, CA 94305, USA
| |
|
286
|
Tong Y, Lu W, Deng QQ, Chen C, Shen Y. Automated identification of retinopathy of prematurity by image-based deep learning. EYE AND VISION (LONDON, ENGLAND) 2020; 7:40. [PMID: 32766357 PMCID: PMC7395360 DOI: 10.1186/s40662-020-00206-2] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 07/02/2020] [Indexed: 12/13/2022]
Abstract
BACKGROUND Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide but is a treatable retinal disease given appropriate and timely diagnosis. This study was performed to develop a robust intelligent system based on deep learning to automatically classify the severity of ROP from fundus images and detect the stage of ROP and presence of plus disease to enable automated diagnosis and further treatment. METHODS A total of 36,231 fundus images were labeled by 13 licensed retinal experts. A 101-layer convolutional neural network (ResNet) and a faster region-based convolutional neural network (Faster-RCNN) were trained for image classification and identification. We applied a 10-fold cross-validation method to train and optimize our algorithms. The accuracy, sensitivity, and specificity were assessed in a four-level classification task to evaluate the performance of the intelligent system. The performance of the system was compared with results obtained by two retinal experts. Moreover, the system was designed to detect the stage of ROP and presence of plus disease as well as to highlight lesion regions based on an object detection network using Faster-RCNN. RESULTS The system achieved an accuracy of 0.903 for the ROP severity classification. Specifically, the accuracies in discriminating normal, mild, semi-urgent, and urgent were 0.883, 0.900, 0.957, and 0.870, respectively; the corresponding accuracies of the two experts were 0.902 and 0.898. Furthermore, our model achieved an accuracy of 0.957 for detecting the stage of ROP and 0.896 for detecting plus disease; the accuracies in discriminating stage I to stage V were 0.876, 0.942, 0.968, 0.998 and 0.999, respectively. CONCLUSIONS Our system was able to detect ROP and differentiate fundus images across the four-level classification with high accuracy and specificity. 
The performance of the system was comparable to or better than that of human experts, demonstrating that this system could be used to support clinical decisions.
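As a reminder of how the accuracy, sensitivity, and specificity reported throughout these studies relate to raw confusion-matrix counts, here is a minimal helper; the example counts are hypothetical, not taken from the paper.

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall) and specificity from confusion-matrix
    counts: true/false positives and true/false negatives."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,       # fraction of all cases correct
        "sensitivity": tp / (tp + fn),       # fraction of positives caught
        "specificity": tn / (tn + fp),       # fraction of negatives cleared
    }

# Hypothetical counts for a plus-disease detector on a 200-image test fold
m = binary_metrics(tp=86, fp=9, fn=11, tn=94)
print(m)
```

In a multi-level task such as the four-level ROP severity grading, these per-class metrics are computed one-vs-rest for each class.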
Affiliation(s)
- Yan Tong
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Wei Lu
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Qin-qin Deng
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Yin Shen
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
- Medical Research Institute, Wuhan University, Wuhan, Hubei China
| |
|
287
|
Abstract
PURPOSE OF REVIEW In this article, we review the current state of artificial intelligence applications in retinopathy of prematurity (ROP) and provide insight on challenges as well as strategies for bringing these algorithms to the bedside. RECENT FINDINGS In the past few years, there has been a dramatic shift from machine learning approaches based on feature extraction to 'deep' convolutional neural networks for artificial intelligence applications. Several artificial intelligence for ROP approaches have demonstrated adequate proof-of-concept performance in research studies. The next steps are to determine whether these algorithms are robust to variable clinical and technical parameters in practice. Integration of artificial intelligence into ROP screening and treatment is limited by generalizability of the algorithms to maintain performance on unseen data and integration of artificial intelligence technology into new or existing clinical workflows. SUMMARY Real-world implementation of artificial intelligence for ROP diagnosis will require massive efforts targeted at developing standards for data acquisition, true external validation, and demonstration of feasibility. We must now focus on ethical, technical, clinical, regulatory, and financial considerations to bring this technology to the infant bedside to realize the promise offered by this technology to reduce preventable blindness from ROP.
|
288
|
Sorrentino FS, Jurman G, De Nadai K, Campa C, Furlanello C, Parmeggiani F. Application of Artificial Intelligence in Targeting Retinal Diseases. Curr Drug Targets 2020; 21:1208-1215. [PMID: 32640954 DOI: 10.2174/1389450121666200708120646] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2020] [Revised: 04/20/2020] [Accepted: 04/20/2020] [Indexed: 01/17/2023]
Abstract
Retinal diseases affect an increasing number of patients worldwide because of the aging population. Demand for diagnostic imaging in ophthalmology is ramping up, while the number of specialists keeps shrinking. Cutting-edge technologies embedding artificial intelligence (AI) algorithms are thus advocated to help ophthalmologists perform their clinical tasks, as well as to provide a source for the advancement of novel biomarkers. In particular, optical coherence tomography (OCT) evaluation of the retina can be augmented by algorithms based on machine learning and deep learning to detect early, qualitatively localize and quantitatively measure epi-, intra- or subretinal abnormalities or pathological features of macular or neural diseases. In this paper, we discuss the use of AI to improve the efficacy and accuracy of retinal imaging in those diseases increasingly treated by intravitreal vascular endothelial growth factor (VEGF) inhibitors (i.e., anti-VEGF drugs), including the integration and interpretation features involved in the process. We review recent advances by AI in diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity that envision a potentially key role for highly automated systems in screening, early diagnosis, grading and individualized therapy. We discuss the benefits and critical aspects of automating the evaluation of disease activity, recurrences, the timing of retreatment and therapeutically potential novel targets in ophthalmology. The impact of large-scale employment of AI to optimize clinical assistance and encourage tailored therapies for distinct patterns of retinal diseases is also discussed.
Affiliation(s)
| | - Giuseppe Jurman
- Unit of Predictive Models for Biomedicine and Environment - MPBA, Fondazione Bruno Kessler, Trento, Italy
| | - Katia De Nadai
- Department of Morphology, Surgery and Experimental Medicine, University of Ferrara, Ferrara, Italy
| | - Claudio Campa
- Department of Surgical Specialties, Sant'Anna Hospital, Azienda Ospedaliero Universitaria di Ferrara, Ferrara, Italy
| | - Cesare Furlanello
- Unit of Predictive Models for Biomedicine and Environment - MPBA, Fondazione Bruno Kessler, Trento, Italy
| | - Francesco Parmeggiani
- Department of Morphology, Surgery and Experimental Medicine, University of Ferrara, Ferrara, Italy
| |
|
289
|
Lepore D, Ji MH, Pagliara MM, Lenkowicz J, Capocchiano ND, Tagliaferri L, Boldrini L, Valentini V, Damiani A. Convolutional Neural Network Based on Fluorescein Angiography Images for Retinopathy of Prematurity Management. Transl Vis Sci Technol 2020; 9:37. [PMID: 32855841 PMCID: PMC7424905 DOI: 10.1167/tvst.9.2.37] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Accepted: 05/13/2020] [Indexed: 12/18/2022] Open
Abstract
Purpose The purpose of this study was to explore the use of fluorescein angiography (FA) images in a convolutional neural network (CNN) for the management of retinopathy of prematurity (ROP). Methods The dataset comprised a total of 835 FA images of 149 eyes (90 patients), where each eye was associated with a binary outcome (57 “untreated” eyes and 92 “treated”; 308 “untreated” images, 527 “treated”). The resolution of the images was 1600 × 1200 px in 20% of cases, whereas the remaining 80% had a resolution of 640 × 480 px. All the images were resized to 640 × 480 px before training, and no other preprocessing was applied. A CNN with four convolutional layers was trained on 90% of the images (n = 752), randomly chosen. The accuracy of the prediction was assessed on the remaining 10% of images (n = 83). Keras version 2.2.0 for R with TensorFlow backend version 1.11.0 was used for the analysis. Results The validation accuracy after 100 epochs was 0.88, whereas the training accuracy was 0.97. The receiver operating characteristic (ROC) curve presented an area under the curve (AUC) of 0.91. Conclusions Our study showed, we believe for the first time, the applicability of artificial intelligence (CNN) technology to FA-driven ROP management. Further studies are needed to explore different fields of application of this technology. Translational Relevance This algorithm is the basis for a system that could be applied both to ROP and to experimental oxygen-induced retinopathy.
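A four-convolutional-layer binary classifier of the kind the abstract describes (640 × 480 inputs, treated vs. untreated outcome) can be sketched as follows. The paper used Keras 2.2.0 for R with a TensorFlow 1.11 backend; this Python version, and all filter counts, pooling choices, and the dense head, are illustrative assumptions, since the abstract does not specify the exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(480, 640, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Four convolutional blocks, as stated in the abstract; the filter
    # counts and 2x2 max-pooling are assumptions.
    for filters in (32, 64, 128, 256):
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))  # P(eye was treated)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would proceed with `model.fit` on the 752 training images; with a dataset this small, augmentation and early stopping would likely be needed in practice to keep the train/validation gap (0.97 vs. 0.88 here) in check.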
Affiliation(s)
- Domenico Lepore
- Dipartimento di Oftalmologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | - Marco H Ji
- Byers Eye Institute, Horngren Family Vitreoretinal Center, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
| | - Monica M Pagliara
- Dipartimento di Oftalmologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | - Jacopo Lenkowicz
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | - Nikola D Capocchiano
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | - Luca Tagliaferri
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | - Luca Boldrini
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | - Vincenzo Valentini
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | - Andrea Damiani
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario "A. Gemelli" IRCCS, Rome, Italy
| | | |
|
290
|
Tom E, Keane PA, Blazes M, Pasquale LR, Chiang MF, Lee AY, Lee CS. Protecting Data Privacy in the Age of AI-Enabled Ophthalmology. Transl Vis Sci Technol 2020; 9:36. [PMID: 32855840 PMCID: PMC7424948 DOI: 10.1167/tvst.9.2.36] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 04/02/2020] [Indexed: 12/16/2022] Open
Affiliation(s)
- Elysse Tom
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
| | - Pearse A Keane
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
| | - Marian Blazes
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
| | - Louis R Pasquale
- Eye and Vision Research Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Michael F Chiang
- Departments of Ophthalmology and Medical Informatics & Clinical Epidemiology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
| | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
| | | |
|
291
|
Kim IK, Lee K, Park JH, Baek J, Lee WK. Classification of pachychoroid disease on ultrawide-field indocyanine green angiography using auto-machine learning platform. Br J Ophthalmol 2020; 105:856-861. [PMID: 32620684 DOI: 10.1136/bjophthalmol-2020-316108] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 05/21/2020] [Accepted: 06/08/2020] [Indexed: 01/08/2023]
Abstract
AIMS Automatic identification of pachychoroid may be used as an adjunctive method to confirm the condition and to help guide treatment of macular diseases. This study investigated the feasibility of classifying pachychoroid disease on ultra-widefield indocyanine green angiography (UWF ICGA) images using an automated machine-learning platform. METHODS Two models were trained on a set of 783 UWF ICGA images from patients with pachychoroid (n=376) and non-pachychoroid (n=349) diseases using AutoML Vision (Google). Pachychoroid was confirmed from quantitative and qualitative choroidal morphology on multimodal imaging by two retina specialists. Model 1 used the original images; Model 2 used left-eye images horizontally flipped to the orientation of the right eye, to increase accuracy by equalising the mirror images of the two eyes. The performances were compared with those of human experts. RESULTS In total, 284, 279 and 220 images of central serous chorioretinopathy, polypoidal choroidal vasculopathy and neovascular age-related maculopathy were included. The precision and recall were 87.84% and 87.84% for Model 1 and 89.19% and 89.19% for Model 2, which were comparable to the results of the retinal specialists (90.91% and 95.24%) and superior to those of ophthalmic residents (68.18% and 92.50%). CONCLUSIONS An automated machine-learning platform can be used to classify pachychoroid on UWF ICGA images, after careful consideration of the pachychoroid definition and of the platform's limitations, including unstable performance on medical images.
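Precision and recall, the two figures AutoML-style platforms typically report, come straight from confusion-matrix counts; they coincide exactly when the numbers of false positives and false negatives are equal, which is consistent with the identical paired percentages reported above. The counts in this sketch are illustrative, not the study's.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive and
    false-negative counts."""
    return tp / (tp + fp), tp / (tp + fn)

# With fp == fn, precision and recall are identical; 65/74 happens to give
# 87.84%, matching the Model 1 figures (hypothetical counts).
p, r = precision_recall(tp=65, fp=9, fn=9)
print(f"precision = {p:.4f}, recall = {r:.4f}")
```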
Affiliation(s)
- In Ki Kim
- Department of Ophthalmology, Bucheon St Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
| | - Kook Lee
- Department of Ophthalmology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Jae Hyun Park
- Department of Ophthalmology, Bucheon St Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
| | - Jiwon Baek
- Department of Ophthalmology, Bucheon St Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
| | - Won Ki Lee
- Nune Eye Center, Seoul, Republic of Korea
| |
|
292
|
Islam MM, Yang HC, Poly TN, Jian WS, Jack Li YC. Deep learning algorithms for detection of diabetic retinopathy in retinal fundus photographs: A systematic review and meta-analysis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 191:105320. [PMID: 32088490 DOI: 10.1016/j.cmpb.2020.105320] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 12/30/2019] [Accepted: 01/06/2020] [Indexed: 05/13/2023]
Abstract
BACKGROUND Diabetic retinopathy (DR) is one of the leading causes of blindness globally. Earlier detection and timely treatment of DR are desirable to reduce the incidence and progression of vision loss. Currently, deep learning (DL) approaches offer better performance in detecting DR from retinal fundus images. We, therefore, performed a systematic review with a meta-analysis of relevant studies to quantify the performance of DL algorithms for detecting DR. METHODS A systematic literature search of EMBASE, PubMed, Google Scholar, and Scopus was performed for the period between January 1, 2000, and March 31, 2019. The search strategy was based on the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) reporting guidelines, and a DL-based study design was mandatory for article inclusion. Two independent authors screened abstracts and titles against inclusion and exclusion criteria. Data were extracted by two authors independently using a standard form, and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was used for risk-of-bias and applicability assessment. RESULTS Twenty-three studies were included in the systematic review; 20 studies met the inclusion criteria for the meta-analysis. The pooled area under the receiver operating characteristic curve (AUROC) for DR was 0.97 (95%CI: 0.95-0.98), sensitivity was 0.83 (95%CI: 0.83-0.83), and specificity was 0.92 (95%CI: 0.92-0.92). The positive and negative likelihood ratios were 14.11 (95%CI: 9.91-20.07) and 0.10 (95%CI: 0.07-0.16), respectively. Moreover, the diagnostic odds ratio for DL models was 136.83 (95%CI: 79.03-236.93). All the studies used a DR-grading scale and a human grader (e.g., trained caregivers, ophthalmologists) as the reference standard. CONCLUSION The findings of our study showed that DL algorithms had high sensitivity and specificity for detecting referable DR from retinal fundus photographs. 
Applying a DL-based automated tool of assessing DR from color fundus images could provide an alternative solution to reduce misdiagnosis and improve workflow. A DL-based automated tool offers substantial benefits to reduce screening costs, accessibility to healthcare and ameliorate earlier treatments.
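The pooled accuracy measures in this abstract are linked by standard diagnostic-test algebra; a minimal sketch is below. Note that the review's own pooled likelihood ratios and diagnostic odds ratio come from a bivariate meta-analysis model, so the naive point-estimate arithmetic here does not exactly reproduce the reported values.

```python
def diagnostic_summary(sensitivity: float, specificity: float):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    derived from a test's sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)   # P(test+|disease) / P(test+|no disease)
    lr_neg = (1.0 - sensitivity) / specificity   # P(test-|disease) / P(test-|no disease)
    dor = lr_pos / lr_neg                        # odds of a positive test in diseased vs. healthy
    return lr_pos, lr_neg, dor

# Pooled point estimates reported above: sensitivity 0.83, specificity 0.92
lr_pos, lr_neg, dor = diagnostic_summary(0.83, 0.92)
```

With these inputs the naive arithmetic gives a likelihood ratio of roughly 10.4 and a DOR of roughly 56, illustrating why the pooled estimates (14.11 and 136.83) must be computed study by study rather than from the pooled sensitivity and specificity alone.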
Affiliation(s)
- Md Mohaimenul Islam
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan; International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan; Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Hsuan-Chia Yang
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan; International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan; Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan; International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan; Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Wen-Shan Jian
- School of Health Care Administration, Taipei Medical University, Taipei, Taiwan.
- Yu-Chuan Jack Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan; International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan; Department of Dermatology, Wan Fang Hospital, Taipei, Taiwan; TMU Research Center of Cancer Translational Medicine, Taipei Medical University, Taipei, Taiwan; Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan.
|
293
|
Talcott KE, Kim JE, Modi Y, Moshfeghi DM, Singh RP. The American Society of Retina Specialists Artificial Intelligence Task Force Report. JOURNAL OF VITREORETINAL DISEASES 2020; 4:312-319. [PMID: 37009187 PMCID: PMC9976105 DOI: 10.1177/2474126420914168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Artificial intelligence (AI) is a growing area that relies on the heavy use of diagnostic imaging within the field of retina to offer exciting advancements in diagnostic capability to better understand and manage retinal conditions such as diabetic retinopathy, diabetic macular edema, age-related macular degeneration, and retinopathy of prematurity. However, there are discrepancies between the findings of these AI programs and their referral recommendations compared with evidence-based referral patterns, such as Preferred Practice Patterns by the American Academy of Ophthalmology. The overall focus of this task force report is to first describe the work in AI being completed in the management of retinal conditions. This report also discusses the guidelines of the Preferred Practice Pattern and how they can be used in the emerging field of AI.
Affiliation(s)
- Katherine E. Talcott
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
- Judy E. Kim
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Yasha Modi
- Department of Ophthalmology, New York University, New York, NY, USA
- Darius M. Moshfeghi
- Horngren Family Vitreoretinal Center, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- Rishi P. Singh
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
|
294
|
Wintergerst MWM, Jansen LG, Holz FG, Finger RP. Smartphone-Based Fundus Imaging-Where Are We Now? Asia Pac J Ophthalmol (Phila) 2020; 9:308-314. [PMID: 32694345 DOI: 10.1097/apo.0000000000000303] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Abstract
With the advent of smartphone-based fundus imaging (SBFI), a low-cost alternative to conventional digital fundus photography has become available. SBFI allows for a mobile fundus examination, is applicable both with and without pupil dilation, comes with built-in connectivity and post-processing capabilities, and is relatively easy to master. Furthermore, it is delegable to paramedical staff/technicians and, hence, suitable for telemedicine. Against this background, a variety of SBFI applications have become available, including screening for diabetic retinopathy, glaucoma, and retinopathy of prematurity, as well as applications in emergency medicine and pediatrics. In addition, SBFI is convenient for teaching purposes and might serve as a surrogate for direct ophthalmoscopy. The first wide-field montage techniques are available, and the combination of SBFI with machine learning algorithms for image analysis is promising. In conclusion, SBFI has the potential to make fundus examinations and screenings more accessible, particularly for patients in low- and middle-income settings, and could therefore help tackle the screening burden of diabetic retinopathy, glaucoma, and retinopathy of prematurity. However, image quality in SBFI varies substantially, and a reference standard for grading appears prudent. In addition, there is a strong need for comparisons of different SBFI approaches in terms of applicability to disease screening and cost-effectiveness.
|
295
|
He M, Li Z, Liu C, Shi D, Tan Z. Deployment of Artificial Intelligence in Real-World Practice: Opportunity and Challenge. Asia Pac J Ophthalmol (Phila) 2020; 9:299-307. [PMID: 32694344 DOI: 10.1097/apo.0000000000000301] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence has rapidly evolved from the experimental phase to the implementation phase in many image-driven clinical disciplines, including ophthalmology. The increasing availability of large datasets and computing power, combined with revolutionary progress in deep learning, has created unprecedented opportunities for major improvements in the performance and accuracy of automated diagnosis, primarily focused on image recognition and feature detection. Such automated disease classification would significantly improve the accessibility, efficiency, and cost-effectiveness of eye care systems by making them less dependent on human input, potentially enabling diagnosis that is cheaper, quicker, and more consistent. Although this technology will sooner or later have a profound impact on clinical workflow and practice patterns, translating it into clinical practice is challenging and requires similar levels of accountability and effectiveness as any new medication or medical device, given the potential problems of bias and the ethical, medical, and legal issues that might arise. The objective of this review is to summarize the opportunities and challenges of this transition and to facilitate the integration of artificial intelligence (AI) into routine clinical practice based on our best understanding and experience in this area.
Affiliation(s)
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- School of Computer Science, University of Technology Sydney, Ultimo NSW, Australia
- Danli Shi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zachary Tan
- Faculty of Medicine, The University of Queensland, Brisbane, Australia
- Schwarzman College, Tsinghua University, Beijing, China
|
296
|
Chetoui M, Akhloufi MA. Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets. J Med Imaging (Bellingham) 2020; 7:044503. [PMID: 32904519 PMCID: PMC7456641 DOI: 10.1117/1.jmi.7.4.044503] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2020] [Accepted: 08/07/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: Diabetic retinopathy (DR) is characterized by retinal lesions affecting people who have had diabetes for several years. It is one of the leading causes of visual impairment worldwide. To diagnose this disease, ophthalmologists need to manually analyze retinal fundus images. Computer-aided diagnosis systems can help alleviate this burden by automatically detecting DR on retinal images, thus saving physicians' precious time and reducing costs. The objective of this study is to develop a deep learning algorithm capable of detecting DR on retinal fundus images. Nine public datasets and more than 90,000 images are used to assess the efficiency of the proposed technique. In addition, an explainability algorithm is developed to visually show the DR signs detected by the deep model. Approach: The proposed deep learning algorithm fine-tunes a pretrained deep convolutional neural network for DR detection. The model is trained on a subset of the EyePACS dataset using a cosine annealing strategy with warmup for decaying the learning rate, thus improving the training accuracy. Tests are conducted on the nine datasets. An explainability algorithm based on gradient-weighted class activation mapping is developed to visually show the signs the model selects when classifying retinal images as DR. Results: The proposed network achieves high classification rates, with an area under the curve (AUC) of 0.986, sensitivity of 0.958, and specificity of 0.971 for EyePACS. For MESSIDOR, MESSIDOR-2, DIARETDB0, DIARETDB1, STARE, IDRID, E-ophtha, and UoA-DR, the AUC is 0.963, 0.979, 0.986, 0.988, 0.964, 0.957, 0.984, and 0.990, respectively. Conclusions: The obtained results achieve state-of-the-art performance and outperform past published works that trained using only publicly available datasets. The proposed approach can robustly classify fundus images and detect DR. An explainability model was developed and showed that our model was able to efficiently identify different signs of DR and detect this health issue.
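The cosine-annealing-with-warmup learning-rate strategy this abstract describes can be sketched generically as follows. The function name and all hyperparameter values here are illustrative assumptions, not taken from the paper.

```python
import math

def cosine_lr(step: int, total_steps: int, warmup_steps: int, base_lr: float) -> float:
    """Learning rate at a given step: linear warmup up to base_lr,
    then cosine decay from base_lr down to zero."""
    if step < warmup_steps:
        # ramp linearly so early gradient updates are gentle
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * min(progress, 1.0)))
```

With, say, `warmup_steps=10` and `total_steps=110`, the rate climbs linearly for 10 steps, peaks at `base_lr`, then follows a half-cosine down to zero by the final step.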
Affiliation(s)
- Mohamed Chetoui
- Université de Moncton, Department of Computer Science, Perception, Robotics, and Intelligent Machines Research Group, Moncton, New Brunswick, Canada
- Moulay A. Akhloufi
- Université de Moncton, Department of Computer Science, Perception, Robotics, and Intelligent Machines Research Group, Moncton, New Brunswick, Canada
|
297
|
Kim JS, An SH, Kim YK, Kwon YH. Quantification of Vascular Tortuosity by Analyzing Smartphone Fundus Photographs in Patients with Retinopathy of Prematurity. JOURNAL OF THE KOREAN OPHTHALMOLOGICAL SOCIETY 2020. [DOI: 10.3341/jkos.2020.61.6.624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
298
|
Diving Deep into Deep Learning: An Update on Artificial Intelligence in Retina. CURRENT OPHTHALMOLOGY REPORTS 2020; 8:121-128. [PMID: 33224635 DOI: 10.1007/s40135-020-00240-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Purpose of Review In the present article, we will provide an understanding and review of artificial intelligence in the subspecialty of retina and its potential applications within the specialty. Recent Findings Given the significant use of diagnostic imaging within retina, this subspecialty is a fitting area for the incorporation of artificial intelligence. Researchers have aimed at creating models to assist in the diagnosis and management of retinal disease as well as in the prediction of disease course and treatment response. Most of this work thus far has focused on diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity, although other retinal diseases have started to be explored as well. Summary Artificial intelligence is well-suited to transform the practice of ophthalmology. A basic understanding of the technology is important for its effective implementation and growth.
|
299
|
Fu H, Li F, Xu Y, Liao J, Xiong J, Shen J, Liu J, Zhang X. A Retrospective Comparison of Deep Learning to Manual Annotations for Optic Disc and Optic Cup Segmentation in Fundus Photographs. Transl Vis Sci Technol 2020; 9:33. [PMID: 32832206 PMCID: PMC7414704 DOI: 10.1167/tvst.9.2.33] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Accepted: 04/22/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose Optic disc (OD) and optic cup (OC) segmentation are fundamental for fundus image analysis. Manual annotation is time consuming, expensive, and highly subjective, whereas an automated system is invaluable to the medical community. The aim of this study is to develop a deep learning system to segment the OD and OC in fundus photographs and to evaluate how the algorithm compares against manual annotations. Methods A total of 1200 fundus photographs with 120 glaucoma cases were collected. The OD and OC annotations were labeled by seven licensed ophthalmologists, and glaucoma diagnoses were based on comprehensive evaluations of the subjects' medical records. A deep learning system for OD and OC segmentation was developed. The segmentation performance and glaucoma discrimination based on the cup-to-disc ratio (CDR) of the automated model were compared against the manual annotations. Results The algorithm achieved an OD Dice of 0.938 (95% confidence interval [CI] = 0.934–0.941), an OC Dice of 0.801 (95% CI = 0.793–0.809), and a CDR mean absolute error (MAE) of 0.077 (95% CI = 0.073–0.082). For glaucoma discrimination based on CDR calculations, the algorithm obtained an area under the receiver operating characteristic curve (AUC) of 0.948 (95% CI = 0.920–0.973), with a sensitivity of 0.850 (95% CI = 0.794–0.923) and a specificity of 0.853 (95% CI = 0.798–0.918). Conclusions We demonstrated the potential of the deep learning system to assist ophthalmologists in OD and OC segmentation and in discriminating glaucoma from nonglaucoma subjects based on CDR calculations. Translational Relevance We investigated OD and OC segmentation by a deep learning system compared against manual annotations.
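The two headline metrics in this abstract, the Dice overlap used to score the OD/OC masks and the cup-to-disc ratio used for glaucoma discrimination, can be sketched as follows. Binary masks are represented as sets of pixel coordinates for brevity; this is an illustrative sketch, not the paper's implementation.

```python
def dice(pred: set, truth: set) -> float:
    """Dice coefficient: 2*|A & B| / (|A| + |B|) for two binary masks."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

def vertical_cdr(cup_height: float, disc_height: float) -> float:
    """Vertical cup-to-disc ratio from the two vertical diameters."""
    return cup_height / disc_height

# Toy example: two 3-pixel masks sharing 2 pixels -> Dice = 2*2 / (3+3)
overlap = dice({(0, 0), (0, 1), (1, 1)}, {(0, 1), (1, 1), (2, 2)})
```

A larger CDR suggests a larger excavated cup relative to the disc, which is why thresholding the CDR yields the glaucoma discrimination the study evaluates.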
Affiliation(s)
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yanwu Xu
- Intelligent Healthcare Unit, Baidu, Beijing, China
- Jingan Liao
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- Jian Xiong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jianbing Shen
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Guangzhou, Guangdong, China; Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
|
300
|
Greenwald MF, Danford ID, Shahrawat M, Ostmo S, Brown J, Kalpathy-Cramer J, Bradshaw K, Schelonka R, Cohen HS, Chan RVP, Chiang MF, Campbell JP. Evaluation of artificial intelligence-based telemedicine screening for retinopathy of prematurity. J AAPOS 2020; 24:160-162. [PMID: 32289490 PMCID: PMC7508795 DOI: 10.1016/j.jaapos.2020.01.014] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Revised: 01/25/2020] [Accepted: 01/28/2020] [Indexed: 12/01/2022]
Abstract
Retrospective evaluation of a deep learning-derived retinopathy of prematurity (ROP) vascular severity score in an operational ROP screening program demonstrated high diagnostic performance for detection of type 2 or worse ROP. To our knowledge, this is the first report in the literature to evaluate the use of artificial intelligence for ROP screening, and it represents a proof of concept. With further prospective validation, this technology might improve the accuracy, efficiency, and objectivity of diagnosis and facilitate earlier detection of disease progression in patients with potentially blinding ROP.
Affiliation(s)
- Miles F Greenwald
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Ian D Danford
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Malika Shahrawat
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- James Brown
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston
- Kacy Bradshaw
- Department of Pediatrics, Salem Hospital, Salem, Oregon
- Robert Schelonka
- Department of Pediatrics, Oregon Health & Science University, Portland
- R V Paul Chan
- Department of Ophthalmology, University of Illinois-Chicago
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland; Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland.
|