1
Kubin AM, Huhtinen P, Ohtonen P, Keskitalo A, Wirkkala J, Hautala N. Comparison of 21 artificial intelligence algorithms in automated diabetic retinopathy screening using handheld fundus camera. Ann Med 2024; 56:2352018. [PMID: 38738798; PMCID: PMC11095279; DOI: 10.1080/07853890.2024.2352018]
Abstract
BACKGROUND Diabetic retinopathy (DR) is a common complication of diabetes and may lead to irreversible visual loss. Efficient screening and improved treatment of both diabetes and DR have improved the visual prognosis of DR. The number of patients with diabetes is increasing, and telemedicine, mobile handheld devices, and automated solutions may alleviate the burden on healthcare. We compared the performance of 21 artificial intelligence (AI) algorithms for referable DR screening on datasets captured with a handheld Optomed Aurora fundus camera in a real-world setting. PATIENTS AND METHODS Prospective study of 156 patients (312 eyes) attending DR screening and follow-up. Both papilla- and macula-centred 50° fundus images were taken of each eye. DR was graded by experienced ophthalmologists and by the 21 AI algorithms. RESULTS Most eyes, 183 of 312 (58.7%), had no DR, and mild nonproliferative DR (NPDR) was noted in 21 (6.7%) eyes. Moderate NPDR was detected in 66 (21.2%) eyes, severe NPDR in 1 (0.3%), and proliferative DR (PDR) in 41 (13.1%), together comprising 34.6% of eyes with referable DR. The AI algorithms achieved a mean agreement of 79.4% for referable DR, but results varied from 49.4% to 92.3%. The mean sensitivity for referable DR was 77.5% (95% CI 69.1-85.8) and the mean specificity 80.6% (95% CI 72.1-89.2). The rate of images ungradable by AI varied from 0% to 28.2% (mean 1.9%). Nineteen of 21 (90.5%) AI algorithms produced a DR grade for at least 98% of the images. CONCLUSIONS Fundus images captured with the Optomed Aurora were suitable for DR screening. The performance of the AI algorithms varied considerably, emphasizing the need for external validation of screening algorithms in real-world settings before their clinical application.
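The summary statistics reported above (agreement, sensitivity, and specificity with 95% CIs) can each be derived from a per-algorithm confusion matrix. A minimal sketch, using illustrative counts rather than the study's data, with normal-approximation confidence intervals:

```python
import math

def screen_metrics(tp, fp, tn, fn, z=1.96):
    """Agreement, sensitivity, and specificity (with normal-approximation CIs)
    from a referable-DR confusion matrix."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    sens = prop_ci(tp, tp + fn)          # referable eyes correctly flagged
    spec = prop_ci(tn, tn + fp)          # non-referable eyes correctly passed
    agreement = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, agreement

# Illustrative counts for one algorithm on 312 eyes (108 with referable DR)
sens, spec, agree = screen_metrics(tp=84, fp=40, tn=164, fn=24)
```

For small denominators a Wilson interval would be preferable to the normal approximation used here.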
Affiliation(s)
- Anna-Maria Kubin
- Department of Ophthalmology, Oulu University Hospital, Oulu, Finland
- Research Unit of Clinical Medicine, Oulu, Finland
- Medical Research Center, University of Oulu, Oulu, Finland
- Pasi Ohtonen
- Research Service Unit, Oulu, Finland
- The Research Unit of Surgery, Anesthesia and Intensive Care, Oulu University Hospital and University of Oulu, Oulu, Finland
- Antti Keskitalo
- Department of Ophthalmology, Oulu University Hospital, Oulu, Finland
- Joonas Wirkkala
- Department of Ophthalmology, Oulu University Hospital, Oulu, Finland
- Research Unit of Clinical Medicine, Oulu, Finland
- Medical Research Center, University of Oulu, Oulu, Finland
- Nina Hautala
- Department of Ophthalmology, Oulu University Hospital, Oulu, Finland
- Research Unit of Clinical Medicine, Oulu, Finland
- Medical Research Center, University of Oulu, Oulu, Finland
2
Larsen TJ, Pettersen MB, Nygaard Jensen H, Lynge Pedersen M, Lund-Andersen H, Jørgensen ME, Byberg S. The use of artificial intelligence to assess diabetic eye disease among the Greenlandic population. Int J Circumpolar Health 2024; 83:2314802. [PMID: 38359160; PMCID: PMC10877649; DOI: 10.1080/22423982.2024.2314802]
Abstract
Background: Retinal fundus images captured in Greenland are assessed telemedically for diabetic retinopathy (DR) by ophthalmological nurses in Denmark. Applying an AI grading solution in a Greenlandic setting could potentially improve the efficiency and cost-effectiveness of DR screening. Method: We developed an AI model using retinal fundus photos of persons registered with diabetes in Greenland and Denmark, acquired with an Optos® ultra-widefield scanning laser ophthalmoscope and graded according to the International Clinical Diabetic Retinopathy (ICDR) scale. Using the ResNet50 network, we compared the model's ability to distinguish between images of different ICDR severity levels in a confusion matrix. Results: Comparing images with ICDR level 0 to images with ICDR level 4 resulted in an accuracy of 0.9655, an AUC of 0.9905, and a sensitivity and specificity of 96.6%. Comparing ICDR levels 0-2 with ICDR levels 3-4, we achieved an accuracy of 0.8077, an AUC of 0.8728, a sensitivity of 84.6%, and a specificity of 78.8%. For the other comparisons, we achieved only modest performance. Conclusion: We developed an AI model using Greenlandic data to automatically detect DR on Optos retinal fundus images. The sensitivity and specificity were too low for the model to be applied directly in a clinical setting, so optimising the model should be prioritised.
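Grouped comparisons such as ICDR 0-2 vs. 3-4 amount to binarizing the severity grades before tallying the confusion matrix. A sketch of that step with hypothetical grades (not the study's data):

```python
def binarize(grades, positive_levels=(3, 4)):
    """Collapse ICDR severity grades (0-4) to a binary label."""
    return [1 if g in positive_levels else 0 for g in grades]

def confusion(y_true, y_pred):
    """Return (tp, tn, fp, fn) for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

# Hypothetical grades: grader truth vs. model prediction
truth = binarize([0, 0, 1, 2, 3, 4, 4, 0])
pred  = binarize([0, 1, 1, 3, 3, 4, 2, 0])
tp, tn, fp, fn = confusion(truth, pred)
accuracy = (tp + tn) / len(truth)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```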
Affiliation(s)
- Trine Jul Larsen
- Greenland Center of Health Research, Institute of Nursing and Health Science, University of Greenland, Nuuk, Greenland
- Michael Lynge Pedersen
- Greenland Center of Health Research, Institute of Nursing and Health Science, University of Greenland, Nuuk, Greenland
- Rigshospitalet-Glostrup University Hospital, Glostrup, Denmark
- Henrik Lund-Andersen
- Clinical Epidemiology, Steno Diabetes Center Copenhagen, Copenhagen, Denmark
- Rigshospitalet-Glostrup University Hospital, Glostrup, Denmark
- Stine Byberg
- Clinical Epidemiology, Steno Diabetes Center Copenhagen, Copenhagen, Denmark
3
Chen D, Geevarghese A, Lee S, Plovnick C, Elgin C, Zhou R, Oermann E, Aphinyonaphongs Y, Al-Aswad LA. Transparency in Artificial Intelligence Reporting in Ophthalmology-A Scoping Review. Ophthalmol Sci 2024; 4:100471. [PMID: 38591048; PMCID: PMC11000111; DOI: 10.1016/j.xops.2024.100471]
Abstract
Topic This scoping review summarizes artificial intelligence (AI) reporting in the ophthalmology literature with respect to model development and validation. We characterize the state of transparency in reporting of studies prospectively validating models for disease classification. Clinical Relevance Understanding what elements authors currently describe regarding their AI models may aid in the future standardization of reporting. This review highlights the need for transparency to facilitate the critical appraisal of models prior to clinical implementation, to minimize bias and inappropriate use. Transparent reporting can improve effective and equitable use in clinical settings. Methods Eligible articles (as of January 2022) from PubMed, Embase, Web of Science, and CINAHL were independently screened by 2 reviewers. All observational and clinical trial studies evaluating the performance of an AI model for disease classification of ophthalmic conditions were included. Studies were evaluated for reporting of parameters derived from reporting guidelines (CONSORT-AI, MI-CLAIM) and our previously published editorial on model cards. The reporting of these factors, which included basic model and dataset details (source, demographics) and prospective validation outcomes, was summarized. Results Thirty-seven prospective validation studies were included in the scoping review. Eleven additional associated training and/or retrospective validation studies were included when this information could not be determined from the primary articles. These 37 studies validated 27 unique AI models; multiple studies evaluated the same algorithms (EyeArt, IDx-DR, and Medios AI). Details of model development were variably reported: 18 of 27 models described training dataset annotation, and 10 of 27 studies reported training data distribution. Demographic information of training data was rarely reported; 7 of the 27 unique models reported age and gender, and only 2 reported race and/or ethnicity.
At the level of prospective clinical validation, the age and gender of populations were more consistently reported (29 and 28 of 37 studies, respectively), but only 9 studies reported race and/or ethnicity data. The scope of use was difficult to discern for the majority of models; 15 studies did not state or imply the primary users. Conclusion Our scoping review demonstrates variable reporting of information related to both model development and validation. The intention of our study was not to assess the quality of the factors we examined, but to characterize what information is, and is not, regularly reported. Our results suggest the need for greater transparency in the reporting of information necessary to determine the appropriateness and fairness of these tools prior to clinical use. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Dinah Chen
- Department of Ophthalmology, NYU Langone Health, New York, New York
- Samuel Lee
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, New York
- Cansu Elgin
- Department of Ophthalmology, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Raymond Zhou
- Department of Neurosurgery, Vanderbilt School of Medicine, Nashville, Tennessee
- Eric Oermann
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, New York
- Department of Neurosurgery, NYU Langone Health, New York, New York
- Yindalon Aphinyonaphongs
- Department of Medicine, NYU Langone Health, New York, New York
- Department of Population Health, NYU Grossman School of Medicine, New York, New York
- Lama A. Al-Aswad
- Department of Ophthalmology, NYU Langone Health, New York, New York
- Department of Population Health, NYU Grossman School of Medicine, New York, New York
4
Assié G, Allassonnière S. Artificial Intelligence in Endocrinology: On Track Toward Great Opportunities. J Clin Endocrinol Metab 2024; 109:e1462-e1467. [PMID: 38466742; DOI: 10.1210/clinem/dgae154]
Abstract
In endocrinology, the types and quantity of digital data are increasing rapidly. Computing capabilities are also developing at an incredible rate, as illustrated by the recent expansion in the use of popular generative artificial intelligence (AI) applications. Numerous diagnostic and therapeutic devices using AI have already entered routine endocrine practice, and developments in this field are expected to continue to accelerate. Endocrinologists will need to be supported in managing AI applications. Beyond technological training, interdisciplinary vision is needed to encompass the ethical and legal aspects of AI, to manage the profound impact of AI on patient/provider relationships, and to maintain an optimal balance between human input and AI in endocrinology.
Affiliation(s)
- Guillaume Assié
- Université Paris Cité, CNRS UMR8104, INSERM U1016, Institut Cochin, F-75014 Paris, France
- Service d'endocrinologie, Center for Rare Adrenal Diseases, Assistance Publique-Hôpitaux de Paris, Hôpital Cochin, 75014 Paris, France
- Stéphanie Allassonnière
- Université Paris Cité, UFR Medecine, 75006 Paris, France
- HeKA INSERM, INRIA Paris, Centre de Recherche des Cordeliers Paris, Université Paris Cité, 75006 Paris, France
5
Salongcay RP, Aquino LAC, Alog GP, Locaylocay KB, Saunar AV, Peto T, Silva PS. Accuracy of Integrated Artificial Intelligence Grading Using Handheld Retinal Imaging in a Community Diabetic Eye Screening Program. Ophthalmol Sci 2024; 4:100457. [PMID: 38317871; PMCID: PMC10838904; DOI: 10.1016/j.xops.2023.100457]
Abstract
Purpose To evaluate mydriatic handheld retinal imaging performance assessed by point-of-care (POC) artificial intelligence (AI) as compared with retinal image graders at a centralized reading center (RC) in identifying diabetic retinopathy (DR) and diabetic macular edema (DME). Design Prospective, comparative study. Subjects Five thousand five hundred eighty-five eyes from 2793 adult patients with diabetes. Methods Point-of-care AI assessment of disc and macular handheld retinal images was compared with RC evaluation of validated 5-field handheld retinal images (disc, macula, superior, inferior, and temporal) in identifying referable DR (refDR; defined as moderate nonproliferative DR [NPDR] or worse, or any level of DME) and vision-threatening DR (vtDR; defined as severe NPDR or worse, or any level of center-involving DME [ciDME]). Reading center evaluation of the 5-field images followed the international DR/DME classification. Sensitivity (SN) and specificity (SP) for ungradable images, refDR, and vtDR were calculated. Main Outcome Measures Agreement for DR and DME; SN and SP for refDR, vtDR, and ungradable images. Results Diabetic retinopathy severity by RC evaluation was as follows: no DR, 67.3%; mild NPDR, 9.7%; moderate NPDR, 8.6%; severe NPDR, 4.8%; proliferative DR, 3.8%; and ungradable, 5.8%. Diabetic macular edema severity by RC evaluation was as follows: no DME, 80.4%; non-ciDME, 7.7%; ciDME, 4.4%; and ungradable, 7.5%. Referable DR was present in 25.3% and vtDR in 17.5% of eyes. Images were ungradable for DR or DME in 7.5% of eyes by RC evaluation and 15.4% by AI. There was substantial agreement between AI and RC for refDR (κ = 0.66) and moderate agreement for vtDR (κ = 0.54). The SN/SP of AI grading compared with RC evaluation was 0.86/0.86 for refDR and 0.92/0.80 for vtDR.
Conclusions This study demonstrates that POC AI following a defined handheld retinal imaging protocol at the time of imaging has SN and SP for refDR that meets the current United States Food and Drug Administration thresholds of 85% and 82.5%, but not for vtDR. Integrating AI at the POC could substantially reduce centralized RC burden and speed information delivery to the patient, allowing more prompt eye care referral. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
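The κ statistics above are Cohen's kappa: chance-corrected agreement between the AI and the reading center on binary referable/non-referable calls. A self-contained sketch over an illustrative 2×2 agreement table (not the study's counts):

```python
def cohen_kappa(both_pos, r1_only, r2_only, both_neg):
    """Cohen's kappa from a 2x2 agreement table of two graders."""
    n = both_pos + r1_only + r2_only + both_neg
    po = (both_pos + both_neg) / n                      # observed agreement
    p1_pos = (both_pos + r1_only) / n                   # grader 1 positive rate
    p2_pos = (both_pos + r2_only) / n                   # grader 2 positive rate
    pe = p1_pos * p2_pos + (1 - p1_pos) * (1 - p2_pos)  # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative AI-vs-reading-center table for referable DR
kappa = cohen_kappa(both_pos=1200, r1_only=220, r2_only=190, both_neg=3600)
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance.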
Affiliation(s)
- Recivall P. Salongcay
- Centre for Public Health, Queen’s University Belfast, Belfast, United Kingdom
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Lizzie Anne C. Aquino
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Glenn P. Alog
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Kaye B. Locaylocay
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Aileen V. Saunar
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Tunde Peto
- Centre for Public Health, Queen’s University Belfast, Belfast, United Kingdom
- Paolo S. Silva
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts
- Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
6
Tomić M, Vrabec R, Ljubić S, Prkačin I, Bulum T. Patients with Type 2 Diabetes, Higher Blood Pressure, and Infrequent Fundus Examinations Have a Higher Risk of Sight-Threatening Retinopathy. J Clin Med 2024; 13:2496. [PMID: 38731024; PMCID: PMC11084692; DOI: 10.3390/jcm13092496]
Abstract
Background: Diabetic retinopathy (DR) is the most common cause of preventable blindness among working-age adults. This study aimed to evaluate the impact of the regularity of fundus examinations and risk factor control on the prevalence and severity of DR in patients with type 2 diabetes (T2DM). Methods: One hundred and fifty-six T2DM patients were included in this cross-sectional study. Results: In this sample, the prevalence of DR was 46.2%. Patients with no DR mostly had not undergone regular fundus examinations, while most patients with mild/moderate nonproliferative DR (NPDR) had. For 39.7% of patients, this was their first diabetes-related fundus examination, and 67% of them had sight-threatening DR (STDR). Diabetes duration (p = 0.007), poor glycemic control (HbA1c) (p = 0.006), higher systolic blood pressure (SBP) (p < 0.001), and diastolic blood pressure (DBP) (p = 0.002) were the main predictors of DR. The impact of SBP (AOR 1.07, p = 0.003) and DBP (AOR 1.13, p = 0.005) on DR development remained significant even after adjustment for diabetes duration and HbA1c. DR prevalence was higher in patients with higher blood pressure (≥130/80 mmHg) than in those with target blood pressure (<130/80 mmHg) (p = 0.043), and none of the patients with target blood pressure had STDR. The highest SBP and DBP values were observed in T2DM patients with DR attending their first diabetes-related fundus examination. Conclusions: In this T2DM sample, DR prevalence was very high and strongly related to blood pressure and a lack of regular fundus examinations. These results indicate the necessity of establishing systematic DR screening in routine diabetes care and targeting blood pressure levels according to T2DM guidelines.
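Because logistic-regression odds ratios compound multiplicatively, the per-unit AORs above imply much larger odds multipliers across a clinically meaningful blood-pressure range. A short sketch, assuming the reported AORs are per mmHg (the abstract does not state the unit):

```python
def odds_multiplier(aor_per_unit, units):
    """Odds multiplier implied by a per-unit adjusted odds ratio."""
    return aor_per_unit ** units

# Assuming AORs of 1.07 (SBP) and 1.13 (DBP) are per mmHg
sbp_10mmhg = odds_multiplier(1.07, 10)  # odds of DR per +10 mmHg SBP
dbp_10mmhg = odds_multiplier(1.13, 10)  # odds of DR per +10 mmHg DBP
```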
Affiliation(s)
- Martina Tomić
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, 10000 Zagreb, Croatia
- Romano Vrabec
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, 10000 Zagreb, Croatia
- Spomenka Ljubić
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, 10000 Zagreb, Croatia
- School of Medicine, University of Zagreb, Šalata 3, 10000 Zagreb, Croatia
- Ingrid Prkačin
- School of Medicine, University of Zagreb, Šalata 3, 10000 Zagreb, Croatia
- Department of Internal Medicine, Merkur University Hospital, 10000 Zagreb, Croatia
- Tomislav Bulum
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, 10000 Zagreb, Croatia
- School of Medicine, University of Zagreb, Šalata 3, 10000 Zagreb, Croatia
7
Liu TYA, Huang J, Channa R, Wolf R, Dong Y, Liang M, Wang J, Abramoff M. Autonomous Artificial Intelligence Increases Access and Health Equity in Underserved Populations with Diabetes. Res Sq 2024:rs.3.rs-3979992. [PMID: 38559222; PMCID: PMC10980149; DOI: 10.21203/rs.3.rs-3979992/v1]
Abstract
Diabetic eye disease (DED) is a leading cause of blindness in the world. Early detection and treatment of DED have been shown to be both sight-saving and cost-effective. As such, annual testing for DED is recommended for adults with diabetes and is a Healthcare Effectiveness Data and Information Set (HEDIS) measure. However, adherence to this guideline has historically been low, and access to this sight-saving intervention has been particularly limited for specific populations, such as Black or African American patients. In 2018, the US Food and Drug Administration (FDA) granted De Novo clearance to an autonomous artificial intelligence (AI) system for diagnosing DED in a primary care setting. In 2020, Johns Hopkins Medicine (JHM), an integrated healthcare system with over 30 primary care sites, began deploying autonomous AI for DED testing in some of its primary care clinics. In this retrospective study, we aimed to determine whether autonomous AI implementation was associated with increased adherence to annual DED testing, and whether this differed for specific populations. JHM primary care sites were categorized as "non-AI" sites (sites with no autonomous AI deployment over the study period, where patients are referred to eyecare for DED testing) or "AI-switched" sites (sites that did not have autonomous AI testing in 2019 but did by 2021). We conducted a difference-in-differences analysis using a logistic regression model to compare the change in adherence rates from 2019 to 2021 between non-AI and AI-switched sites. Our study included all adult patients with diabetes managed within our health system (17,674 patients in the 2019 cohort and 17,590 patients in the 2021 cohort) and has three major findings.
First, after controlling for a wide range of potential confounders, our regression analysis demonstrated that the odds ratio of adherence at AI-switched sites was 36% higher than that of non-AI sites, suggesting that there was a higher increase in DED testing between 2019 and 2021 at AI-switched sites than at non-AI sites. Second, our data suggested autonomous AI improved access for historically disadvantaged populations. The adherence rate for Black/African Americans increased by 11.9% within AI-switched sites whereas it decreased by 1.2% within non-AI sites over the same time frame. Third, the data suggest that autonomous AI improved health equity by closing care gaps. For example, in 2019, a large adherence rate gap existed between Asian Americans and Black/African Americans (61.1% vs. 45.5%). This 15.6% gap shrank to 3.5% by 2021. In summary, our real-world deployment results in a large integrated healthcare system suggest that autonomous AI improves adherence to a HEDIS measure, patient access, and health equity for patients with diabetes - particularly in historically disadvantaged patient groups. While our findings are encouraging, they will need to be replicated and validated in a prospective manner across more diverse settings.
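The unadjusted difference-in-differences behind these findings is just the change at AI-switched sites minus the change at non-AI sites; the study's regression additionally adjusts for confounders. Illustrated with the Black/African American adherence changes reported above:

```python
def diff_in_differences(treated_change_pp, control_change_pp):
    """Unadjusted difference-in-differences, in percentage points (pp)."""
    return treated_change_pp - control_change_pp

# Reported 2019 -> 2021 adherence changes for Black/African American patients:
# +11.9 pp at AI-switched sites, -1.2 pp at non-AI sites
did_pp = diff_in_differences(11.9, -1.2)
```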
Affiliation(s)
- Risa Wolf
- Johns Hopkins University School of Medicine
8
Shou BL, Venkatesh K, Chen C, Ghidey R, Lee JH, Wang J, Channa R, Wolf RM, Abramoff MD, Liu TYA. Risk Factors for Nondiagnostic Imaging in a Real-World Deployment of Artificial Intelligence Diabetic Retinal Examinations in an Integrated Healthcare System: Maximizing Workflow Efficiency Through Predictive Dilation. J Diabetes Sci Technol 2024; 18:302-308. [PMID: 37798955; DOI: 10.1177/19322968231201654]
Abstract
OBJECTIVE In the pivotal clinical trial that led to Food and Drug Administration De Novo "approval" of the first fully autonomous artificial intelligence (AI) diabetic retinal disease diagnostic system, a reflexive dilation protocol was used. Using real-world deployment data collected before the implementation of reflexive dilation, we identified factors associated with nondiagnostic results. These factors enable a novel predictive dilation workflow, in which patients most likely to benefit from pharmacologic dilation are dilated a priori to maximize efficiency and patient satisfaction. METHODS Retrospective review of patients who were assessed with autonomous AI at Johns Hopkins Medicine (8/2020 to 5/2021). We constructed a multivariable logistic regression model for nondiagnostic results to compare the characteristics of patients with and without diagnostic results, using adjusted odds ratios (aOR). P < .05 was considered statistically significant. RESULTS Of 241 patients (59% female; median age = 59), 123 (51%) had nondiagnostic results. In multivariable analysis, type 1 diabetes (T1D, aOR = 5.82, 95% confidence interval [CI]: 1.45-23.40, P = .01), smoking (aOR = 2.86, 95% CI: 1.36-5.99, P = .005), and age (aOR = 2.12 per 10-year increase, 95% CI: 1.62-2.77, P < .001) were associated with nondiagnostic results. Following feature elimination, a predictive model was created using T1D, smoking, age, race, sex, and hypertension as inputs. The model showed an area under the receiver operating characteristic curve of 0.76 in five-fold cross-validation. CONCLUSIONS We used the factors associated with nondiagnostic results to design a novel predictive dilation workflow, in which patients most likely to benefit from pharmacologic dilation are dilated a priori. This new workflow has the potential to be more efficient than reflexive dilation, thus maximizing the number of at-risk patients receiving their diabetic retinal examinations.
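The cross-validated AUC of 0.76 measures how well the model's risk scores rank nondiagnostic cases above diagnostic ones; AUC equals the normalized Mann-Whitney U statistic. A minimal sketch with hypothetical scores (not the study's model outputs):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a positive case outranks a negative one,
    counting ties as half (equivalent to normalized Mann-Whitney U)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted risks of a nondiagnostic result
nondiagnostic = [0.9, 0.8, 0.6, 0.55]   # positives: imaging was nondiagnostic
diagnostic = [0.7, 0.4, 0.3, 0.2]       # negatives: imaging was diagnostic
estimated_auc = auc(nondiagnostic, diagnostic)
```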
Affiliation(s)
- Benjamin L Shou
- School of Medicine, The Johns Hopkins University, Baltimore, MD, USA
- Kesavan Venkatesh
- Whiting School of Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Chang Chen
- Bloomberg School of Public Health, The Johns Hopkins University, Baltimore, MD, USA
- Ronel Ghidey
- Bloomberg School of Public Health, The Johns Hopkins University, Baltimore, MD, USA
- Jae Hyoung Lee
- Bloomberg School of Public Health, The Johns Hopkins University, Baltimore, MD, USA
- Jiangxia Wang
- Bloomberg School of Public Health, The Johns Hopkins University, Baltimore, MD, USA
- Roomasa Channa
- Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison, Madison, WI, USA
- Risa M Wolf
- Department of Pediatrics, Division of Pediatric Endocrinology, The Johns Hopkins University, Baltimore, MD, USA
- Michael D Abramoff
- Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA, USA
- T Y Alvin Liu
- Wilmer Eye Institute, The Johns Hopkins University, Baltimore, MD, USA
9
Brennan IG, Kelly SR, McBride E, Garrahy D, Acheson R, Harmon J, McMahon S, Keegan DJ, Kavanagh H, O’Toole L. Addressing Technical Failures in a Diabetic Retinopathy Screening Program. Clin Ophthalmol 2024; 18:431-440. [PMID: 38356695; PMCID: PMC10864767; DOI: 10.2147/opth.s442414]
Abstract
Purpose Diabetic retinopathy (DR) is a preventable cause of blindness detectable through screening using retinal digital photography. The Irish National Diabetic Retina Screening (DRS) programme, Diabetic RetinaScreen, provides free screening services to patients with diabetes aged 12 years and older. A technical failure (TF) occurs when digital retinal imaging is ungradable, resulting in delays in the diagnosis and treatment of sight-threatening disease. Despite their impact, the causes of TFs, and indeed the utility of interventions to prevent them, have not been extensively examined. Aim The primary analysis aimed to identify factors associated with TF. The secondary analysis examined a subset of cases, assessing patient data from five time points between 2019 and 2021 to identify photographer and patient factors associated with TF. Methods Patient data from the DRS database for one provider were extracted for analysis between 2018 and 2022. Information on patient demographics, screening results, and other factors previously associated with TF was analyzed. The primary analysis used mixed-effects logistic regression models with nested patient-eye random effects. The secondary analysis reviewed a subset of cases in detail, checking for causes of TF. Results The primary analysis included a total of 366,528 appointments from 104,407 patients over 5 years. Most patients had Type 2 diabetes (89.2%), and the overall TF rate was 4.9%. Diabetes type and duration, pupil dilation status, and the presence of lens artefacts on the camera were significantly associated with TF. The secondary analysis identified optically dense cataracts as the primary cause of TF, accounting for over half of all TFs. Conclusion This study provides insight into the causes of TF within the Irish DRS program, highlighting cataracts as the primary contributing factor.
The identification of patient-level factors associated with TF facilitates appropriate interventions that can be put in place to improve patient outcomes and minimize delays in treatment and diagnosis.
Affiliation(s)
- Ian Gerard Brennan
- Diabetic RetinaScreen, National Screening Service, Health Service Executive, Dublin, Ireland
- Stephen R Kelly
- Diabetic RetinaScreen, National Screening Service, Health Service Executive, Dublin, Ireland
- Edel McBride
- Diabetic Retinal Screening Service, NEC Care, Cork City, Co. Cork, Ireland
- Darragh Garrahy
- Diabetic RetinaScreen, National Screening Service, Health Service Executive, Dublin, Ireland
- Robert Acheson
- Diabetic Retinal Screening Service, NEC Care, Cork City, Co. Cork, Ireland
- Joanne Harmon
- Diabetic Retinal Screening Service, NEC Care, Cork City, Co. Cork, Ireland
- Shane McMahon
- Diabetic Retinal Screening Service, NEC Care, Cork City, Co. Cork, Ireland
- David J Keegan
- Diabetic RetinaScreen, National Screening Service, Health Service Executive, Dublin, Ireland
- Helen Kavanagh
- Diabetic RetinaScreen, National Screening Service, Health Service Executive, Dublin, Ireland
- Louise O’Toole
- Diabetic Retinal Screening Service, NEC Care, Cork City, Co. Cork, Ireland
10
Feng L, Zhang Y, Wei W, Qiu H, Shi M. Applying deep learning to recognize the properties of vitreous opacity in ophthalmic ultrasound images. Eye (Lond) 2024; 38:380-385. [PMID: 37596401; PMCID: PMC10810903; DOI: 10.1038/s41433-023-02705-7]
Abstract
BACKGROUND To explore the feasibility of artificial intelligence technology based on deep learning to automatically recognize the properties of vitreous opacities in ophthalmic ultrasound images. METHODS A total of 2000 greyscale Doppler ultrasound images, containing non-pathological eyes and three typical vitreous opacities confirmed as physiological vitreous opacity (VO), asteroid hyalosis (AH), and vitreous haemorrhage (VH), were selected and labelled for each lesion type. Five residual networks (ResNet) and two GoogLeNet models were trained to recognize vitreous lesions. Seventy-five percent of the images were randomly selected as the training set, and the remaining 25% as the test set. The accuracy and parameter counts were recorded and compared among these seven deep learning (DL) models. The precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC) values for recognizing vitreous lesions were calculated for the most accurate DL model. RESULTS The seven DL models differed significantly in accuracy and parameter count. GoogLeNet Inception V1 achieved the highest accuracy (95.5%) with the fewest parameters (10,315,580) in vitreous lesion recognition. It achieved precision values of 0.94, 0.94, 0.96, and 0.96, recall values of 0.94, 0.93, 0.97, and 0.98, and F1 scores of 0.94, 0.93, 0.96, and 0.97 for normal, VO, AH, and VH recognition, respectively. The AUC values for these four vitreous lesion types were 0.99, 1.0, 0.99, and 0.99, respectively. CONCLUSIONS GoogLeNet Inception V1 has shown promising results in ophthalmic ultrasound image recognition. As ultrasound image data accumulate, a wide variety of clinically relevant information on eye diseases may be detected automatically by artificial intelligence technology based on deep learning.
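The per-class precision, recall, and F1 scores quoted above can be reproduced mechanically from a confusion matrix. The sketch below uses a small, invented 4×4 matrix for the four classes (normal, VO, AH, VH), not the study's actual counts:

```python
import numpy as np

labels = ["normal", "VO", "AH", "VH"]

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
cm = np.array([
    [120,   5,   2,   1],
    [  6, 110,   3,   1],
    [  1,   2, 130,   2],
    [  0,   1,   1, 115],
])

for i, name in enumerate(labels):
    tp = cm[i, i]
    precision = tp / cm[:, i].sum()     # TP / (TP + FP): column-wise
    recall = tp / cm[i, :].sum()        # TP / (TP + FN): row-wise
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name}: precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```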
Affiliation(s)
- Li Feng: Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Wei Wei: Hebei Eye Hospital, Xingtai, China
- Hui Qiu: Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Mingyu Shi: Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China

11
Skevas C, de Olaguer NP, Lleó A, Thiwa D, Schroeter U, Lopes IV, Mautone L, Linke SJ, Spitzer MS, Yap D, Xiao D. Implementing and evaluating a fully functional AI-enabled model for chronic eye disease screening in a real clinical environment. BMC Ophthalmol 2024; 24:51. PMID: 38302908; PMCID: PMC10832120; DOI: 10.1186/s12886-024-03306-y.
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to increase the affordability and accessibility of eye disease screening, especially with the recent approval of AI-based diabetic retinopathy (DR) screening programs in several countries. METHODS This study investigated the performance, feasibility, and user experience of a seamless hardware and software solution for screening chronic eye diseases in a real-world clinical environment in Germany. The solution integrated AI grading for DR, age-related macular degeneration (AMD), and glaucoma, along with specialist auditing and patient referral decisions. The study comprised several components: (1) evaluating the entire system solution from recruitment to eye image capture and AI grading for DR, AMD, and glaucoma; (2) comparing specialists' grading results with AI grading results; (3) gathering user feedback on the solution. RESULTS A total of 231 patients were recruited, and their consent forms were obtained. The sensitivity, specificity, and area under the curve for DR grading were 100.00%, 80.10%, and 90.00%, respectively. For AMD grading, the values were 90.91%, 78.79%, and 85.00%, and for glaucoma grading, 93.26%, 76.76%, and 85.00%. Analysis of all false-positive cases across the three diseases, compared against the final referral decisions, revealed that only 17 of the 231 patients were falsely referred. The efficacy analysis demonstrated the effectiveness of the AI grading process in the study's testing environment. Clinical staff using the system gave positive feedback on the disease screening process, particularly praising the seamless workflow from patient registration to image transmission and obtaining the final result. A questionnaire completed by 12 participants indicated that most found the system easy and quick to use and highly satisfactory. The study also revealed room for improvement in the AMD model, suggesting the need to enhance its training data. The performance of glaucoma grading could also be improved by incorporating additional measures such as intraocular pressure. CONCLUSIONS The implementation of the AI-based approach for screening three chronic eye diseases proved effective in real-world settings, earning positive feedback on the usability of the integrated platform from both the screening staff and auditors. The auditing function has proven valuable for obtaining efficient second opinions from experts, pointing to its potential for enhancing remote screening capabilities. TRIAL REGISTRATION Institutional Review Board of the Hamburg Medical Chamber (Ethik-Kommission der Ärztekammer Hamburg): 2021-10574-BO-ff.
Affiliation(s)
- Christos Skevas: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Albert Lleó: TeleMedC GmbH, Raboisen 32, 20095, Hamburg, Germany
- David Thiwa: Department of Otorhinolaryngology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Ulrike Schroeter: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Inês Valente Lopes: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Luca Mautone: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Stephan J Linke: Zentrum Sehestaerke, Martinistraße 64, 20251, Hamburg, Germany
- Martin Stephan Spitzer: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Daniel Yap: TeleMedC Pty Ltd, 61 Ubi Avenue 1, #06-11 UBPoint, Singapore, 40894, Singapore
- Di Xiao: TeleMedC Pty Ltd, Brisbane Technology Park, Level 2, 1 Westlink Court, Darra, QLD 4076, Australia

12
Kawasaki R. How Can Artificial Intelligence Be Implemented Effectively in Diabetic Retinopathy Screening in Japan? Medicina (Kaunas) 2024; 60:243. PMID: 38399532; PMCID: PMC10890175; DOI: 10.3390/medicina60020243.
Abstract
Diabetic retinopathy (DR) is a major microvascular complication of diabetes, affecting a substantial portion of diabetic patients worldwide. Timely intervention is pivotal in mitigating the risk of blindness associated with DR, yet early detection remains a challenge due to the absence of early symptoms. Screening programs have emerged as a strategy to address this burden, and this paper delves into the role of artificial intelligence (AI) in advancing DR screening in Japan. There are two pathways for DR screening in Japan: a health screening pathway and a clinical referral pathway from physicians to ophthalmologists. AI technologies that automate image classification through deep learning are emerging and have exhibited substantial promise, achieving sensitivity and specificity levels exceeding 90% in prospective studies. Moreover, we introduce the potential of generative AI and large language models (LLMs) to transform healthcare delivery, particularly in patient engagement, medical records, and decision support. For the use of AI in DR screening in Japan, we propose following a seven-step framework for systematic screening and emphasize the importance of integrating AI into a well-designed screening program. Automated scoring systems with AI enhance screening quality, but their effectiveness depends on their integration into the broader screening ecosystem. LLMs emerge as an important tool to fill gaps in the screening process, from personalized invitations to reporting results, facilitating a seamless and efficient system. However, it is essential to address concerns surrounding technical accuracy and governance before full-scale integration into the healthcare system. In conclusion, this review highlights the challenges in the current screening pathway and the potential for AI, particularly LLMs, to revolutionize DR screening in Japan. The future direction will depend on leadership from ophthalmologists and stakeholders in addressing long-standing challenges in DR screening so that everyone has access to effective screening.
Affiliation(s)
- Ryo Kawasaki: Division of Public Health, Department of Social Medicine, Graduate School of Medicine, Osaka University, Suita 565-0871, Japan; Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Suita 565-0871, Japan

13
Winder AJ, Stanley EA, Fiehler J, Forkert ND. Challenges and Potential of Artificial Intelligence in Neuroradiology. Clin Neuroradiol 2024. PMID: 38285239; DOI: 10.1007/s00062-024-01382-7.
Abstract
PURPOSE Artificial intelligence (AI) has emerged as a transformative force in medical research and is garnering increased attention in the public consciousness. This represents a critical time period in which medical researchers, healthcare providers, insurers, regulatory agencies, and patients are all developing and shaping their beliefs and policies regarding the use of AI in the healthcare sector. The successful deployment of AI will require support from all these groups. This commentary proposes that widespread support for medical AI must be driven by clear and transparent scientific reporting, beginning at the earliest stages of scientific research. METHODS A review of relevant guidelines and literature describing how scientific reporting plays a central role at key stages in the life cycle of an AI software product was conducted. To contextualize this principle within a specific medical domain, we discuss the current state of predictive tissue outcome modeling in acute ischemic stroke and the unique challenges presented therein. RESULTS AND CONCLUSION Translating AI methods from the research to the clinical domain is complicated by challenges related to model design and validation studies, medical product regulations, and healthcare providers' reservations regarding AI's efficacy and affordability. However, each of these limitations is also an opportunity for high-impact research that will help to accelerate the clinical adoption of state-of-the-art medical AI. In all cases, establishing and adhering to appropriate reporting standards is an important responsibility that is shared by all of the parties involved in the life cycle of a prospective AI software product.
Affiliation(s)
- Anthony J Winder: Department of Radiology, University of Calgary, Calgary, Canada; Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
- Emma A. M. Stanley: Department of Radiology, University of Calgary, Calgary, Canada; Hotchkiss Brain Institute, University of Calgary, Calgary, Canada; Biomedical Engineering Graduate Program, University of Calgary, Calgary, Canada; Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Canada
- Jens Fiehler: Department of Diagnostic and Interventional Neuroradiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Nils D Forkert: Department of Radiology, University of Calgary, Calgary, Canada; Hotchkiss Brain Institute, University of Calgary, Calgary, Canada; Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Canada; Department of Clinical Neuroscience, University of Calgary, Calgary, Canada; Department of Electrical and Software Engineering, University of Calgary, Calgary, Canada

14
Badge A, Chandankhede M, Gajbe U, Bankar NJ, Bandre GR. Employment of Small-Group Discussions to Ensure the Effective Delivery of Medical Education. Cureus 2024; 16:e52655. PMID: 38380198; PMCID: PMC10877665; DOI: 10.7759/cureus.52655.
Abstract
The changing landscape of medical education has made small-group discussions crucial components. These sessions, including problem-based learning (PBL), case-based learning (CBL), and team-based learning (TBL), revolutionize learning by fostering active participation, critical thinking, and practical skills application. They bridge theory with practice, preparing future healthcare professionals for the dynamic challenges of modern healthcare. Despite their transformative potential, there are challenges in faculty preparation, resource allocation, and effective evaluation. The best practices include aligning discussions with curriculum goals, skilled facilitation, promoting active participation, and robust assessment strategies. Looking ahead, adapting to emerging health trends, ongoing research, and evolving healthcare demands will ensure the continued relevance and effectiveness of small-group discussions, shaping competent and adaptable healthcare providers equipped for the ever-evolving healthcare landscape.
Affiliation(s)
- Ankit Badge: Microbiology, Datta Meghe Medical College, Datta Meghe Institute of Higher Education and Research, Nagpur, IND
- Manju Chandankhede: Biochemistry, Datta Meghe Medical College, Datta Meghe Institute of Higher Education and Research, Nagpur, IND
- Ujwal Gajbe: Anatomy, Datta Meghe Medical College, Datta Meghe Institute of Higher Education and Research, Nagpur, IND
- Nandkishor J Bankar: Microbiology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Gulshan R Bandre: Microbiology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND

15
Talcott KE, Valentim CCS, Perkins SW, Ren H, Manivannan N, Zhang Q, Bagherinia H, Lee G, Yu S, D'Souza N, Jarugula H, Patel K, Singh RP. Automated Detection of Abnormal Optical Coherence Tomography B-scans Using a Deep Learning Artificial Intelligence Neural Network Platform. Int Ophthalmol Clin 2024; 64:115-127. PMID: 38146885; DOI: 10.1097/iio.0000000000000519.
16
Than J, Sim PY, Muttuvelu D, Ferraz D, Koh V, Kang S, Huemer J. Teleophthalmology and retina: a review of current tools, pathways and services. Int J Retina Vitreous 2023; 9:76. PMID: 38053188; DOI: 10.1186/s40942-023-00502-8.
Abstract
Telemedicine, the use of telecommunication and information technology to deliver healthcare remotely, has evolved beyond recognition since its inception in the 1970s. Advances in telecommunication infrastructure, the advent of the Internet, exponential growth in computing power and associated computer-aided diagnosis, and medical imaging developments have created an environment where telemedicine is more accessible and capable than ever before, particularly in the field of ophthalmology. Ever-increasing global demand for ophthalmic services due to population growth and ageing together with insufficient supply of ophthalmologists requires new models of healthcare provision integrating telemedicine to meet present day challenges, with the recent COVID-19 pandemic providing the catalyst for the widespread adoption and acceptance of teleophthalmology. In this review we discuss the history, present and future application of telemedicine within the field of ophthalmology, and specifically retinal disease. We consider the strengths and limitations of teleophthalmology, its role in screening, community and hospital management of retinal disease, patient and clinician attitudes, and barriers to its adoption.
Affiliation(s)
- Jonathan Than: Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Peng Y Sim: Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Danson Muttuvelu: Department of Ophthalmology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark; MitØje ApS/Danske Speciallaeger Aps, Aarhus, Denmark
- Daniel Ferraz: D'Or Institute for Research and Education (IDOR), São Paulo, Brazil; Institute of Ophthalmology, University College London, London, UK
- Victor Koh: Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Swan Kang: Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Josef Huemer: Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK; Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria

17
Chen JS, Lin MC, Yiu G, Thorne C, Kulasa K, Stewart J, Nudleman E, Freeby M, Han MA, Baxter SL. Barriers to Implementation of Teleretinal Diabetic Retinopathy Screening Programs Across the University of California. Telemed J E Health 2023; 29:1810-1818. PMID: 37256712; PMCID: PMC10714257; DOI: 10.1089/tmj.2022.0489.
Abstract
Aim: To describe barriers to implementation of diabetic retinopathy (DR) teleretinal screening programs and artificial intelligence (AI) integration at the University of California (UC). Methods: Institutional representatives from UC Los Angeles, San Diego, San Francisco, Irvine, and Davis were surveyed for the year of their program's initiation, active status at the time of survey (December 2021), number of primary care clinics involved, screening image quality, types of eye providers, image interpretation turnaround time, and billing codes used. Representatives were asked to rate perceptions of barriers to teleretinal DR screening and AI implementation using a 5-point Likert scale. Results: Four UC campuses had active DR teleretinal screening programs at the time of survey and screened between 246 and 2,123 patients at 1-6 clinics per campus. Sites reported variation in the proportion of poor-quality photos (<5% to 15%) and in average image interpretation time (1-5 days). Patient education, resource availability, and infrastructural support were identified as barriers to DR teleretinal screening. Cost and integration into existing technology infrastructures were identified as barriers to AI integration in DR screening. Conclusions: Despite the potential to increase access to care, several barriers to widespread implementation of DR teleretinal screening remain. More research is needed to develop best practices to overcome these barriers.
Affiliation(s)
- Jimmy S. Chen: Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA
- Mark C. Lin: Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA
- Glenn Yiu: Department of Ophthalmology and Vision Science, University of California Davis Health, Sacramento, California, USA
- Christine Thorne: Department of Family Medicine and Public Health, University of California San Diego, La Jolla, California, USA
- Kristen Kulasa: Department of Endocrinology, University of California San Diego, La Jolla, California, USA
- Jay Stewart: Department of Ophthalmology, University of California, San Francisco, San Francisco, California, USA; Department of Ophthalmology, Zuckerberg San Francisco General Hospital and Trauma Center, San Francisco, California, USA
- Eric Nudleman: Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA
- Matthew Freeby: Department of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Maria A. Han: Department of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Sally L. Baxter: Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA; Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California, USA

18
Dow ER, Khan NC, Chen KM, Mishra K, Perera C, Narala R, Basina M, Dang J, Kim M, Levine M, Phadke A, Tan M, Weng K, Do DV, Moshfeghi DM, Mahajan VB, Mruthyunjaya P, Leng T, Myung D. AI-Human Hybrid Workflow Enhances Teleophthalmology for the Detection of Diabetic Retinopathy. Ophthalmol Sci 2023; 3:100330. PMID: 37449051; PMCID: PMC10336195; DOI: 10.1016/j.xops.2023.100330.
Abstract
Objective Detection of diabetic retinopathy (DR) outside of specialized eye care settings is an important means of access to vision-preserving health maintenance. Remote interpretation of fundus photographs acquired in a primary care or other nonophthalmic setting in a store-and-forward manner is a predominant paradigm of teleophthalmology screening programs. Artificial intelligence (AI)-based image interpretation offers an alternative means of DR detection. IDx-DR (Digital Diagnostics Inc) is a Food and Drug Administration-authorized autonomous testing device for DR. We evaluated the diagnostic performance of IDx-DR compared with human-based teleophthalmology over 2 and a half years. Additionally, we evaluated an AI-human hybrid workflow that combines AI-system evaluation with human expert-based assessment for referable cases. Design Prospective cohort study and retrospective analysis. Participants Diabetic patients ≥ 18 years old without a prior DR diagnosis or DR examination in the past year presenting for routine DR screening in a primary care clinic. Methods Macula-centered and optic nerve-centered fundus photographs were evaluated by an AI algorithm followed by consensus-based overreading by retina specialists at the Stanford Ophthalmic Reading Center. Detection of more-than-mild diabetic retinopathy (MTMDR) was compared with in-person examination by a retina specialist. Main Outcome Measures Sensitivity, specificity, accuracy, positive predictive value, and gradability achieved by the AI algorithm and retina specialists. Results The AI algorithm had higher sensitivity (95.5% sensitivity; 95% confidence interval [CI], 86.7%-100%) but lower specificity (60.3% specificity; 95% CI, 47.7%-72.9%) for detection of MTMDR compared with remote image interpretation by retina specialists (69.5% sensitivity; 95% CI, 50.7%-88.3%; 96.9% specificity; 95% CI, 93.5%-100%). 
Gradability of encounters was also lower for the AI algorithm (62.5%) compared with retina specialists (93.1%). A 2-step AI-human hybrid workflow in which the AI algorithm initially rendered an assessment followed by overread by a retina specialist of MTMDR-positive encounters resulted in a sensitivity of 95.5% (95% CI, 86.7%-100%) and a specificity of 98.2% (95% CI, 94.6%-100%). Similarly, a 2-step overread by retina specialists of AI-ungradable encounters improved gradability from 63.5% to 95.6% of encounters. Conclusions Implementation of an AI-human hybrid teleophthalmology workflow may both decrease reliance on human specialist effort and improve diagnostic accuracy. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
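The two-step AI-human hybrid workflow described above can be sketched as a simple simulation: an AI step screens every encounter, and only AI-positive encounters are overread by a specialist. The operating points and cohort below are invented (loosely inspired by the ranges reported), so the exact outputs are illustrative only; note that serially overreading positives can only raise specificity while slightly lowering sensitivity relative to the AI step alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n, prevalence = 20000, 0.15
disease = rng.random(n) < prevalence

# Hypothetical operating points for each step of the workflow.
ai_sens, ai_spec = 0.95, 0.60      # step 1: autonomous AI screen (sensitive, unspecific)
doc_sens, doc_spec = 0.95, 0.97    # step 2: retina-specialist overread

ai_pos = np.where(disease, rng.random(n) < ai_sens, rng.random(n) > ai_spec)

# Only AI-positive encounters go to the specialist; AI-negatives are final.
final_pos = ai_pos.copy()
idx = np.flatnonzero(ai_pos)
final_pos[idx] = np.where(disease[idx],
                          rng.random(idx.size) < doc_sens,
                          rng.random(idx.size) > doc_spec)

sens = final_pos[disease].mean()
spec = (~final_pos[~disease]).mean()
print(f"hybrid sensitivity={sens:.3f}  specificity={spec:.3f}")
```

Under these assumed operating points, hybrid specificity approaches 1 - (1 - ai_spec) x (1 - doc_spec), which is why the study's combined workflow recovers the specialist-level specificity the AI step lacks.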
Affiliation(s)
- Eliot R. Dow: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California; Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Nergis C. Khan: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Karen M. Chen: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Kapil Mishra: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Chandrashan Perera: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Ramsudha Narala: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Marina Basina: Stanford Healthcare, Stanford University, Palo Alto, California
- Jimmy Dang: Stanford Healthcare, Stanford University, Palo Alto, California
- Michael Kim: Stanford Healthcare, Stanford University, Palo Alto, California
- Marcie Levine: Stanford Healthcare, Stanford University, Palo Alto, California
- Anuradha Phadke: Stanford Healthcare, Stanford University, Palo Alto, California
- Marilyn Tan: Stanford Healthcare, Stanford University, Palo Alto, California
- Kirsti Weng: Stanford Healthcare, Stanford University, Palo Alto, California
- Diana V. Do: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Darius M. Moshfeghi: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Vinit B. Mahajan: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California; Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Prithvi Mruthyunjaya: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Theodore Leng: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- David Myung: Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California

19
Osborne D, Steele A, Evans M, Ellis H, Pancholi R, Harding T, Dee J, Leary R, Bradshaw J, O'Flynn E, Self JE. Children's visual acuity tests without professional supervision: a prospective repeated measures study. Eye (Lond) 2023; 37:3762-3767. PMID: 37328509; PMCID: PMC10697985; DOI: 10.1038/s41433-023-02597-7.
Abstract
BACKGROUND Home visual acuity tests could ease pressure on ophthalmic services by facilitating remote review of patients. Home tests may have further utility in giving service users frequent updates of vision outcomes during therapy, identifying vision problems in an asymptomatic population, and engaging stakeholders in therapy. METHODS Children attending outpatient clinics had visual acuity measured three times at the same appointment: once by a registered orthoptist per clinical protocols, once by an orthoptist using a tablet-based visual acuity test (iSight Test Pro, Kay Pictures), and once by an unsupervised parent/carer using the tablet-based test. RESULTS In total, 42 children were recruited to the study. The mean age was 5.6 years (range 3.3 to 9.3 years). Median and interquartile ranges (IQR) for clinical standard, orthoptist-led, and parent/carer-led iSight Test Pro visual acuity measurements were 0.155 (0.18 IQR), 0.180 (0.26 IQR), and 0.300 (0.33 IQR) logMAR, respectively. In the hands of parents/carers, the iSight Test Pro differed significantly from standard-of-care measurements (P = 0.008). There was no significant difference between orthoptists using the iSight Test Pro and the standard of care (P = 0.289), nor between orthoptist and parent/carer iSight Test Pro measurements (P = 0.108). CONCLUSION This technique of unsupervised visual acuity measurement for children is not comparable to clinical measures and is unlikely to be valuable to clinical decision making. Future work should focus on improving the accuracy of the test through better training, equipment/software, or supervision/support.
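The summary statistics reported above (median and interquartile range of logMAR acuity) and a paired significance test are straightforward to compute. The sketch below runs on invented paired measurements, not the study data, and uses an exact paired sign test as a simple stand-in for whatever test the authors applied:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n = 42

# Hypothetical paired logMAR scores: clinic-standard test vs parent/carer-led tablet test.
clinic = rng.normal(0.155, 0.15, n)
parent = clinic + rng.normal(0.145, 0.10, n)   # assumed systematic offset, illustration only

def median_iqr(x):
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return med, q3 - q1

def sign_test_p(a, b):
    """Exact two-sided paired sign test (ignores the size of each difference)."""
    diffs = (b - a)[(b - a) != 0]
    k, m = int((diffs > 0).sum()), diffs.size
    tail = sum(math.comb(m, i) for i in range(min(k, m - k) + 1)) / 2 ** m
    return min(1.0, 2 * tail)

for name, x in [("clinic standard", clinic), ("parent/carer-led", parent)]:
    med, iqr = median_iqr(x)
    print(f"{name}: median={med:.3f} logMAR (IQR {iqr:.2f})")
print(f"paired sign test: p = {sign_test_p(clinic, parent):.2e}")
```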
Collapse
Affiliation(s)
- Daniel Osborne
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
  - University of Southampton, Faculty of Medicine, Southampton, UK
- Aimee Steele
  - University of Southampton, Faculty of Medicine, Southampton, UK
- Megan Evans
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Helen Ellis
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Roshni Pancholi
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Tomos Harding
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Jessica Dee
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Rachel Leary
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Jeremy Bradshaw
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Elizabeth O'Flynn
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
- Jay E Self
  - University Hospital Southampton NHS Foundation Trust, Department of Ophthalmology, Southampton, UK
  - University of Southampton, Faculty of Medicine, Southampton, UK
|
20
|
Winkelman J, Nguyen D, vanSonnenberg E, Kirk A, Lieberman S. Artificial Intelligence (AI) in pediatric endocrinology. J Pediatr Endocrinol Metab 2023; 36:903-908. [PMID: 37589444 DOI: 10.1515/jpem-2023-0287] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2023] [Accepted: 08/03/2023] [Indexed: 08/18/2023]
Abstract
Artificial intelligence (AI) is being integrated throughout the medical community. AI's ability to analyze complex patterns and interpret large amounts of data will have a considerable impact on all areas of medicine, including pediatric endocrinology. In this paper, we review and update the current studies of AI in pediatric endocrinology. Specific topics addressed include diabetes management, bone growth, metabolism, obesity, and puberty. The goal of this paper is to help pediatric endocrinologists become knowledgeable about, and comfortable with, AI.
Affiliation(s)
- Diep Nguyen
  - University of Arizona College of Medicine Phoenix, Phoenix, USA
- Eric vanSonnenberg
  - University of Arizona College of Medicine Phoenix, Phoenix, USA
  - Departments of Radiology, University of Arizona College of Medicine Phoenix, Phoenix, USA
  - Student Affairs, University of Arizona College of Medicine Phoenix, Phoenix, USA
- Alison Kirk
  - University of Arizona College of Medicine Phoenix, Phoenix, USA
  - Student Affairs, University of Arizona College of Medicine Phoenix, Phoenix, USA
  - Pediatrics, University of Arizona College of Medicine Phoenix, Phoenix, USA
- Steven Lieberman
  - University of Arizona College of Medicine Phoenix, Phoenix, USA
  - Internal Medicine (Division of Endocrinology), University of Arizona College of Medicine Phoenix, Phoenix, USA
|
21
|
Clark KK, Gutierrez J, Cody JR, Padilla BI. Implementation of Diabetic Retinopathy Screening in Adult Patients With Type 2 Diabetes in a Primary Care Setting. Clin Diabetes 2023; 42:223-231. [PMID: 38694241 PMCID: PMC11060615 DOI: 10.2337/cd23-0032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 05/04/2024]
Abstract
Diabetic retinopathy (DR) is a microvascular complication of type 2 diabetes and the leading cause of blindness globally. Although diabetes-related eye exams are widely recognized as an effective method for early detection of DR, which can help to prevent eventual vision loss, adherence to screening exams in the United States is suboptimal. This article describes a quality improvement project to increase DR screening rates and increase knowledge and awareness of DR in adults with type 2 diabetes in a primary care setting using mobile DR screening units. This project addressed gaps of care and demonstrated that primary care settings can increase access to DR screening through a patient-centered process and thereby help to prevent irreversible outcomes of DR and improve quality of life.
|
22
|
van Breugel M, Fehrmann RSN, Bügel M, Rezwan FI, Holloway JW, Nawijn MC, Fontanella S, Custovic A, Koppelman GH. Current state and prospects of artificial intelligence in allergy. Allergy 2023; 78:2623-2643. [PMID: 37584170 DOI: 10.1111/all.15849] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 07/08/2023] [Accepted: 07/31/2023] [Indexed: 08/17/2023]
Abstract
The field of medicine is witnessing an exponential growth of interest in artificial intelligence (AI), which enables new research questions and the analysis of larger and new types of data. Nevertheless, applications that go beyond proof of concepts and deliver clinical value remain rare, especially in the field of allergy. This narrative review provides a fundamental understanding of the core concepts of AI and critically discusses its limitations and open challenges, such as data availability and bias, along with potential directions to surmount them. We provide a conceptual framework to structure AI applications within this field and discuss forefront case examples. Most of these applications of AI and machine learning in allergy concern supervised learning and unsupervised clustering, with a strong emphasis on diagnosis and subtyping. A perspective is shared on guidelines for good AI practice to guide readers in applying it effectively and safely, along with prospects of field advancement and initiatives to increase clinical impact. We anticipate that AI can further deepen our knowledge of disease mechanisms and contribute to precision medicine in allergy.
Affiliation(s)
- Merlijn van Breugel
  - Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
  - Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
  - MIcompany, Amsterdam, the Netherlands
- Rudolf S N Fehrmann
  - Department of Medical Oncology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Faisal I Rezwan
  - Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
  - Department of Computer Science, Aberystwyth University, Aberystwyth, UK
- John W Holloway
  - Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
  - National Institute for Health and Care Research Southampton Biomedical Research Centre, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
- Martijn C Nawijn
  - Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
  - Department of Pathology and Medical Biology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Sara Fontanella
  - National Heart and Lung Institute, Imperial College London, London, UK
  - National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Adnan Custovic
  - National Heart and Lung Institute, Imperial College London, London, UK
  - National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Gerard H Koppelman
  - Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
  - Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
|
23
|
Le JP, Shashikumar SP, Malhotra A, Nemati S, Wardi G. Making the Improbable Possible: Generalizing Models Designed for a Syndrome-Based, Heterogeneous Patient Landscape. Crit Care Clin 2023; 39:751-768. [PMID: 37704338 PMCID: PMC10758922 DOI: 10.1016/j.ccc.2023.02.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/15/2023]
Abstract
Syndromic conditions, such as sepsis, are commonly encountered in the intensive care unit. Although these conditions are easy for clinicians to grasp, they may limit the performance of machine-learning algorithms. Individual hospital practice patterns may limit external generalizability. Data missingness is another barrier to optimal algorithm performance, and various strategies exist to mitigate it. Recent advances in data science, such as transfer learning, conformal prediction, and continual learning, may improve the generalizability of machine-learning algorithms in critically ill patients. Randomized trials of these approaches are now needed to demonstrate improvements in patient-centered outcomes.
Affiliation(s)
- Joshua Pei Le
  - School of Medicine, University of Limerick, Castletroy, Co. Limerick V94 T9PX, Ireland
- Atul Malhotra
  - Division of Pulmonary, Critical Care and Sleep Medicine, University of California San Diego, San Diego, CA, USA
- Shamim Nemati
  - Division of Biomedical Informatics, University of California San Diego, San Diego, CA, USA
- Gabriel Wardi
  - Division of Pulmonary, Critical Care and Sleep Medicine, University of California San Diego, San Diego, CA, USA
  - Department of Emergency Medicine, University of California San Diego, 200 W Arbor Drive, San Diego, CA 92103, USA
|
24
|
Okita Y, Hirano T, Wang B, Nakashima Y, Minoda S, Nagahara H, Kumanogoh A. Automatic evaluation of atlantoaxial subluxation in rheumatoid arthritis by a deep learning model. Arthritis Res Ther 2023; 25:181. [PMID: 37749583 PMCID: PMC10518918 DOI: 10.1186/s13075-023-03172-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Accepted: 09/13/2023] [Indexed: 09/27/2023] Open
Abstract
BACKGROUND This work aims to develop a deep learning model for assessing atlantoaxial subluxation (AAS) in rheumatoid arthritis (RA), which can often be ambiguous in clinical practice. METHODS We collected 4691 X-ray images of the cervical spine from 906 patients with RA. Of these images, 3480 were used to train the deep learning model, 803 to validate the model during training, and the remaining 408 to test the performance of the trained model. The two-dimensional keypoint detection model of Deep High-Resolution Representation Learning for Human Pose Estimation (HRNet) was adopted as the base convolutional neural network. The model inferred four coordinates used to calculate the atlantodental interval (ADI) and the space available for the spinal cord (SAC). Finally, these values were compared with those determined by clinicians to evaluate the performance of the model. RESULTS Among the 408 cervical images in the test set, the trained model correctly identified the four coordinates in 99.5% of cases. The values of ADI and SAC were positively correlated between the model and two clinicians. The sensitivity of AAS diagnosis with ADI or SAC by the model was 0.86 and 0.97, respectively; the corresponding specificity was 0.57 and 0.50. CONCLUSIONS We present a deep learning model for the evaluation of cervical lesions in patients with RA. The model was shown to be useful for quantitative evaluation.
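The sensitivity and specificity figures above come from comparing the model's binary AAS calls against clinician labels. A minimal sketch of that computation, with a hypothetical ADI cut-off and invented data (not the study's threshold or measurements):

```python
def sensitivity_specificity(pred, truth):
    """Compute (sensitivity, specificity) from parallel boolean lists."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))  # true negatives
    fp = sum(p and (not t) for p, t in zip(pred, truth))    # false positives
    fn = sum((not p) and t for p, t in zip(pred, truth))    # false negatives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: flag AAS when the atlantodental interval exceeds 3 mm
adi_mm = [2.0, 3.5, 4.1, 2.5, 3.2, 1.8]
truth  = [False, True, True, False, False, False]  # clinician labels
pred = [v > 3.0 for v in adi_mm]
sens, spec = sensitivity_specificity(pred, truth)  # → (1.0, 0.75)
```

The same helper applies unchanged to SAC-based calls; only the measurement list and threshold differ.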
Affiliation(s)
- Yasutaka Okita
  - Department of Respiratory Medicine and Clinical Immunology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Toru Hirano
  - Department of Rheumatology, Nishinomiya Municipal Central Hospital, Hyogo, Japan
- Bowen Wang
  - Osaka University Institute for Datability Science (IDS), Suita, Osaka, Japan
- Yuta Nakashima
  - Osaka University Institute for Datability Science (IDS), Suita, Osaka, Japan
- Saki Minoda
  - Department of Respiratory Medicine and Clinical Immunology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Hajime Nagahara
  - Osaka University Institute for Datability Science (IDS), Suita, Osaka, Japan
- Atsushi Kumanogoh
  - Department of Respiratory Medicine and Clinical Immunology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
  - Laboratory of Immunopathology, World Premier International Immunology Frontier Research Center, Osaka University, Suita, Osaka, Japan
  - The Institute for Open and Transdisciplinary Research Initiatives (OTRI), Osaka, Japan
|
25
|
Zhelev Z, Peters J, Rogers M, Allen M, Kijauskaite G, Seedat F, Wilkinson E, Hyde C. Test accuracy of artificial intelligence-based grading of fundus images in diabetic retinopathy screening: A systematic review. J Med Screen 2023; 30:97-112. [PMID: 36617971 PMCID: PMC10399100 DOI: 10.1177/09691413221144382] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Revised: 11/14/2022] [Accepted: 11/18/2022] [Indexed: 01/10/2023]
Abstract
OBJECTIVES To systematically review the accuracy of artificial intelligence (AI)-based systems for grading of fundus images in diabetic retinopathy (DR) screening. METHODS We searched MEDLINE, EMBASE, the Cochrane Library and ClinicalTrials.gov from 1 January 2000 to 27 August 2021. Accuracy studies published in English were included if they met the pre-specified inclusion criteria. Selection of studies for inclusion, data extraction and quality assessment were conducted by one author, with a second reviewer independently screening and checking 20% of titles. Results were analysed narratively. RESULTS Forty-three studies evaluating 15 deep learning (DL) and 4 machine learning (ML) systems were included. Nine systems were evaluated in a single study each. Most studies were judged to be at high or unclear risk of bias in at least one QUADAS-2 domain. Sensitivity for referable DR and higher grades was ≥85%, while specificity varied and was <80% for all ML systems and in 6/31 studies evaluating DL systems. Studies reported high accuracy for detection of ungradable images, but the latter were analysed and reported inconsistently. Seven studies reported that AI was more sensitive but less specific than human graders. CONCLUSIONS AI-based systems are more sensitive than human graders and could be safe to use in clinical practice, but have variable specificity. However, for many systems the evidence is limited, at high risk of bias, and may not generalise across settings. Therefore, pre-implementation assessment in the target clinical pathway is essential to obtain reliable and applicable accuracy estimates.
Affiliation(s)
- Zhivko Zhelev
  - Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
- Jaime Peters
  - Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
- Morwenna Rogers
  - NIHR ARC South West Peninsula (PenARC), University of Exeter Medical School, University of Exeter, Exeter, UK
- Michael Allen
  - University of Exeter Medical School, University of Exeter, Exeter, UK
- Christopher Hyde
  - Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
|
26
|
Chou YB, Kale AU, Lanzetta P, Aslam T, Barratt J, Danese C, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. Current status and practical considerations of artificial intelligence use in screening and diagnosing retinal diseases: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:403-413. [PMID: 37326222 PMCID: PMC10399944 DOI: 10.1097/icu.0000000000000979] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has potential to shape modern healthcare ecosystems, including within ophthalmology. RECENT FINDINGS In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underlining the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models. SUMMARY The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions.
Affiliation(s)
- Yu-Bai Chou
  - Department of Ophthalmology, Taipei Veterans General Hospital
  - School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Aditya U. Kale
  - Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Paolo Lanzetta
  - Department of Medicine – Ophthalmology, University of Udine
  - Istituto Europeo di Microchirurgia Oculare, Udine, Italy
- Tariq Aslam
  - Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
- Jane Barratt
  - International Federation on Ageing, Toronto, Canada
- Carla Danese
  - Department of Medicine – Ophthalmology, University of Udine
  - Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
- Bora Eldem
  - Department of Ophthalmology, Hacettepe University, Ankara, Turkey
- Nicole Eter
  - Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
- Richard Gale
  - Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
- Jean-François Korobelnik
  - Service d’ophtalmologie, CHU Bordeaux
  - University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
- Igor Kozak
  - Moorfields Eye Hospital Centre, Abu Dhabi, UAE
- Xiaorong Li
  - Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
- Xiaoxin Li
  - Xiamen Eye Center, Xiamen University, Xiamen, China
- Anat Loewenstein
  - Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Paisan Ruamviboonsuk
  - Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
- Taiji Sakamoto
  - Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
- Daniel S.W. Ting
  - Singapore National Eye Center, Duke-NUS Medical School, Singapore
- Peter van Wijngaarden
  - Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- David Wong
  - Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
- Lihteh Wu
  - Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
|
27
|
Lee T, Rivera A, Brune M, Kundu A, Haystead A, Winslow L, Kundu R, Wisely CE, Robbins CB, Henao R, Grewal DS, Fekrat S. Convolutional Neural Network-Based Automated Quality Assessment of OCT and OCT Angiography Image Maps in Individuals With Neurodegenerative Disease. Transl Vis Sci Technol 2023; 12:30. [PMID: 37389540 PMCID: PMC10318591 DOI: 10.1167/tvst.12.6.30] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2023] [Accepted: 06/04/2023] [Indexed: 07/01/2023] Open
Abstract
Purpose To train and test convolutional neural networks (CNNs) to automate quality assessment of optical coherence tomography (OCT) and OCT angiography (OCTA) images in patients with neurodegenerative disease. Methods Patients with neurodegenerative disease were enrolled in the Duke Eye Multimodal Imaging in Neurodegenerative Disease Study. Image inputs were ganglion cell-inner plexiform layer (GC-IPL) thickness maps and fovea-centered 6-mm × 6-mm OCTA scans of the superficial capillary plexus (SCP). Two trained graders manually labeled all images for quality (good versus poor). Interrater reliability (IRR) of manual quality assessment was calculated for a subset of each image type. Images were split into train, validation, and test sets in a 70%/15%/15% split. An AlexNet-based CNN was trained using these labels and evaluated with the area under the receiver operating characteristic curve (AUC) and summaries of the confusion matrix. Results A total of 1465 GC-IPL thickness maps (1217 good and 248 poor quality) and 2689 OCTA scans of the SCP (1797 good and 892 poor quality) served as model inputs. The IRR of quality assessment agreement between the two graders was 97% and 90% for the GC-IPL maps and OCTA scans, respectively. The AlexNet-based CNNs trained to assess quality of the GC-IPL images and OCTA scans achieved AUCs of 0.990 and 0.832, respectively. Conclusions CNNs can be trained to accurately differentiate good- from poor-quality GC-IPL thickness maps and OCTA scans of the macular SCP. Translational Relevance Since good-quality retinal images are critical for the accurate assessment of microvasculature and structure, incorporating an automated image quality sorter may obviate the need for manual image review.
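The 70%/15%/15% train/validation/test partition described above can be sketched as follows. The seed is arbitrary and the study's actual partitioning code is not public; only the dataset size (1465 GC-IPL maps) is taken from the abstract.

```python
import random

def split_dataset(items, fractions=(0.70, 0.15, 0.15), seed=42):
    """Shuffle items and partition them into train/validation/test subsets."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    rng = random.Random(seed)       # fixed seed for a reproducible split
    shuffled = items[:]             # leave the caller's list untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * fractions[0])
    n_val = int(n * fractions[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# 1465 ids standing in for the GC-IPL thickness maps
train, val, test = split_dataset(list(range(1465)))
```

In practice the split would be stratified by patient so that both eyes of one participant never straddle the train/test boundary, a detail this sketch omits.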
Affiliation(s)
- Terry Lee
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Alexandra Rivera
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Pratt School of Engineering, Duke University, Durham, NC, USA
- Matthew Brune
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Pratt School of Engineering, Duke University, Durham, NC, USA
- Anita Kundu
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Alice Haystead
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Pratt School of Engineering, Duke University, Durham, NC, USA
- Lauren Winslow
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Pratt School of Engineering, Duke University, Durham, NC, USA
- Raj Kundu
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Pratt School of Engineering, Duke University, Durham, NC, USA
- C. Ellis Wisely
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Cason B. Robbins
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Ricardo Henao
  - Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
  - Department of Biostatistics & Bioinformatics, Duke University, Durham, NC, USA
- Dilraj S. Grewal
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Sharon Fekrat
  - iMIND Study Group, Duke University School of Medicine, Durham, NC, USA
  - Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
  - Department of Neurology, Duke University School of Medicine, Durham, NC, USA
|
28
|
Abstract
Artificial intelligence (AI) is an emerging technology in healthcare and holds the potential to disrupt many areas of medical care. In particular, disciplines that rely on medical imaging, including radiology as well as ophthalmology, are already confronted with a wide variety of AI implications. In ophthalmologic research, AI has demonstrated promising results, albeit limited to specific diseases and imaging tools, respectively. Yet implementation of AI in clinical routine is not widespread, owing to limited availability and heterogeneity in imaging techniques and AI methods. In order to describe the status quo, this narrative review provides a brief introduction to AI ("what the ophthalmologist needs to know"), followed by an overview of different AI-based applications in ophthalmology and a discussion of future challenges. Abbreviations: AMD, age-related macular degeneration; AI, artificial intelligence; AS-OCT, anterior segment OCT; CACS, coronary artery calcium score; CNN, convolutional neural network; DCNN, deep convolutional neural network; DR, diabetic retinopathy; ML, machine learning; OCT, optical coherence tomography; ROP, retinopathy of prematurity; SVM, support vector machine; TAO, thyroid-associated ophthalmopathy.
Affiliation(s)
- Robert P Reimer
  - Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Alexander C Rokohl
  - Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Liliana Caldeira
  - Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Ludwig M Heindl
  - Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Nils Große Hokamp
  - Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
|
29
|
Wen J, Liu D, Wu Q, Zhao L, Iao WC, Lin H. Retinal image‐based artificial intelligence in detecting and predicting kidney diseases: Current advances and future perspectives. VIEW 2023. [DOI: 10.1002/viw.20220070] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/22/2023] Open
Affiliation(s)
- Jingyi Wen
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Dong Liu
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Qianni Wu
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Lanqin Zhao
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Wai Cheng Iao
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Haotian Lin
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
  - Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
|
30
|
Grzybowski A, Singhanetr P, Nanegrungsunk O, Ruamviboonsuk P. Artificial Intelligence for Diabetic Retinopathy Screening Using Color Retinal Photographs: From Development to Deployment. Ophthalmol Ther 2023. [PMID: 36862308 DOI: 10.1007/s40123-023-00691-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Accepted: 02/14/2023] [Indexed: 03/03/2023] Open
Abstract
Diabetic retinopathy (DR), a leading cause of preventable blindness, is expected to remain a growing health burden worldwide. Screening to detect early sight-threatening lesions of DR can reduce the burden of vision loss; nevertheless, the process requires intensive manual labor and extensive resources to accommodate the increasing number of patients with diabetes. Artificial intelligence (AI) has been shown to be an effective tool which can potentially lower the burden of DR screening and vision loss. In this article, we review the use of AI for DR screening on color retinal photographs in different phases of application, ranging from development to deployment. Early studies of machine learning (ML)-based algorithms using feature extraction to detect DR achieved high sensitivity but relatively lower specificity. Robust sensitivity and specificity were achieved with the application of deep learning (DL), although ML is still used in some tasks. Public datasets were utilized in retrospective validations of the developmental phases in most algorithms, which require a large number of photographs. Large prospective clinical validation studies led to the approval of DL for autonomous screening of DR, although the semi-autonomous approach may be preferable in some real-world settings. There have been few reports on real-world implementations of DL for DR screening. It is possible that AI may improve some real-world indicators for eye care in DR, such as increased screening uptake and referral adherence, but this has not been proven. The challenges in deployment may include workflow issues, such as mydriasis to lower ungradable cases; technical issues, such as integration into electronic health record systems and into existing camera systems; ethical issues, such as data privacy and security; acceptance by personnel and patients; and health-economic issues, such as the need to conduct health-economic evaluations of AI use in the context of each country. The deployment of AI for DR screening should follow the governance model for AI in healthcare, which outlines four main components: fairness, transparency, trustworthiness, and accountability.
|
31
|
Iao WC, Zhang W, Wang X, Wu Y, Lin D, Lin H. Deep Learning Algorithms for Screening and Diagnosis of Systemic Diseases Based on Ophthalmic Manifestations: A Systematic Review. Diagnostics (Basel) 2023; 13:diagnostics13050900. [PMID: 36900043 PMCID: PMC10001234 DOI: 10.3390/diagnostics13050900] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 02/16/2023] [Accepted: 02/18/2023] [Indexed: 03/06/2023] Open
Abstract
Deep learning (DL) is the new high-profile technology in medical artificial intelligence (AI) for building screening and diagnostic algorithms for various diseases. The eye provides a window for observing neurovascular pathophysiological changes. Previous studies have proposed that ocular manifestations indicate systemic conditions, revealing a new route in disease screening and management. Multiple DL models have been developed for identifying systemic diseases based on ocular data. However, the methods and results varied immensely across studies. This systematic review aims to summarize the existing studies and provide an overview of the present and future aspects of DL-based algorithms for screening systemic diseases based on ophthalmic examinations. We performed a thorough search in PubMed®, Embase, and Web of Science for English-language articles published until August 2022. Among the 2873 articles collected, 62 were included for analysis and quality assessment. The selected studies mainly utilized eye appearance, retinal data, and eye movements as model input and covered a wide range of systemic diseases such as cardiovascular diseases, neurodegenerative diseases, and systemic health features. Despite the decent performance reported, most models lack disease specificity and public generalizability for real-world application. This review summarizes the pros and cons and discusses the prospect of implementing AI based on ocular data in real-world clinical scenarios.
Affiliation(s)
- Wai Cheng Iao, Weixing Zhang, Xun Wang, Yuxuan Wu, Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Haotian Lin (corresponding author): State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510060, China
|
32
|
Khalili Pour E, Rezaee K, Azimi H, Mirshahvalad SM, Jafari B, Fadakar K, Faghihi H, Mirshahi A, Ghassemi F, Ebrahimiadib N, Mirghorbani M, Bazvand F, Riazi-Esfahani H, Riazi Esfahani M. Automated machine learning-based classification of proliferative and non-proliferative diabetic retinopathy using optical coherence tomography angiography vascular density maps. Graefes Arch Clin Exp Ophthalmol 2023; 261:391-399. [PMID: 36050474] [DOI: 10.1007/s00417-022-05818-z]
Abstract
PURPOSE The study aims to classify eyes with proliferative diabetic retinopathy (PDR) and non-proliferative diabetic retinopathy (NPDR) based on optical coherence tomography angiography (OCTA) vascular density maps using a supervised machine learning algorithm. METHODS OCTA vascular density maps (at the superficial capillary plexus (SCP), deep capillary plexus (DCP), and total retina (R) levels) of 148 eyes from 78 patients with diabetic retinopathy (45 PDR and 103 NPDR) were used to classify the images into NPDR and PDR groups based on a supervised machine learning algorithm, a support vector machine (SVM) classifier optimized by a genetic evolutionary algorithm. RESULTS The implemented algorithm, in three different models, reached up to 85% accuracy in classifying PDR and NPDR at all three levels of vascular density maps. The deep retinal layer vascular density map demonstrated the best performance, with 90% accuracy in discriminating between PDR and NPDR. CONCLUSIONS This study on a limited number of patients with diabetic retinopathy demonstrated that a supervised machine learning method, the SVM, can be used to differentiate PDR and NPDR patients using OCTA vascular density maps.
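As a rough illustration of the classification setup this abstract describes, the sketch below trains an RBF-kernel SVM on synthetic stand-ins for OCTA vascular-density features. The feature layout, labels, and hyperparameters are all assumptions; the paper's genetic-algorithm tuning is not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: each of 148 eyes reduced to a 64-dim feature vector
# (e.g., per-sector vessel density from an SCP/DCP/total-retina map).
X = rng.normal(size=(148, 64))
y = rng.integers(0, 2, size=148)  # 0 = NPDR, 1 = PDR (invented labels)

# The paper tuned the SVM with a genetic algorithm; default RBF settings
# are used here purely as a placeholder.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

On real vascular density maps, the cross-validated accuracy, rather than a single train/test split, is what supports the reported 85-90% figures.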
Affiliation(s)
- Elias Khalili Pour, Seyed Mohammad Mirshahvalad, Behzad Jafari, Kaveh Fadakar, Hooshang Faghihi, Ahmad Mirshahi, Fariba Ghassemi, Nazanin Ebrahimiadib, Masoud Mirghorbani, Fatemeh Bazvand, Hamid Riazi-Esfahani: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Khosro Rezaee: Department of Biomedical Engineering, Meybod University, Meybod, Iran
- Hossein Azimi: Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
- Mohammad Riazi Esfahani: Department of Ophthalmology, Gavin Herbert Eye Institute, University of California Irvine, Irvine, CA, USA
|
33
|
Boubacar Goga A. Artificial Intelligence at the Service of Medical Imaging in the Detection of Breast Tumors. Artif Intell 2023. [DOI: 10.5772/intechopen.108739]
Abstract
Artificial intelligence is currently capable of imitating clinical reasoning in order to make a diagnosis, in particular that of breast cancer, thanks to the exponential increase in medical images. Artificial intelligence systems are used to assist doctors, not to replace them. Breast cancer is a cancerous tumor that can invade and destroy nearby tissue, so early and reliable detection of this disease is a great asset for the medical field. Medical imaging techniques are commonly used to diagnose the disease. Given the drawbacks of these techniques and the diagnostic errors doctors make owing to fatigue or inexperience, this work shows how artificial intelligence methods, in particular artificial neural networks (ANN), deep learning (DL), support vector machines (SVM), expert systems, and fuzzy logic, can be applied to breast imaging with the aim of improving the detection of this global scourge. The proposed system is composed of two essential steps: a tumor detection phase and a diagnostic phase, the latter deciding whether the tumor is benign or malignant.
|
34
|
De A. Statistical Considerations and Challenges for Pivotal Clinical Studies of Artificial Intelligence Medical Tests for Widespread Use: Opportunities for Inter-Disciplinary Collaboration. Stat Biopharm Res 2023. [DOI: 10.1080/19466315.2023.2169752]
Affiliation(s)
- Arkendra De: Agilent Technologies, 1005 Mark Avenue, Carpinteria, CA 93013 (Tel: 408-553-7111)
|
36
|
Wang H, Meng X, Tang Q, Hao Y, Luo Y, Li J. Development and Application of a Standardized Testset for an Artificial Intelligence Medical Device Intended for the Computer-Aided Diagnosis of Diabetic Retinopathy. J Healthc Eng 2023; 2023:7139560. [PMID: 36818382] [DOI: 10.1155/2023/7139560]
Abstract
Objective To explore a centralized approach to building test sets and assessing the performance of an artificial intelligence medical device (AIMD) intended for computer-aided diagnosis of diabetic retinopathy (DR). Methods A framework was proposed for data collection, data curation, and annotation. Deidentified colour fundus photographs were collected from 11 partner hospitals with raw labels. Photographs with sensitive information or authenticity issues were excluded during vetting. A team of annotators was recruited through qualification examinations and trained. The annotation process included three steps: initial annotation, review, and arbitration. The annotated data then composed a standardized test set, which was imported into algorithms under test (AUT) from different developers. The algorithm outputs were compared with the final annotation results (reference standard). Results The test set consists of 6327 digital colour fundus photographs. The final labels include the 5 stages of DR and non-DR, as well as other ocular diseases and photographs of unacceptable quality. The Fleiss kappa was 0.75 among the annotators, and the Cohen's kappa between raw labels and final labels was 0.5. Using this test set, five AUTs were tested and compared quantitatively; the metrics included accuracy, sensitivity, and specificity. The AUTs showed inhomogeneous capabilities in classifying different types of fundus photographs. Conclusions This article demonstrates a workflow to build standardized test sets and conduct algorithm testing of an AIMD for computer-aided diagnosis of diabetic retinopathy. It may provide a reference for developing technical standards that promote product verification and quality control, improving the comparability of products.
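The agreement and performance metrics named in this abstract can be illustrated on toy data. The label values below are invented; only the metric definitions (Cohen's kappa between label sets, and accuracy/sensitivity/specificity for one algorithm under test) reflect the workflow described.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Toy DR grades (0 = no DR, 1-2 = DR stages); values are invented.
raw_labels   = [0, 1, 2, 2, 0, 1, 0, 2]  # labels as received from hospitals
final_labels = [0, 1, 2, 1, 0, 1, 0, 2]  # reference standard after arbitration
kappa = cohen_kappa_score(raw_labels, final_labels)
print(f"Cohen's kappa: {kappa:.2f}")

# Binary referable/non-referable outputs of one algorithm under test (AUT)
truth = [1, 1, 1, 0, 0, 0, 1, 0]
aut   = [1, 1, 0, 0, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(truth, aut).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(accuracy, sensitivity, specificity)
```

For multi-rater agreement (the Fleiss kappa of 0.75), `statsmodels.stats.inter_rater` provides an analogous computation over a raters-by-categories table.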
|
37
|
Hatherley J, Sparrow R, Howard M. The Virtues of Interpretable Medical Artificial Intelligence. Camb Q Healthc Ethics 2022:1-10. [PMID: 36524245] [DOI: 10.1017/s0963180122000305]
Abstract
Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." However, recently, several authors have suggested that making AI more explainable or "interpretable" is likely to come at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this article, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to, and perhaps even harm, patients.
Affiliation(s)
- Joshua Hatherley, Robert Sparrow, Mark Howard: School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria 3168, Australia
|
38
|
Chomutare T, Tejedor M, Svenning TO, Marco-Ruiz L, Tayefi M, Lind K, Godtliebsen F, Moen A, Ismail L, Makhlysheva A, Ngo PD. Artificial Intelligence Implementation in Healthcare: A Theory-Based Scoping Review of Barriers and Facilitators. Int J Environ Res Public Health 2022; 19:16359. [PMID: 36498432] [PMCID: PMC9738234] [DOI: 10.3390/ijerph192316359]
Abstract
There is a large proliferation of complex data-driven artificial intelligence (AI) applications in many aspects of our daily lives, but their implementation in healthcare is still limited. This scoping review takes a theoretical approach to examine the barriers and facilitators based on empirical data from existing implementations. We searched the major databases of relevant scientific publications for articles related to AI in clinical settings, published between 2015 and 2021. Based on the theoretical constructs of the Consolidated Framework for Implementation Research (CFIR), we used a deductive, followed by an inductive, approach to extract facilitators and barriers. After screening 2784 studies, 19 studies were included in this review. Most of the cited facilitators were related to engagement with and management of the implementation process, while the most cited barriers dealt with the intervention's generalizability and interoperability with existing systems, as well as the inner setting's data quality and availability. We noted per-study imbalances related to the reporting of the theoretical domains. Our findings suggest a greater need for implementation science expertise in AI implementation projects, to improve both the implementation process and the quality of scientific reporting.
Affiliation(s)
- Taridzo Chomutare (corresponding author): Norwegian Centre for E-Health Research, 9019 Tromsø, Norway
- Miguel Tejedor, Maryam Tayefi, Karianne Lind: Norwegian Centre for E-Health Research, 9019 Tromsø, Norway
- Fred Godtliebsen: Norwegian Centre for E-Health Research, 9019 Tromsø, Norway; Department of Mathematics and Statistics, Faculty of Science and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Anne Moen: Norwegian Centre for E-Health Research, 9019 Tromsø, Norway; Institute for Health and Society, Faculty of Medicine, University of Oslo, 0318 Oslo, Norway
- Leila Ismail: Department of Computer Science and Software Engineering, College of Information Technology, United Arab Emirates University, Al Ain 15551, United Arab Emirates; National Water and Energy Center, United Arab Emirates University, Al Ain 15551, United Arab Emirates; School of Computing and Information Systems, Faculty of Engineering and Information Technology, The University of Melbourne, Parkville, VIC 3010, Australia
|
39
|
Liopyris K, Gregoriou S, Dias J, Stratigos AJ. Artificial Intelligence in Dermatology: Challenges and Perspectives. Dermatol Ther (Heidelb) 2022; 12:2637-2651. [PMID: 36306100] [PMCID: PMC9674813] [DOI: 10.1007/s13555-022-00833-8]
Abstract
Artificial intelligence (AI) based on machine learning and convolutional neural networks (CNNs) is rapidly becoming a realistic prospect in dermatology. Non-melanoma skin cancer is the most common cancer worldwide, and melanoma is one of the deadliest forms of cancer. Dermoscopy has improved physicians' diagnostic accuracy for skin cancer recognition, but unfortunately it remains comparatively low. AI could provide invaluable aid in the early evaluation and diagnosis of skin cancer. In the last decade, there has been a breakthrough in new research and publications in the field of AI. Studies have shown that CNN algorithms can classify skin lesions from dermoscopic images with superior or at least equivalent performance compared to clinicians. Even though AI algorithms have shown very promising results for the diagnosis of skin cancer in reader studies, their generalizability and applicability in everyday clinical practice remain elusive. Herein we attempt to summarize the potential pitfalls and challenges of AI that were underlined in reader studies and pinpoint strategies to overcome limitations in future studies. Finally, we analyze the advantages and opportunities that lie ahead for a better future for dermatology and patients, with the potential use of AI in our practices.
Affiliation(s)
- Konstantinos Liopyris: 1st Department of Dermatology-Venereology, Andreas Sygros Hospital, National and Kapodistrian University of Athens, 5 Ionos Dragoumi Str, 16121, Athens, Greece; Dermatology Department, Memorial Sloan Kettering Cancer Center, New York, NY, 10021, USA
- Stamatios Gregoriou, Julia Dias, Alexandros J Stratigos: 1st Department of Dermatology-Venereology, Andreas Sygros Hospital, National and Kapodistrian University of Athens, 5 Ionos Dragoumi Str, 16121, Athens, Greece
|
40
|
Mehra AA, Softing A, Guner MK, Hodge DO, Barkmeier AJ. Diabetic Retinopathy Telemedicine Outcomes With Artificial Intelligence-Based Image Analysis, Reflex Dilation, and Image Overread. Am J Ophthalmol 2022; 244:125-132. [PMID: 35970206] [DOI: 10.1016/j.ajo.2022.08.008]
Abstract
PURPOSE To examine real-world telemedicine outcomes of diabetic retinopathy (DR) screening with artificial intelligence (AI)-based image analysis, reflex dilation, and secondary image overread in a primary care setting. DESIGN Validity and reliability analysis. METHODS Single institution review of 1052 consecutive adult patients who received diabetic retinopathy photoscreening in the primary care setting over an 18-month period. Nonmydriatic fundus photographs were acquired and analyzed by the IDx-DR AI-based system. When nonmydriatic images were ungradable, reflex dilation (1% tropicamide) and mydriatic photography were performed for repeat AI-based analysis. Manual overread was performed on all images. Patient demographics, clinical characteristics, and screening outcomes were recorded. RESULTS A total of 965 of 1052 patients (91.7%) had AI-gradable fundus photographs: 580 had gradable nonmydriatic imaging (55.1%) and 440 of 472 patients with ungradable nonmydriatic photographs had reflex dilation (93.2%). One hundred thirty-eight of 965 patients (14.3%) were AI-graded as "positive" (greater than mild NPDR) and 827 of 965 were "negative" (85.7%), with 100% sensitivity (95% CI 90.8-100%), 89.2% specificity (95% CI 87.0-91.1%), 27.5% positive predictive value (95% CI 24.0-31.4%), and 100% negative predictive value (95% CI 99.6-100%) compared with manual overread assessment of greater than mild NPDR requiring further evaluation with a comprehensive dilated examination. Image gradeability was inversely related to patient age: 93.5% gradable (61.9% nonmydriatic) for patients aged <70 years vs 85.3% (31.0% nonmydriatic) for patients aged 70+ years (P < .001). CONCLUSION Incorporation of AI-based image analysis into real-world primary care diabetic retinopathy screening yielded no false negative results and offered excellent image gradeability within a protocol combining nonmydriatic fundus photography and pharmacologic dilation, as needed. Image gradeability was lower with increasing patient age.
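The headline statistics above can be reproduced from a 2x2 table. The split of the 138 AI-positive eyes into 38 true positives and 100 false positives is inferred from the reported 27.5% PPV, not taken from the paper's raw data, so this is an illustrative reconstruction only.

```python
# 2x2 table of AI "greater than mild NPDR" calls vs. the manual overread.
# TP = 38 is inferred from 138 positives x 27.5% PPV; FN = 0 and the 827
# AI-negative patients come from the abstract; the exact split is assumed.
tp, fp = 38, 100
fn, tn = 0, 827

sensitivity = tp / (tp + fn)   # 1.0, since no false negatives were observed
specificity = tn / (tn + fp)   # ~0.892, matching the reported 89.2%
ppv = tp / (tp + fp)           # ~0.275, matching the reported 27.5%
npv = tn / (tn + fn)           # 1.0
print(sensitivity, round(specificity, 3), round(ppv, 3), npv)
```

That these assumed counts recover the reported specificity and PPV is a useful consistency check on the abstract's figures.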
Affiliation(s)
- Ankur A Mehra, Alaina Softing, Andrew J Barkmeier: Department of Ophthalmology, Mayo Clinic, Rochester, USA
- David O Hodge: Department of Quantitative Health Sciences, Mayo Clinic, Jacksonville, USA
|
41
|
Carrera-Escalé L, Benali A, Rathert AC, Martín-Pinardel R, Bernal-Morales C, Alé-Chilet A, Barraso M, Marín-Martinez S, Feu-Basilio S, Rosinés-Fonoll J, Hernandez T, Vilá I, Castro-Dominguez R, Oliva C, Vinagre I, Ortega E, Gimenez M, Vellido A, Romero E, Zarranz-Ventura J. Radiomics-Based Assessment of OCT Angiography Images for Diabetic Retinopathy Diagnosis. Ophthalmol Sci 2022; 3:100259. [PMID: 36578904] [PMCID: PMC9791596] [DOI: 10.1016/j.xops.2022.100259]
Abstract
Purpose To evaluate the diagnostic accuracy of machine learning (ML) techniques applied to radiomic features extracted from OCT and OCT angiography (OCTA) images for diabetes mellitus (DM), diabetic retinopathy (DR), and referable DR (R-DR) diagnosis. Design Cross-sectional analysis of a retinal image dataset from a previous prospective OCTA study (ClinicalTrials.gov NCT03422965). Participants Patients with type 1 DM and controls included in the progenitor study. Methods Radiomic features were extracted from fundus retinographies, OCT, and OCTA images of each study eye. Logistic regression, linear discriminant analysis, support vector classifier (SVC)-linear, SVC-radial basis function, and random forest models were created to evaluate their diagnostic accuracy for DM, DR, and R-DR diagnosis on all image types. Main Outcome Measures Mean and standard deviation of the area under the receiver operating characteristic curve (AUC) for each ML model and each individual and combined image type. Results A dataset of 726 eyes (439 individuals) was included. For DM diagnosis, the greatest AUC was observed for OCT (0.82, 0.03). For DR detection, the greatest AUC was observed for OCTA (0.77, 0.03), especially in the 3 × 3 mm superficial capillary plexus OCTA scan (0.76, 0.04). For R-DR diagnosis, the greatest AUC was observed for OCTA (0.87, 0.12) and the deep capillary plexus OCTA scan (0.86, 0.08). The addition of clinical variables (age, sex, etc.) improved most models' AUC for DM, DR, and R-DR diagnosis. The performance of the models was similar in unilateral and bilateral eye image datasets. Conclusions Radiomics extracted from OCT and OCTA images allow identification of patients with DM, DR, and R-DR using standard ML classifiers. OCT was the best test for DM diagnosis and OCTA for DR and R-DR diagnosis, and the addition of clinical variables improved most models. This pioneering study demonstrates that radiomics-based ML techniques applied to OCT and OCTA images may be an option for DR screening in patients with type 1 DM. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
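A hedged sketch of the model-comparison design described above: the five classifier families named in the Methods, each scored by cross-validated AUC. The features here are synthetic placeholders; the study's radiomics extraction pipeline is not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))  # 200 eyes x 30 radiomic-style features (synthetic)
y = (X[:, 0] + rng.normal(scale=1.0, size=200) > 0).astype(int)  # toy R-DR label

# The five model families compared in the study, with default settings
# standing in for whatever tuning the authors used.
models = {
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "SVC-linear": SVC(kernel="linear"),
    "SVC-rbf": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
aucs = {}
for name, model in models.items():
    aucs[name] = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC {aucs[name]:.2f}")
```

Reporting the mean and standard deviation of the per-fold AUCs, as the abstract does, guards against a single lucky train/test split inflating the result.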
Key Words
- AI, artificial intelligence
- AUC, area under the curve
- Artificial intelligence
- DCP, deep capillary plexus
- DM, diabetes mellitus
- DR, diabetic retinopathy
- Diabetic retinopathy
- FR, fundus retinographies
- LDA, linear discriminant analysis
- LR, logistic regression
- ML, machine learning
- Machine learning
- OCT angiography
- OCTA, OCT angiography
- R-DR, referable DR
- RF, random forest
- Radiomics
- SCP, superficial capillary plexus
- SVC, support vector classifier
- rbf, radial basis function
Affiliation(s)
- Laura Carrera-Escalé
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Anass Benali
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Ann-Christin Rathert
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Ruben Martín-Pinardel
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain,August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
| | | | - Anibal Alé-Chilet
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Marina Barraso
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Sara Marín-Martinez
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Silvia Feu-Basilio
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Josep Rosinés-Fonoll
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Teresa Hernandez
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain; Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Irene Vilá
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain; Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Cristian Oliva
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain; Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Irene Vinagre
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain; Diabetes Unit, Hospital Clínic de Barcelona, Spain; Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
- Emilio Ortega
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain; Diabetes Unit, Hospital Clínic de Barcelona, Spain; Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
- Marga Gimenez
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain; Diabetes Unit, Hospital Clínic de Barcelona, Spain; Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
- Alfredo Vellido
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center; Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Enrique Romero
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center; Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Javier Zarranz-Ventura
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain; Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain; Diabetes Unit, Hospital Clínic de Barcelona, Spain; School of Medicine, Universitat de Barcelona, Spain. Correspondence: Javier Zarranz-Ventura, MD, PhD, C/ Sabino Arana 1, Barcelona 08028, Spain.
|
42
|
Bao XL, Sun YJ, Zhan X, Li GY. Orbital and eyelid diseases: The next breakthrough in artificial intelligence? Front Cell Dev Biol 2022; 10:1069248. [PMID: 36467418 PMCID: PMC9716028 DOI: 10.3389/fcell.2022.1069248] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 10/13/2022] [Accepted: 11/08/2022] [Indexed: 12/07/2023]
Abstract
Orbital and eyelid disorders affect normal visual functions and facial appearance, and precise oculoplastic and reconstructive surgeries are crucial. Artificial intelligence (AI) network models exhibit a remarkable ability to analyze large sets of medical images to locate lesions. Currently, AI-based technology can automatically diagnose and grade orbital and eyelid diseases, such as thyroid-associated ophthalmopathy (TAO), as well as measure eyelid morphological parameters based on external ocular photographs to assist surgical strategies. The various types of imaging data for orbital and eyelid diseases provide a large amount of training data for network models, which might be the next breakthrough in AI-related research. This paper retrospectively summarizes different imaging data aspects addressed in AI-related research on orbital and eyelid diseases, and discusses the advantages and limitations of this research field.
Affiliation(s)
- Xiao-Li Bao
- Department of Ophthalmology, Second Hospital of Jilin University, Changchun, China
- Ying-Jian Sun
- Department of Ophthalmology, Second Hospital of Jilin University, Changchun, China
- Xi Zhan
- Department of Engineering, The Army Engineering University of PLA, Nanjing, China
- Guang-Yu Li
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
|
43
|
Qian X, Jingying H, Xian S, Yuqing Z, Lili W, Baorui C, Wei G, Yefeng Z, Qiang Z, Chunyan C, Cheng B, Kai M, Yi Q. The effectiveness of artificial intelligence-based automated grading and training system in education of manual detection of diabetic retinopathy. Front Public Health 2022; 10:1025271. [PMID: 36419999 PMCID: PMC9678340 DOI: 10.3389/fpubh.2022.1025271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/22/2022] [Accepted: 10/18/2022] [Indexed: 11/09/2022]
Abstract
Background The purpose of this study is to develop an artificial intelligence (AI)-based automated diabetic retinopathy (DR) grading and training system from a real-world diabetic dataset of China and, in particular, to investigate its effectiveness as a learning tool for manual DR grading by medical students. Methods We developed an automated DR grading and training system equipped with an AI-driven diagnosis algorithm that highlights highly prognosis-related regions in the input image. Less experienced prospective physicians received pre- and post-training tests on the AI diagnosis platform, and changes in their diagnostic accuracy were evaluated. Results We randomly selected 8,063 cases diagnosed with DR and 7,925 non-DR fundus images from type 2 diabetes patients. The automated DR grading system we developed achieved accuracy, sensitivity/specificity, and AUC values of 0.965, 0.965/0.966, and 0.980 for moderate or worse DR (95% CI: 0.976-0.984). When the graders received assistance from the output of the AI system, the metrics improved to varying degrees. The automated DR grading system helped to improve the accuracy of human graders, i.e., junior residents and medical students, from 0.947 and 0.915 to 0.978 and 0.954, respectively. Conclusion The AI-based system demonstrated high diagnostic accuracy for the detection of DR on fundus images from real-world diabetic patients, and could be utilized as a training aid for trainees lacking formal instruction on DR management.
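For readers unfamiliar with the metrics reported above, sensitivity and specificity follow the standard confusion-matrix definitions. The snippet below is an illustration only, not code from the paper; the toy labels are made up:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary ground-truth labels y_true and binary predictions y_pred."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 4 images, one missed positive case and one false alarm.
sens, spec = sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 1])
# sens = 0.5, spec = 0.5
```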
Affiliation(s)
- Xu Qian
- Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China; Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China; Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China
- Han Jingying
- School of Basic Medical Sciences, Shandong University, Jinan, China
- Song Xian
- Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Zhao Yuqing
- Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Wu Lili
- Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Chu Baorui
- Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China
- Guo Wei
- Lunan Eye Hospital, Linyi, China
- Ma Kai
- Tencent Healthcare, Shenzhen, China
- Qu Yi
- Department of Geriatrics, Qilu Hospital of Shandong University, Jinan, China; Key Laboratory of Cardiovascular Proteomics of Shandong Province, Jinan, China; Jinan Clinical Research Center for Geriatric Medicine (202132001), Jinan, China. *Correspondence: Qu Yi
|
44
|
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023]
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines using human intelligence. AI refers to the technology of rendering human intelligence through computer programs. From healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common visual impairment and blindness diseases, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons. These models can learn the representations of data at multiple levels of abstraction. The Inception-v3 algorithm and transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural images (non-medical images) to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects in ophthalmology.
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
- Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
- Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
|
45
|
Playout C, Duval R, Boucher MC, Cheriet F. Focused Attention in Transformers for interpretable classification of retinal images. Med Image Anal 2022; 82:102608. [PMID: 36150271 DOI: 10.1016/j.media.2022.102608] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 12/15/2021] [Revised: 07/01/2022] [Accepted: 08/25/2022] [Indexed: 11/21/2022]
Abstract
Vision Transformers have recently emerged as a competitive architecture in image classification. The tremendous popularity of this model and its variants comes from its high performance and its ability to produce interpretable predictions. However, both of these characteristics remain to be assessed in depth on retinal images. This study proposes a thorough performance evaluation of several Transformers compared to traditional Convolutional Neural Network (CNN) models for retinal disease classification. Special attention is given to multi-modality imaging (fundus and OCT) and generalization to external data. In addition, we propose a novel mechanism to generate interpretable predictions via attribution maps. Existing attribution methods from Transformer models have the disadvantage of producing low-resolution heatmaps. Our contribution, called Focused Attention, uses iterative conditional patch resampling to tackle this issue. By means of a survey involving four retinal specialists, we validated both the superior interpretability of Vision Transformers compared to the attribution maps produced from CNNs and the relevance of Focused Attention as a lesion detector.
|
46
|
Abstract
Diabetic retinopathy is a frequent complication in diabetes and a leading cause of visual impairment. Regular eye screening is imperative to detect sight-threatening stages of diabetic retinopathy such as proliferative diabetic retinopathy and diabetic macular oedema in order to treat these before irreversible visual loss occurs. Screening is cost-effective and has been implemented in various countries in Europe and elsewhere. Along with optimised diabetes care, this has substantially reduced the risk of visual loss. Nevertheless, the growing number of patients with diabetes poses an increasing burden on healthcare systems and automated solutions are needed to alleviate the task of screening and improve diagnostic accuracy. Deep learning by convolutional neural networks is an optimised branch of artificial intelligence that is particularly well suited to automated image analysis. Pivotal studies have demonstrated high sensitivity and specificity for classifying advanced stages of diabetic retinopathy and identifying diabetic macular oedema in optical coherence tomography scans. Based on this, different algorithms have obtained regulatory approval for clinical use and have recently been implemented to some extent in a few countries. Handheld mobile devices are another promising option for self-monitoring, but so far they have not demonstrated comparable image quality to that of fundus photography using non-portable retinal cameras, which is the gold standard for diabetic retinopathy screening. Such technology has the potential to be integrated in telemedicine-based screening programmes, enabling self-captured retinal images to be transferred virtually to reading centres for analysis and planning of further steps. While emerging technologies have shown a lot of promise, clinical implementation has been sparse. Legal obstacles and difficulties in software integration may partly explain this, but it may also indicate that existing algorithms may not necessarily integrate well with national screening initiatives, which often differ substantially between countries.
Affiliation(s)
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark.
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark.
- Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark.
- Vestfold Hospital Trust, Tønsberg, Norway.
|
47
|
Pareja-Ríos A, Ceruso S, Romero-Aroca P, Bonaque-González S. A New Deep Learning Algorithm with Activation Mapping for Diabetic Retinopathy: Backtesting after 10 Years of Tele-Ophthalmology. J Clin Med 2022; 11:4945. [PMID: 36078875 PMCID: PMC9456446 DOI: 10.3390/jcm11174945] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/02/2022] [Revised: 08/17/2022] [Accepted: 08/22/2022] [Indexed: 11/16/2022]
Abstract
We report the development of a deep learning algorithm (AI) to detect signs of diabetic retinopathy (DR) from fundus images. For this, we use a ResNet-50 neural network at double resolution, with added Squeeze–Excitation blocks, pre-trained on ImageNet, and trained for 50 epochs using the Adam optimizer. The AI-based algorithm not only classifies an image as pathological or not but also detects and highlights the signs that allow DR to be identified. For development, we have used a database of about half a million images classified in a real clinical environment by family doctors (FDs), ophthalmologists, or both. The AI was able to detect more than 95% of cases worse than mild DR and had 70% fewer misclassifications of healthy cases than FDs. In addition, the AI was able to detect DR signs in 1258 patients before they were detected by FDs, representing 7.9% of the total number of DR patients detected by the FDs. These results suggest that the AI is at least comparable to the evaluation of FDs. We suggest that such signaling tools may be most useful as an aid to diagnosis rather than as a stand-alone AI.
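As a rough sketch of what a Squeeze–Excitation block computes (channel-wise reweighting of a feature map), the NumPy illustration below uses assumed shapes and weight names; it is not the authors' implementation:

```python
import numpy as np

def squeeze_excitation(feature_map, w1, w2):
    """Apply a squeeze-and-excitation gate to a (C, H, W) feature map.
    w1: (C, C//r) and w2: (C//r, C) are the learned bottleneck weights."""
    # Squeeze: global average pool each channel to a single descriptor.
    z = feature_map.mean(axis=(1, 2))                 # shape (C,)
    # Excitation: bottleneck MLP (ReLU then sigmoid) produces per-channel gates.
    s = np.maximum(z @ w1, 0.0)                       # shape (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))            # shape (C,), values in (0, 1)
    # Scale: reweight each channel by its gate.
    return feature_map * gate[:, None, None]
```

In a ResNet-50, such a block would typically sit at the end of each residual bottleneck, with the gated output feeding the skip connection.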
Affiliation(s)
- Alicia Pareja-Ríos
- Department of Ophthalmology, University Hospital of the Canary Islands, 38320 San Cristóbal de La Laguna, Spain
- Sabato Ceruso
- School of Engineering and Technology, University of La Laguna, 38200 San Cristóbal de La Laguna, Spain
- Pedro Romero-Aroca
- Ophthalmology Department, University Hospital Sant Joan, Institute of Health Research Pere Virgili (IISPV), Universitat Rovira & Virgili, 43002 Tarragona, Spain
- Sergio Bonaque-González
- Instituto de Astrofísica de Canarias, 38205 San Cristóbal de La Laguna, Spain
- Correspondence:
|
48
|
Zipori AB, Kerley CI, Klein A, Kenney RC. Real-World Translation of Artificial Intelligence in Neuro-Ophthalmology: The Challenges of Making an Artificial Intelligence System Applicable to Clinical Practice. J Neuroophthalmol 2022. [PMID: 35921610 DOI: 10.1097/WNO.0000000000001682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/25/2022]
|
49
|
Horie S, Ohno-Matsui K. Progress of Imaging in Diabetic Retinopathy-From the Past to the Present. Diagnostics (Basel) 2022; 12:1684. [PMID: 35885588 PMCID: PMC9319818 DOI: 10.3390/diagnostics12071684] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 05/25/2022] [Revised: 06/24/2022] [Accepted: 07/06/2022] [Indexed: 02/05/2023]
Abstract
Advancement of imaging technology in retinal diseases provides a more precise understanding of, and new insights into, disease pathologies. Diabetic retinopathy (DR) is one of the leading causes of sight-threatening retinal disease worldwide. Colour fundus photography and fluorescein angiography have long been the gold-standard methods for detecting retinal vascular pathology in this disease. One of the major advancements is macular observation by optical coherence tomography (OCT), which has dramatically improved diagnostic quality for macular edema in DR. OCT technology has also been applied to angiography (OCT angiography: OCTA), which enables retinal vascular imaging without venous dye injection. Similarly low-invasive, single blue-colour SLO images could be an alternative method for detecting non-perfused areas. Conventional optical photography has gradually been replaced by scanning laser ophthalmoscopy (SLO), which also makes it possible to produce spectacular ultra-widefield (UWF) images. Since the retinal vascular changes of DR are found throughout the retina up to the periphery, DR is one of the best targets for UWF imaging. Additionally, artificial intelligence (AI) has been applied to the automated diagnosis of DR, and AI-based DR management is one of the major topics in this field. This review looks back comprehensively on the progress of DR imaging from the past to the present.
Affiliation(s)
- Shintaro Horie
- Department of Advanced Ophthalmic Imaging, Tokyo Medical and Dental University, Tokyo 113-8519, Japan;
- Kyoko Ohno-Matsui
- Department of Ophthalmology and Visual Science, Tokyo Medical and Dental University, Tokyo 113-8519, Japan
- Correspondence: ; Tel.: +81-3-5803-5302
|
50
|
Usman M, Zia T, Tariq A. Analyzing Transfer Learning of Vision Transformers for Interpreting Chest Radiography. J Digit Imaging 2022; 35:1445-1462. [PMID: 35819537 PMCID: PMC9274969 DOI: 10.1007/s10278-022-00666-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/18/2021] [Revised: 05/28/2022] [Accepted: 06/03/2022] [Indexed: 12/01/2022]
Abstract
Limited availability of medical imaging datasets is a vital limitation when using "data hungry" deep learning to gain performance improvements. To deal with this issue, transfer learning has become a de facto standard, where a convolutional neural network (CNN) pre-trained on natural images (e.g., ImageNet) is fine-tuned on medical images. Meanwhile, pre-trained transformers, which are self-attention-based models, have become the de facto standard in natural language processing (NLP) and state of the art in image classification due to their powerful transfer learning abilities. Inspired by the success of transformers in NLP and image classification, large-scale transformers (such as the vision transformer) are trained on natural images. Based on these recent developments, this research aims to explore the efficacy of pre-trained natural-image transformers for medical images. Specifically, we analyze a pre-trained vision transformer on the CheXpert and pediatric pneumonia datasets, using standard CNN models including VGGNet and ResNet as baselines. By examining the acquired representations and results, we discover that transfer learning from the pre-trained vision transformer shows improved results compared to pre-trained CNNs, which demonstrates the greater transfer ability of transformers in medical imaging.
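The transfer-learning recipe discussed above — reuse a frozen pre-trained backbone and train only a small head on its features — can be sketched as a "linear probe". The NumPy logistic-regression head below is illustrative only; the feature vectors are assumed to come from some frozen backbone, and none of the names are from the paper:

```python
import numpy as np

def train_linear_probe(features, labels, lr=0.5, epochs=500):
    """Fit a logistic-regression head on frozen backbone features.
    features: (n, d) array; labels: (n,) array of 0/1 class labels."""
    n, d = features.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # sigmoid
        grad = probs - labels           # d(loss)/d(logits) for cross-entropy
        w -= lr * features.T @ grad / n # gradient step on the head weights only
        b -= lr * grad.mean()
    return w, b

# Toy 1-D "features" standing in for frozen backbone outputs:
w, b = train_linear_probe(np.array([[-2.0], [-1.0], [1.0], [2.0]]),
                          np.array([0.0, 0.0, 1.0, 1.0]))
```

Fine-tuning, as analyzed in the paper, additionally updates the backbone weights; the probe above is the cheapest point on that spectrum.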
Affiliation(s)
- Mohammad Usman
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tehseen Zia
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Medical Imaging and Diagnostic Center, National Center for Artificial Intelligence, Islamabad, Pakistan
- Ali Tariq
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
|