1.
Nuliqiman M, Xu M, Sun Y, Cao J, Chen P, Gao Q, Xu P, Ye J. Artificial Intelligence in Ophthalmic Surgery: Current Applications and Expectations. Clin Ophthalmol 2023; 17:3499-3511. [PMID: 38026589 PMCID: PMC10674717 DOI: 10.2147/opth.s438127]
Abstract
Artificial intelligence (AI) has found rapidly growing applications in ophthalmology, achieving robust recognition and classification across most ocular diseases. Ophthalmic surgery is among the most delicate microsurgeries, demanding a high degree of precision and stability from surgeons. The growing demand for AI-assisted ophthalmic surgery will be an important factor in accelerating precision medicine. In clinical practice, it is valuable to review and update the considerable evidence on current AI technologies applied to ophthalmic surgery, which bear on both the progression and the innovation of precision medicine. Bibliographic databases including PubMed and Google Scholar were searched using keywords such as "ophthalmic surgery", "surgical selection", "candidate screening", and "robot-assisted surgery" to find articles about AI technology published from 2018 to 2023. Apart from editorials and letters to the editor, all types of studies were considered. In this paper, we provide an up-to-date review of artificial intelligence in eye surgery, with a specific focus on its application to candidate screening, surgery selection, postoperative prediction, and real-time intraoperative guidance.
Affiliation(s)
- Maimaiti Nuliqiman, Mingyu Xu, Yiming Sun, Jing Cao, Pengjie Chen, Qi Gao, Peifang Xu, Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People’s Republic of China
2.
Arad D, Rosenfeld A, Magnezi R. Factors contributing to preventing operating room "never events": a machine learning analysis. Patient Saf Surg 2023; 17:6. [PMID: 37004090 PMCID: PMC10067209 DOI: 10.1186/s13037-023-00356-x]
Abstract
BACKGROUND A surgical "Never Event" is a preventable error occurring immediately before, during, or immediately following surgery. Various factors contribute to the occurrence of major Never Events, but little is known about their quantified risk in relation to a surgery's characteristics. Our study uses machine learning to reveal and quantify risk factors with the goal of improving patient safety and quality of care. METHODS We used data from 9,234 observations on safety standards and 101 root-cause analyses of actual major Never Events, including wrong site surgery and retained foreign item, and three random forest supervised machine learning models to identify risk factors. Using a standard 10-fold cross-validation technique, we evaluated the models' metrics, measuring their impact on the occurrence of the two types of Never Events through Gini impurity. RESULTS We identified 24 contributing factors in six surgical departments: two had an impact of > 900% in Urology, Orthopedics, and General Surgery; six had an impact of 0-900% in Gynecology, Urology, and Cardiology; and 17 had an impact of < 0%. Combining factors revealed 15-20 pairs with an increased probability in five departments: Gynecology, 875-1900%; Urology, 1900-2600%; Cardiology, 833-1500%; Orthopedics, 1825-4225%; and General Surgery, 2720-13,600%. Five factors affected the occurrence of wrong site surgery (-60.96 to 503.92%) and five affected retained foreign body (-74.65 to 151.43%): two nurses (66.26-87.92%), surgery length < 1 h (85.56-122.91%), and surgery length 1-2 h (-60.96 to 85.56%). CONCLUSIONS Using machine learning, we could quantify the risk factors' potential impact on wrong site surgeries and retained foreign items in relation to a surgery's characteristics, suggesting that safety standards should be adjusted to a surgery's characteristics based on risk assessment in each operating room. TRIAL REGISTRATION NUMBER MOH 032-2019.
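The random-forest ranking described in METHODS rests on Gini impurity. A minimal sketch in plain Python shows the quantity a forest aggregates per feature to score a risk factor; the event labels and the "two nurses" factor below are hypothetical toy data, not figures from the study:

```python
def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum over classes of p_k^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def gini_decrease(labels, mask):
    """Weighted impurity decrease from splitting `labels` by boolean `mask`.
    A random forest sums this quantity per feature, across all trees and
    splits, to rank contributing factors."""
    left = [y for y, m in zip(labels, mask) if m]
    right = [y for y, m in zip(labels, mask) if not m]
    n = len(labels)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(labels) - weighted

# Toy observations: 1 = Never Event occurred, 0 = no event; `factor` marks
# observations where a hypothetical factor (e.g. "two nurses") holds.
events = [1, 1, 1, 0, 0, 0, 0, 0]
factor = [True, True, True, True, False, False, False, False]
print(gini(events))                  # impurity of the unsplit node (0.46875)
print(gini_decrease(events, factor)) # impurity removed by this factor (0.28125)
```

A factor with a larger summed decrease separates event from non-event observations more cleanly, which is how the study's contributing factors could be ranked.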
Affiliation(s)
- Dana Arad
- Department of Management, Health Management Program, Faculty of Sciences, Bar-Ilan University, Ramat Gan, Israel
- Patient Safety Division, Ministry of Health, Ramat Gan, Israel
- Ariel Rosenfeld
- Department of Information Science, Bar-Ilan University, Ramat Gan, Israel
- Racheli Magnezi
- Department of Management, Health Management Program, Faculty of Sciences, Bar-Ilan University, Ramat Gan, Israel
3.
Xu Z, Xu J, Shi C, Xu W, Jin X, Han W, Jin K, Grzybowski A, Yao K. Artificial Intelligence for Anterior Segment Diseases: A Review of Potential Developments and Clinical Applications. Ophthalmol Ther 2023; 12:1439-1455. [PMID: 36884203 PMCID: PMC10164195 DOI: 10.1007/s40123-023-00690-4]
Abstract
Artificial intelligence (AI) technology is promising in the field of healthcare. With the developments of big data and image-based analysis, AI shows potential value in ophthalmology applications. Recently, machine learning and deep learning algorithms have made significant progress. Emerging evidence has demonstrated the capability of AI in the diagnosis and management of anterior segment diseases. In this review, we provide an overview of AI applications and potential future applications in anterior segment diseases, focusing on cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction.
Affiliation(s)
- Zhe Xu, Jia Xu, Ce Shi, Wen Xu, Xiuming Jin, Wei Han, Kai Jin, Ke Yao
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
4.
Zhang J, Wu J, Qiu Y, Song A, Li W, Li X, Liu Y. Intelligent speech technologies for transcription, disease diagnosis, and medical equipment interactive control in smart hospitals: A review. Comput Biol Med 2023; 153:106517. [PMID: 36623438 PMCID: PMC9814440 DOI: 10.1016/j.compbiomed.2022.106517]
Abstract
The growth and aging of the world population have driven a shortage of medical resources in recent years, especially during the COVID-19 pandemic. Fortunately, the rapid development of robotics and artificial intelligence technologies is helping to meet the challenges facing the healthcare field. Among these technologies, intelligent speech technology (IST) has served doctors and patients by improving the efficiency of medical work and alleviating the medical burden. However, problems such as noise interference in complex medical scenarios and pronunciation differences between patients and healthy people hamper the broad application of IST in hospitals. In recent years, technologies such as machine learning have developed rapidly in intelligent speech recognition and are expected to solve these problems. This paper first introduces IST's procedure and system architecture and analyzes its application in medical scenarios. Second, we review existing IST applications in smart hospitals in detail, including electronic medical documentation, disease diagnosis and evaluation, and human-medical equipment interaction. In addition, we elaborate on an application case of IST in the early recognition, diagnosis, rehabilitation training, evaluation, and daily care of stroke patients. Finally, we discuss IST's limitations, challenges, and future directions in the medical field. Furthermore, we propose a novel medical voice analysis system architecture that employs active hardware, active software, and human-computer interaction to realize intelligent and evolvable speech recognition. This comprehensive review and the proposed architecture offer directions for future studies on IST and its applications in smart hospitals.
Affiliation(s)
- Jun Zhang (corresponding author), Jingyue Wu, Yiyi Qiu, Aiguo Song
- The State Key Laboratory of Bioelectronics, School of Instrument Science and Engineering, Southeast University, Nanjing, 210096, China
- Weifeng Li, Xin Li
- Department of Emergency Medicine, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, 510080, China
- Yecheng Liu
- Emergency Department, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, 100730, China
5.
Leroy G, Kauchak D, Kloehn N. Incidence and Impact of Missing Functional Elements on Information Comprehension using Audio and Text. AMIA Annu Symp Proc 2022; 2021:697-706. [PMID: 35309000 PMCID: PMC8861712]
Abstract
Audio is increasingly used to communicate health information. Initial evaluations have shown it to be an effective medium with many features that can be optimized. This study focuses on missing functional elements: words that relate concepts in a sentence but are often excluded for brevity. They are not easily recognizable without linguistics expertise but can be detected algorithmically. Two studies showed that they are common and affect comprehension. A corpus statistics study with medical text (Cochrane sentences, N=44,488) and general text (English and Simple English Wikipedia sentences, N=318,056 each) showed that functional elements were missing in 20-30% of sentences. A user study with Cochrane (N=50) and Wikipedia (N=50) paragraphs in text and audio format showed that more missing functional elements increased the perceived difficulty of reading text (an effect less pronounced with audio) and increased the actual difficulty of both written and audio information, with less information recalled as more elements were missing.
6.
Tognetto D, Giglio R, Vinciguerra AL, Milan S, Rejdak R, Rejdak M, Zaluska-Ogryzek K, Zweifel S, Toro MD. Artificial intelligence applications and cataract management: A systematic review. Surv Ophthalmol 2021; 67:817-829. [PMID: 34606818 DOI: 10.1016/j.survophthal.2021.09.004]
Abstract
Artificial intelligence (AI)-based applications have the potential to improve the quality and efficiency of patient care in different fields, including cataract management. A systematic review of the different applications of AI-based software across all aspects of cataract patient management, from diagnosis to follow-up, was carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. All selected articles were analyzed to assess the level of evidence according to the Oxford Centre for Evidence-Based Medicine 2011 guidelines and the quality of evidence according to the Grading of Recommendations Assessment, Development and Evaluation system. Of the articles analyzed, 49 met the inclusion criteria. No data synthesis was possible because of the heterogeneity of the available data and the design of the available studies. AI-driven diagnosis appeared comparable to, and in selected cases even exceeded, the accuracy of experienced clinicians in classifying disease, supporting operating room scheduling, and managing intraoperative and postoperative complications. Considering the heterogeneity of the data analyzed, however, further randomized controlled trials assessing the efficacy and safety of AI applications in cataract management are highly warranted.
Affiliation(s)
- Daniele Tognetto, Rosa Giglio, Alex Lucia Vinciguerra, Serena Milan
- Eye Clinic, Department of Medicine, Surgery and Health Sciences, University of Trieste, Trieste, Italy
- Robert Rejdak
- Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, Lublin, Poland
- Mario Damiano Toro
- Department of Ophthalmology, University of Zurich, Zurich; Department of Medical Sciences, Collegium Medicum, Cardinal Stefan Wyszyński University, Warsaw, Poland
7.
Gouda P, Ganni E, Chung P, Randhawa VK, Marquis-Gravel G, Avram R, Ezekowitz JA, Sharma A. Feasibility of Incorporating Voice Technology and Virtual Assistants in Cardiovascular Care and Clinical Trials. Curr Cardiovasc Risk Rep 2021; 15:13. [PMID: 34178205 PMCID: PMC8214838 DOI: 10.1007/s12170-021-00673-9]
Abstract
PURPOSE OF REVIEW With the rising cost of cardiovascular clinical trials, there is interest in determining whether new technologies can increase cost effectiveness. This review focuses on current and potential uses of voice-based technologies, including virtual assistants, in cardiovascular clinical trials. RECENT FINDINGS Numerous potential uses for voice-based technologies have begun to emerge within cardiovascular medicine. Voice biomarkers, subtle changes in speech parameters, have emerged as a potential tool to diagnose and monitor many cardiovascular conditions, including heart failure, coronary artery disease, and pulmonary hypertension. With the increasing use of virtual assistants, numerous pilot studies have examined whether these devices can supplement initiatives to promote transitional care, physical activity, smoking cessation, and medication adherence, with promising initial results. Additionally, these devices have demonstrated the ability to streamline data collection by administering questionnaires accurately and reliably. Several challenges must be addressed before wider implementation of these technologies, including respecting patient privacy, maintaining regulatory standards, gaining acceptance by patients and healthcare providers, validating voice-based biomarkers and endpoints, and ensuring accessibility. SUMMARY Voice technology represents a novel and promising tool for cardiovascular clinical trials; however, research is still required to understand how it can best be harnessed.
Affiliation(s)
- Pishoy Gouda
- Division of Cardiology, University of Alberta, Edmonton, Alberta, Canada
- Elie Ganni, Peter Chung
- DREAM-CV Lab, McGill University Health Centre, McGill University, Montreal, Quebec, Canada
- Varinder Kaur Randhawa
- Department of Cardiovascular Medicine, Kaufman Center for Heart Failure and Recovery, Heart, Thoracic, and Vascular Institute, Cleveland Clinic, Cleveland, OH, USA
- Robert Avram
- Montreal Heart Institute, Université de Montréal, Montreal, Quebec, Canada; Division of Cardiology, Department of Medicine, Ottawa Heart Institute, University of Ottawa, Ottawa, Ontario, Canada
- Justin A. Ezekowitz
- Division of Cardiology, University of Alberta, Edmonton, Alberta, Canada; Canadian VIGOUR Centre, University of Alberta, Edmonton, Alberta, Canada
- Abhinav Sharma
- DREAM-CV Lab, McGill University Health Centre, McGill University, Montreal, Quebec, Canada; Division of Cardiology, McGill University Health Centre, McGill University, 1001 Decarie Blvd, Montreal, Quebec, H4A 3J1, Canada
8.
Yoo TK, Ryu IH, Kim JK, Lee IS, Kim JS, Kim HK, Choi JY. Deep learning can generate traditional retinal fundus photographs using ultra-widefield images via generative adversarial networks. Comput Methods Programs Biomed 2020; 197:105761. [PMID: 32961385 DOI: 10.1016/j.cmpb.2020.105761]
Abstract
BACKGROUND AND OBJECTIVE Retinal imaging has two major modalities, traditional fundus photography (TFP) and ultra-widefield fundus photography (UWFP). This study demonstrates the feasibility of a state-of-the-art deep learning-based domain transfer from UWFP to TFP. METHODS A cycle-consistent generative adversarial network (CycleGAN) was used to automatically translate UWFP images to the TFP domain. The model was based on an unpaired dataset of 451 anonymized UWFP and 745 TFP images. To apply CycleGAN to an independent dataset, we randomly divided the data into training (90%) and test (10%) sets. After automated image registration and masking of dark frames, the generator and discriminator networks were trained. An additional twelve publicly available paired TFP and UWFP images were used to calculate intensity histograms and structural similarity (SSIM) indices. RESULTS All UWFP images were successfully translated into TFP-style images by CycleGAN, and the main structural information of the retina and optic nerve was retained. The model did not generate fake features in the output images. Average histograms demonstrated that the intensity distribution of the generated output images matched the ground truth images well, with an average SSIM of 0.802. CONCLUSIONS Our approach enables automated synthesis of TFP images directly from UWFP without a manual pre-conditioning process. The generated TFP images may be useful for clinicians investigating the posterior pole and for researchers integrating TFP and UWFP databases. The approach is also likely to save scan time and to be more cost-effective for patients by avoiding additional examinations needed for an accurate diagnosis.
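The SSIM index used above to compare generated and ground-truth images can be illustrated with a simplified sketch: a global SSIM computed once over whole flattened images, whereas production implementations typically apply a sliding Gaussian window and average the local scores. The pixel values below are hypothetical toy data, not from the study.

```python
def ssim_global(x, y, data_range=255.0):
    """Simplified structural similarity (SSIM) between two images given as
    flat lists of pixel intensities. Combines luminance (means), contrast
    (variances), and structure (covariance) with the standard stabilizers
    C1 = (0.01*L)^2 and C2 = (0.03*L)^2."""
    n = len(x)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical images score 1.0; a brightness-shifted copy scores lower,
# which is the sense in which the study's average of 0.802 indicates
# close but imperfect agreement with the ground truth.
img = [10, 50, 90, 130, 170, 210, 250, 30]
shifted = [p + 40 for p in img]
print(ssim_global(img, img))      # 1.0
print(ssim_global(img, shifted))  # < 1.0
```

A windowed SSIM simply applies this computation to many small patches and averages the results, so local structural errors are not washed out by global statistics.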
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu, Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
- Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, Ohio, United States