1
Spencer SKR, Ireland PA, Braden J, Hepschke JL, Lin M, Zhang H, Channell J, Razavi H, Turner AW, Coroneo MT, Shulruf B, Agar A. A Systematic Review of Ophthalmology Education in Medical Schools: The Global Decline. Ophthalmology 2024;131:855-863. [PMID: 38185285] [DOI: 10.1016/j.ophtha.2024.01.005]
Abstract
TOPIC: This systematic review examined geographical and temporal trends in medical school ophthalmology education in relation to course and student outcomes.
CLINICAL RELEVANCE: Evidence of a decline in ophthalmology teaching in medical schools is mounting, raising concern about the adequacy of eye knowledge across the rest of the medical profession.
METHODS: Systematic review of Embase and SCOPUS, including studies containing data on medical school ophthalmic course length; 1 or more outcome measures of student ophthalmology knowledge, skills, self-evaluation of knowledge or skills, or student course appraisal; or both. The review was registered prospectively on the International Prospective Register of Systematic Reviews (identifier CRD42022323865). Results were aggregated with outcome subgroup analysis and described in relation to geographical and temporal trends. Descriptive statistics, including nonparametric correlations, were used to analyze data and trends.
RESULTS: The search yielded 4596 publication titles, of which 52 were included in the analysis, with data from 19 countries. Average course length ranged from 12.5 to 208.7 hours, with significant continental disparity among mean course lengths: Africa reported the longest average course length (103.3 hours) and North America the shortest (36.4 hours). Course lengths have declined over the last 2 decades, from an overall average of 92.9 hours in the 2000s to 52.9 hours in the 2020s. Mean student self-evaluation was 51.3% for skills and 55.4% for knowledge. Objective mean assessment marks were 57.5% for skills and 71.7% for knowledge, against an average pass mark of 66.7%. On average, 26.4% of students felt confident in their ophthalmology knowledge and 34.5% in their skills.
DISCUSSION: Most evidence describes declining length of courses devoted to ophthalmology in the last 20 years, significant student dissatisfaction with courses and content, and suboptimal knowledge and confidence.
FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
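The temporal trend reported above (average course length falling from 92.9 hours in the 2000s to 52.9 hours in the 2020s) was analyzed with nonparametric correlations. As an illustrative sketch only, with invented per-study figures rather than the review's actual data, a Spearman rank correlation between decade and reported course hours can be computed in plain Python:

```python
def ranks(xs):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# hypothetical per-study data: (decade of publication, reported course hours)
decades = [2000, 2000, 2010, 2010, 2020, 2020]
hours = [95.0, 90.0, 70.0, 65.0, 55.0, 50.0]
rho = spearman(decades, hours)  # strongly negative: later decade, shorter course
```

A strongly negative rho is consistent with the decline the review describes; the actual analysis drew on the 52 included studies' data, which are not reproduced here.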
Affiliation(s)
- Sascha K R Spencer
- The University of New South Wales, Sydney, Australia; Prince of Wales Hospital, Sydney, New South Wales, Australia
- Patrick A Ireland
- The University of New South Wales, Sydney, Australia; Prince of Wales Hospital, Sydney, New South Wales, Australia
- Jorja Braden
- The University of Sydney, Sydney, New South Wales, Australia; Melanoma Institute of Australia, Sydney, New South Wales, Australia
- Jenny L Hepschke
- The University of New South Wales, Sydney, Australia; Prince of Wales Hospital, Sydney, New South Wales, Australia
- Michael Lin
- The University of New South Wales, Sydney, Australia
- Helen Zhang
- The University of New South Wales, Sydney, Australia
- Jessie Channell
- University of Western Australia, Perth, Western Australia, Australia; Lions Eye Institute, Perth, Western Australia, Australia
- Hessom Razavi
- University of Western Australia, Perth, Western Australia, Australia; Lions Eye Institute, Perth, Western Australia, Australia
- Angus W Turner
- University of Western Australia, Perth, Western Australia, Australia; Lions Eye Institute, Perth, Western Australia, Australia
- Minas T Coroneo
- The University of New South Wales, Sydney, Australia; Prince of Wales Hospital, Sydney, New South Wales, Australia
- Boaz Shulruf
- The University of New South Wales, Sydney, Australia
- Ashish Agar
- The University of New South Wales, Sydney, Australia; Prince of Wales Hospital, Sydney, New South Wales, Australia
2
Jonnakuti VS, Frankfort BJ. Seeing beyond reality: considering the impact of mainstream virtual reality adoption on ocular health and the evolving role of ophthalmologists. Eye (Lond) 2024;38:1401-1402. [PMID: 38097803] [PMCID: PMC11126567] [DOI: 10.1038/s41433-023-02892-3]
Affiliation(s)
- Venkata Soumith Jonnakuti
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, 77030, USA
- Jan and Dan Duncan Neurological Research Institute, Texas Children's Hospital, Houston, TX, 77030, USA
- Program in Quantitative and Computational Biology, Baylor College of Medicine, Houston, TX, 77030, USA
- Medical Scientist Training Program, Baylor College of Medicine, Houston, TX, 77030, USA
- Benjamin Jay Frankfort
- Medical Scientist Training Program, Baylor College of Medicine, Houston, TX, 77030, USA
- Department of Ophthalmology, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
3
Ramesh PV, Ray P, Joshua T, Devadas AK, Raj PM, Ramesh SV, Ramesh MK, Rajasekaran R. The photoreal new-age innovative pedagogical & counseling tool for glaucoma with 3D augmented reality (Eye MG AR). Eur J Ophthalmol 2024;34:870-873. [PMID: 36880748] [DOI: 10.1177/11206721231159249]
Abstract
This manuscript reports an augmented reality (AR) application we developed, 'Eye MG AR', which displays the anatomical and pathological parts of the eyeball relevant to glaucoma from multiple user-customized angles to simplify glaucoma learning and clinical counseling. It is available free of charge from the Google Play Store for Android users. Procedures ranging from a simple outpatient department procedure (yttrium aluminium garnet peripheral iridotomy) to complex surgical techniques (trabeculectomy/tube surgery) can be explained and counseled with this Android application. Complex structures, such as the angle of the anterior chamber and the optic nerve head, are rendered as real-time three-dimensional (3D) high-resolution confocal images. These 3D models support immersive learning for glaucoma neophytes and 3D patient counseling. Built with the 'Unreal Engine' software in a patient-friendly approach, this AR tool aims to reinvent glaucoma counseling. To our knowledge, combining 3D pedagogy and counseling with AR in glaucoma using real-time, high-resolution TrueColor confocal images has not previously been reported in the literature.
Affiliation(s)
- Prasanna Venkatesh Ramesh
- Department of Glaucoma and Research, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Prajnya Ray
- Department of Optometry and Visual Science, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Tensingh Joshua
- Department of Research and Animation, Mahathma Centre of Moving Images Private Limited, Trichy, Tamil Nadu, India
- Aji Kunnath Devadas
- Department of Optometry and Visual Science, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Prakash Michael Raj
- Department of Research and Animation, Mahathma Centre of Moving Images Private Limited, Trichy, Tamil Nadu, India
- Shruthy Vaishali Ramesh
- Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Meena Kumari Ramesh
- Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Ramesh Rajasekaran
- Department of Paediatric Ophthalmology and Strabismus, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
4
Guo Q, Zhang L, Han LL, Gui C, Chen G, Ling C, Wang W, Gao Q. Effects of Virtual Reality Therapy Combined With Conventional Rehabilitation on Pain, Kinematic Function, and Disability in Patients With Chronic Neck Pain: Randomized Controlled Trial. JMIR Serious Games 2024;12:e42829. [PMID: 38656775] [PMCID: PMC11079768] [DOI: 10.2196/42829]
Abstract
BACKGROUND: Neck pain is a common condition that leads to neck motor dysfunction and subsequent disability, with a significant global health care burden. As a newly emerging tool, virtual reality (VR) technology has been employed to address pain and reduce disability among patients with neck pain. However, high-quality studies evaluating the efficacy of VR therapy combined with conventional rehabilitation for patients with chronic neck pain, particularly in terms of kinematic function, are still lacking.
OBJECTIVE: This study aims to investigate the effect of VR therapy combined with conventional rehabilitation on pain, kinematic function, and disability in patients with chronic neck pain.
METHODS: We conducted an assessor-blinded, allocation-concealed randomized controlled trial. Sixty-four participants with chronic neck pain were randomly allocated to an experimental group that underwent VR rehabilitation plus conventional rehabilitation or a control group receiving the same amount of conventional rehabilitation alone for 10 sessions over 4 weeks. Pain intensity, disability, kinematic function (cervical range of motion, proprioception, and mean and peak velocity), degree of satisfaction, and relief of symptoms were evaluated at 3 time points (baseline, postintervention, and 3-month follow-up). A 2×3 mixed repeated-measures analysis of variance was used to analyze differences across indicators, with a significance level of .05.
RESULTS: Both groups demonstrated significant improvements in pain, disability, and kinematic function (P<.05) at postintervention and at 3-month follow-up. The experimental group showed superior therapeutic outcomes compared with the control group in pain reduction (mean change from baseline: 5.50 vs 1.81 at posttreatment; 5.21 vs 1.91 at 3-month follow-up; P<.001), disability improvement (mean change from baseline: 3.04 vs 0.50 at posttreatment; 3.20 vs 0.85 at 3-month follow-up; P<.001), and kinematic function (P<.05). Participants in the experimental group also reported greater satisfaction and relief of symptoms than the control group (P<.05), with better initiative for exercising during the follow-up period. However, there was no between-group difference in the improvement of proprioception. No adverse events were reported or observed.
CONCLUSIONS: Our findings support the efficacy of combining VR therapy with conventional rehabilitation to alleviate pain, enhance kinematic function, and reduce disability in patients with chronic neck pain. Future research should refine therapeutic protocols and dosages for VR therapy and optimize its application in clinical settings for improved convenience and effectiveness.
TRIAL REGISTRATION: Chinese Clinical Trial Registry ChiCTR2000040132; http://www.chictr.org.cn/showproj.aspx?proj=64346
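The between-group effects above are reported as mean changes from baseline at each follow-up time point. A minimal sketch of that contrast, using invented scores rather than trial data (the trial itself used a 2×3 mixed repeated-measures ANOVA, which this does not reproduce):

```python
# Hypothetical pain scores (0-10 scale) at three time points for two groups.
data = {
    "VR+conventional": {"baseline": [7, 8, 6], "post": [2, 3, 1], "3mo": [2, 2, 2]},
    "conventional":    {"baseline": [7, 7, 6], "post": [5, 6, 4], "3mo": [5, 5, 5]},
}

def mean(xs):
    return sum(xs) / len(xs)

def mean_change_from_baseline(group, timepoint):
    """Average improvement (baseline minus follow-up) for one group."""
    g = data[group]
    return mean(g["baseline"]) - mean(g[timepoint])

vr_change = mean_change_from_baseline("VR+conventional", "post")
ctrl_change = mean_change_from_baseline("conventional", "post")
```

With these made-up numbers the VR group improves by 5.0 points versus roughly 1.7 for the control group, mirroring the direction (but not the values) of the trial's reported contrast.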
Affiliation(s)
- Qifan Guo
- West China Hospital, Sichuan University, Chengdu, China
- Department of Rehabilitation Medicine, West China Hospital, Sichuan University, Chengdu, China
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Liming Zhang
- West China Hospital, Sichuan University, Chengdu, China
- Department of Rehabilitation Medicine, West China Hospital, Sichuan University, Chengdu, China
- Leo Lianyi Han
- Biostatistics Group, State Key Laboratory of Genetic Engineering, Greater Bay Area Institute of Precision Medicine (Guangzhou), Fudan University, Guangzhou, China
- Chenfan Gui
- West China Hospital, Sichuan University, Chengdu, China
- Guanghui Chen
- Department of Traumatology and Orthopedics of Traditional Chinese Medicine, The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, China
- Chunyan Ling
- Department of Acupuncture and Tuina, The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, China
- Wei Wang
- Department of Acupuncture and Tuina, The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, China
- Qiang Gao
- West China Hospital, Sichuan University, Chengdu, China
- Department of Rehabilitation Medicine, West China Hospital, Sichuan University, Chengdu, China
5
Waisberg E, Ong J, Masalkhi M, Zaman N, Sarker P, Lee AG, Tavakkoli A. The future of ophthalmology and vision science with the Apple Vision Pro. Eye (Lond) 2024;38:242-243. [PMID: 37542175] [PMCID: PMC10810972] [DOI: 10.1038/s41433-023-02688-5]
Affiliation(s)
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
- Mouayad Masalkhi
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Prithul Sarker
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
6
Alvarez-Falcón S, Wang B, Taleb E, Cheung NL, Scriven CA, Priestley Y, El-Dairi M, Freedman SF. Performance of VisuALL virtual reality visual field testing in healthy children. J AAPOS 2024;28:103802. [PMID: 38219921] [DOI: 10.1016/j.jaapos.2023.10.004]
Abstract
BACKGROUND: Virtual reality visual field testing may provide an alternative to standard automated perimetry. This study evaluates virtual reality game-based automated perimetry in a healthy pediatric population.
METHODS: A prospective series of pediatric patients at one institution performed VisuALL perimetry (Olleyes Inc, Summit, NJ) using a game-based algorithm. Participants were examined by an experienced pediatric optometrist or ophthalmologist, who confirmed that there was no evidence of ocular disease expected to affect visual fields. Testing was performed binocularly, with children wearing their spectacle correction. Age, refractive error, test duration, false positives, and stereoacuity were evaluated for associations with performance on VisuALL, as defined by mean deviation (MD) and pattern standard deviation (PSD).
RESULTS: A total of 191 eyes of 97 patients (54% female) were included, with a mean age of 11.9 ± 3.1 years. The average MD was -1.82 ± 3.5 dB, with a mean foveal sensitivity of 32.0 ± 4.7 dB. Fifty-nine eyes (30.9%) had MD < -2 dB. Better performance, as assessed by MD and PSD, was associated with shorter test duration (P < 0.001) and older age (P < 0.001). False positives (P = 0.442), wearing spectacles (P = 0.092), Titmus stereoacuity (P = 0.197), and refractive error (P = 0.120) were not associated with performance after adjusting for age as a covariate.
CONCLUSIONS: VisuALL virtual reality visual field testing was well tolerated in this pediatric study cohort. Older age and shorter test duration were associated with better performance on field testing.
Affiliation(s)
- Bo Wang
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland
- Emma Taleb
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Nathan L Cheung
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Chelsea A Scriven
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Yos Priestley
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Mays El-Dairi
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Sharon F Freedman
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
7
Ahmed Y, Reddy M, Mederos J, McDermott KC, Varma DK, Ludwig CA, Ahmed IK, Khaderi KR. Democratizing Health Care in the Metaverse: How Video Games can Monitor Eye Conditions Using the Vision Performance Index: A Pilot Study. Ophthalmology Science 2024;4:100349. [PMID: 37869021] [PMCID: PMC10587622] [DOI: 10.1016/j.xops.2023.100349]
Abstract
Objective: In a world where digital media is deeply ingrained in everyday life, there is an opportunity to leverage interactions with technology for health and wellness. The Vision Performance Index (VPI) uses natural human-technology interaction to evaluate visual function from visual, cognitive, and motor psychometric data over 5 domains: field of view, accuracy, multitracking, endurance, and detection. The purpose of this study was to describe a novel method of evaluating holistic visual function through video game-derived VPI scores in patients with specific ocular pathology.
Design: Prospective comparative analysis.
Participants: Patients with dry eye, glaucoma, cataract, diabetic retinopathy (DR), or age-related macular degeneration, and healthy individuals.
Methods: The Vizzario Inc software development kit was integrated into 2 video game applications, Balloon Pop and Picture Perfect, enabling generation of VPI scores. Study participants played rounds of each video game, from which a VPI score was compiled.
Main Outcome Measures: The primary outcome was the VPI overall score in each comparison group. VPI component and subcomponent scores and psychophysical inputs were also compared.
Results: VPI scores were generated from 93 patients with macular degeneration (n = 10), cataract (n = 10), DR (n = 15), dry eye (n = 15), glaucoma (n = 16), and no ocular disease (n = 27). The VPI overall score was not significantly different across comparison groups. The VPI subcomponent "reaction accuracy" score was significantly greater in DR patients (106 ± 13.2) than in controls (96.9 ± 11.5; P = 0.0220). The VPI subcomponent "color detection" score was significantly lower in patients with DR (96.8 ± 2.5; P = 0.0217) and glaucoma (98.5 ± 6.3; P = 0.0093) than in controls (101 ± 11). Several psychophysical measures differed significantly from controls: proportion correct (lower in DR and age-related macular degeneration), contrast errors (higher in cataract and DR), and saturation errors (higher in dry eye).
Conclusions: VPI scores can be generated from the interactions of an ocular disease population with video games. The VPI may offer utility in monitoring select ocular diseases through evaluation of subcomponent and psychophysical input scores; however, future larger-scale studies must evaluate the validity of this tool.
Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Yusuf Ahmed
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Mohan Reddy
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
- Jacob Mederos
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
- Kyle C. McDermott
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
- Devesh K. Varma
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Prism Eye Institute, Oakville, Ontario, Canada
- Cassie A. Ludwig
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Iqbal K. Ahmed
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Prism Eye Institute, Oakville, Ontario, Canada
- Moran Eye Centre, University of Utah School of Medicine, Salt Lake City, Utah
- Khizer R. Khaderi
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
8
Wong KA, Ang BCH, Gunasekeran DV, Husain R, Boon J, Vikneson K, Tan ZPQ, Tan GSW, Wong TY, Agrawal R. Remote Perimetry in a Virtual Reality Metaverse Environment for Out-of-Hospital Functional Eye Screening Compared Against the Gold Standard Humphrey Visual Fields Perimeter: Proof-of-Concept Pilot Study. J Med Internet Res 2023;25:e45044. [PMID: 37856179] [PMCID: PMC10623222] [DOI: 10.2196/45044]
Abstract
BACKGROUND: The growing global burden of visual impairment necessitates better population eye screening for early detection of eye diseases. However, accessibility to testing is often limited and centralized in in-hospital settings. Furthermore, many eye screening programs were disrupted by the COVID-19 pandemic, presenting an urgent need for out-of-hospital solutions.
OBJECTIVE: This study investigates the performance of a novel remote perimetry application designed in a virtual reality metaverse environment to enable functional testing in community-based and primary care settings.
METHODS: This was a prospective observational study comparing a novel remote perimetry solution with the gold standard Humphrey visual field (HVF) perimeter. Subjects received a comprehensive ophthalmologic assessment, HVF perimetry, and remote perimetry testing. The primary outcome measure was the agreement in the classification of overall perimetry result normality between the HVF (Swedish interactive threshold algorithm-fast) and testing with the novel algorithm. Secondary outcome measures included concordance of individual testing points and perimetry topographic maps.
RESULTS: We recruited 10 subjects with an average age of 59.6 (range 28-81) years; 7 (70%) were male and 3 (30%) were female. The agreement in the classification of overall perimetry results was high (9/10, 90%). The pointwise concordance in the automated classification of individual test points was 83.3% (SD 8.2%; range 75%-100%). In addition, there was good perimetry topographic concordance with the HVF in all subjects.
CONCLUSIONS: Remote perimetry in a metaverse environment had good concordance with gold standard HVF perimetry and could potentially enable functional eye screening in out-of-hospital settings.
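The secondary outcome above, pointwise concordance, is simply the fraction of individual test locations that receive the same automated classification on both perimeters. A minimal sketch with made-up classification maps (the study reported a mean of 83.3% across its actual test grids):

```python
def pointwise_concordance(a, b):
    """Fraction of visual field test points with identical classification
    (e.g., 'N' = normal, 'A' = abnormal) on two perimeters."""
    if len(a) != len(b):
        raise ValueError("maps must cover the same test points")
    return sum(x == y for x, y in zip(a, b)) / len(a)

# hypothetical 12-point maps from the HVF and the remote VR perimeter
hvf    = ["N", "N", "A", "N", "A", "N", "N", "A", "N", "N", "N", "N"]
remote = ["N", "N", "A", "N", "N", "N", "N", "A", "N", "N", "A", "N"]
agreement = pointwise_concordance(hvf, remote)  # 10 of 12 points agree
```

With these invented maps, two of twelve points disagree, giving a concordance of about 0.83, in the same range as the study's reported mean.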
Affiliation(s)
- Kang-An Wong
- National University of Singapore, Singapore, Singapore
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Bryan Chin Hou Ang
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Dinesh Visva Gunasekeran
- National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Raffles Medical Group, Singapore, Singapore
- Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
- Rahat Husain
- Singapore Eye Research Institute, Singapore, Singapore
- Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
- School of Medicine, University of New South Wales, Sydney, Australia
- Joewee Boon
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Krishna Vikneson
- School of Medicine, University of New South Wales, Sydney, Australia
- Zyna Pei Qi Tan
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore, Singapore
- Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
- Singapore National Eye Center, Singapore General Hospital, Singapore, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore, Singapore
- Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
- Singapore National Eye Center, Singapore General Hospital, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Rupesh Agrawal
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
9
Wang T, Li H, Pu T, Yang L. Microsurgery Robots: Applications, Design, and Development. Sensors (Basel) 2023;23:8503. [PMID: 37896597] [PMCID: PMC10611418] [DOI: 10.3390/s23208503]
Abstract
Microsurgical techniques are widely used in surgical specialties such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise manipulation of surgical tools on a small scale. In microsurgery, operating on delicate vessels or tissues demands exceptional skill, leading to a steep learning curve and lengthy training before surgeons can perform microsurgical procedures with quality outcomes. The microsurgery robot (MSR), which can augment surgeons' operating skills through various functions, has received extensive research attention over the past three decades. Many review papers have summarized MSR research for specific surgical specialties, but an in-depth review of the relevant technologies used in MSR systems is limited in the literature. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR from a developmental perspective: from basic structural and mechanism design, to perception and human-machine interaction methods, and further to achieving a degree of autonomy. By presenting and comparing these methods and technologies, this paper aims to give readers a comprehensive understanding of the current state of MSR research and to identify potential directions for future development.
Affiliation(s)
- Tiexin Wang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haoyu Li
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Tanhong Pu
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
10
Sarker P, Ong J, Zaman N, Kamran SA, Waisberg E, Paladugu P, Lee AG, Tavakkoli A. Extended reality quantification of pupil reactivity as a non-invasive assessment for the pathogenesis of spaceflight associated neuro-ocular syndrome: A technology validation study for astronaut health. Life Sciences in Space Research 2023;38:79-86. [PMID: 37481311] [DOI: 10.1016/j.lssr.2023.06.001]
Abstract
The National Aeronautics and Space Administration (NASA) has rigorously documented a group of neuro-ophthalmic findings in astronauts during and after long-duration spaceflight known as spaceflight associated neuro-ocular syndrome (SANS). For astronaut safety and mission effectiveness, understanding SANS and countermeasure development are of utmost importance. Although the pathogenesis of SANS is not well defined, a leading hypothesis is that SANS might relate to a sub-clinical increased intracranial pressure (ICP) from cephalad fluid shifts in microgravity. However, no direct ICP measurements are available during spaceflight. To further understand the role of ICP in SANS, pupillometry can serve as a promising non-invasive biomarker for spaceflight environment as ICP is correlated with the pupil variables under illumination. Extended reality (XR) can help to address certain limitations in current methods for efficient pupil testing during spaceflight. We designed a protocol to quantify parameters of pupil reactivity in XR with an equivalent time duration of illumination on each eye compared to pre-existing, non-XR methods. Throughout the assessment, the pupil diameter data was collected using HTC Vive Pro-VR headset, thanks to its eye-tracking capabilities. Finally, the data was used to compute several pupil variables. We applied our methods to 36 control subjects. Pupil variables such as maximum and minimum pupil size, constriction amplitude, average constriction amplitude, maximum constriction velocity, latency and dilation velocity were computed for each control data. We compared our methods of calculation of pupil variables with the non-XR methods existing in the literature. Distributions of the pupil variables such as latency, constriction amplitude, and velocity of 36 control data displayed near-identical results from the non-XR literature for normal subjects. 
We propose a new method to evaluate pupil reactivity with XR technology to further understand ICP's role in SANS and provide further insight into SANS countermeasure development for future spaceflight.
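The pupil variables named in this abstract can be derived from a headset's pupil-diameter trace with simple signal processing. The Python sketch below is our illustration only, not the authors' protocol; the function name, the trace format, and the 0.1 mm latency threshold are assumptions:

```python
import numpy as np

def pupil_variables(t, diameter, stim_onset):
    """Derive basic pupil-reactivity variables from a diameter trace.

    t: timestamps in seconds; diameter: pupil diameter in mm;
    stim_onset: time (s) at which the light stimulus begins.
    """
    baseline = diameter[t < stim_onset].mean()      # pre-stimulus pupil size
    post = diameter[t >= stim_onset]
    t_post = t[t >= stim_onset]

    max_size = diameter.max()
    min_size = post.min()                           # peak constriction
    amplitude = baseline - min_size                 # constriction amplitude

    # latency: time from stimulus onset until the diameter first drops
    # noticeably below baseline (0.1 mm threshold is an assumption)
    drop = np.where(post < baseline - 0.1)[0]
    latency = (t_post[drop[0]] - stim_onset) if drop.size else np.nan

    # maximum constriction velocity: steepest negative slope, in mm/s
    velocity = np.gradient(post, t_post)
    max_constriction_velocity = -velocity.min()

    return dict(max_size=max_size, min_size=min_size, amplitude=amplitude,
                latency=latency,
                max_constriction_velocity=max_constriction_velocity)
```

A simulated trace (steady 5 mm baseline, then a linear constriction to 3 mm after onset) recovers the expected amplitude, latency, and velocity from this routine.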
Affiliation(s)
- Prithul Sarker
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Nevada, United States
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, Michigan, United States
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Nevada, United States
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Nevada, United States
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Phani Paladugu
- Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States; Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania, United States
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, Texas, United States; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas, United States; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, Texas, United States; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, New York, United States; Department of Ophthalmology, University of Texas Medical Branch, Galveston, Texas, United States; University of Texas MD Anderson Cancer Center, Houston, Texas, United States; Texas A&M College of Medicine, Texas, United States; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, Iowa, United States
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Nevada, United States
11
Csoba I, Kunkli R. Rendering algorithms for aberrated human vision simulation. Vis Comput Ind Biomed Art 2023; 6:5. [PMID: 36930412 PMCID: PMC10023823 DOI: 10.1186/s42492-023-00132-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 03/06/2023] [Indexed: 03/18/2023] Open
Abstract
Vision-simulated imagery, the process of generating images that mimic the human visual system, is a valuable tool with a wide spectrum of possible applications, including visual acuity measurements, personalized planning of corrective lenses and surgeries, vision-correcting displays, vision-related hardware development, and extended reality discomfort reduction. A critical property of human vision is that it is imperfect because of highly influential wavefront aberrations that vary from person to person. This study provides an overview of the existing computational image generation techniques that properly simulate human vision in the presence of wavefront aberrations. These algorithms typically apply ray tracing with a detailed description of the simulated eye, or utilize the point-spread function of the eye to perform convolution on the input image. Based on this description of the vision simulation techniques, several of their characteristic features are evaluated and some potential application areas and research directions are outlined.
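The point-spread-function branch of these algorithms can be illustrated in a few lines of NumPy: blur an image by multiplying its spectrum with that of a normalized, centred PSF. This is a minimal sketch of the general technique, not code from any reviewed study; the function name and the centring scheme are our assumptions:

```python
import numpy as np

def simulate_aberrated_view(image, psf):
    """Apply an eye's point-spread function (PSF) to a grayscale image by
    frequency-domain (circular) convolution, mimicking retinal image
    formation for a given aberration."""
    psf = psf / psf.sum()                        # normalize: conserve energy
    pad = np.zeros_like(image, dtype=float)      # zero-pad PSF to image size
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    # roll the PSF centre to (0, 0) so the output is not spatially shifted
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
```

Convolving a single bright point with the PSF reproduces the PSF centred at that point, which is a quick sanity check for this kind of simulator.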
Affiliation(s)
- István Csoba
- Faculty of Informatics, University of Debrecen, Debrecen 4028, Hungary; Doctoral School of Informatics, University of Debrecen, Debrecen 4028, Hungary
- Roland Kunkli
- Faculty of Informatics, University of Debrecen, Debrecen 4028, Hungary
12
Pur DR, Lee-Wing N, Bona MD. The use of augmented reality and virtual reality for visual field expansion and visual acuity improvement in low vision rehabilitation: a systematic review. Graefes Arch Clin Exp Ophthalmol 2023; 261:1743-1755. [PMID: 36633669 DOI: 10.1007/s00417-022-05972-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 10/31/2022] [Accepted: 12/30/2022] [Indexed: 01/13/2023] Open
Abstract
INTRODUCTION Developments in image processing techniques and display technology have led to the emergence of augmented reality (AR)- and virtual reality (VR)-based low vision devices (LVDs). However, their promise and limitations in low vision rehabilitation are poorly understood. The objective of this systematic review is to appraise the application of AR/VR LVDs aimed at visual field expansion and visual acuity improvement in low vision rehabilitation. METHODS A systematic search of the literature was performed using MEDLINE, Embase, PsychInfo, HealthStar, and National Library of Medicine (PubMed) from inception to March 6, 2022. Articles were eligible if they included an AR or VR LVD tested on a sample of individuals with low vision and provided visual outcomes such as visual acuity, visual fields, and object recognition. RESULTS Of the 652 articles identified, 16 studies were included in this systematic review, comprising 382 individuals with a mean age of 52.17 (SD = 18.30) years and heterogeneous low vision etiologies (e.g., glaucoma, age-related macular degeneration, retinitis pigmentosa). Most articles used AR (53%) or VR (40%), and one article used both AR and VR. The main visual outcomes evaluated were visual fields (67%), visual acuity (65%), and contrast sensitivity (27%). Various visual enhancement techniques were employed, including variable magnification using digital zoom (67%), contrast enhancement (53%), and minification (27%). AR LVDs were reported to expand the visual field threefold to ninefold. On average, individuals using AR/VR LVDs experienced an improvement in visual acuity from 0.9 to 0.2 logMAR. Ten articles were classified as high or moderate risk of bias. CONCLUSION AR/VR LVDs were found to afford visual field expansion and visual acuity improvement in low vision populations.
Even though the results of this review are promising, the lack of controlled studies with well-defined populations, the use of small convenience samples, and incomplete reporting of inclusion and exclusion criteria among the included studies make it challenging to judge the true impact of these devices. Future studies should address these limitations and compare various AR/VR LVDs to determine the ideal LVD type and vision-enhancement combination for a given user's level of visual ability and lifestyle.
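The minification technique mentioned in this abstract, squeezing a wide field of view into the wearer's remaining central field, can be sketched as a simple image remapping. This is a schematic of the general idea under our own simplifications (nearest-neighbour downsampling, a single `scale` factor), not any particular device's pipeline:

```python
import numpy as np

def minify_field(frame, scale=3):
    """Squeeze a wide-field frame into a central window of the display:
    a crude model of the field expansion reported for some AR low vision
    devices (scale=3 gives a threefold expansion per axis)."""
    h, w = frame.shape
    small = frame[::scale, ::scale]              # nearest-neighbour downsample
    out = np.zeros_like(frame)
    sh, sw = small.shape
    top, left = (h - sh) // 2, (w - sw) // 2
    out[top:top + sh, left:left + sw] = small    # paste into the centre
    return out
```

The trade-off this makes explicit is the one the reviewed devices face: the field grows by `scale` per axis, but each object shrinks by the same factor, costing resolution.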
Affiliation(s)
- Daiana R Pur
- Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Nathan Lee-Wing
- Max Rady College of Medicine, University of Manitoba, Winnipeg, MB, Canada
- Mark D Bona
- Department of Ophthalmology, Queen's University and Hotel Dieu Hospital, Kingston, ON, Canada
13
Chan HS, Tang YM, Do CW, Ho Yin Wong H, Chan LYL, To S. Design and assessment of amblyopia, strabismus, and myopia treatment and vision training using virtual reality. Digit Health 2023; 9:20552076231176638. [PMID: 37312939 PMCID: PMC10259136 DOI: 10.1177/20552076231176638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 05/02/2023] [Indexed: 06/15/2023] Open
Abstract
Background Virtual reality is a relatively new intervention with the potential to be used in the treatment of eye and vision problems. This article reviews the use of virtual reality-related interventions in amblyopia, strabismus, and myopia research. Methods Sources covered in the review included 48 peer-reviewed studies published between January 2000 and January 2023 from five electronic databases (ACM Digital Library, IEEE Xplore, PubMed, ScienceDirect, and Web of Science). To avoid missing relevant articles, the keywords and terms used in the search included "VR", "virtual reality", "amblyopia", "strabismus," and "myopia". Quality assessment and data extraction were performed independently by two authors to form a narrative synthesis summarizing findings from the included research. Results A total of 48 references were reviewed: 31 studies on amblyopia, 18 on strabismus, and 6 on myopia, with 7 studies spanning both amblyopia and strabismus. In terms of technology, smartphone-based virtual reality headset viewers were used more often in amblyopia research, whereas commercial standalone virtual reality headsets were used more frequently in myopia- and strabismus-related research. The software and virtual environments were mostly developed around vision therapy and dichoptic training paradigms. Conclusion Virtual reality technology offers a potentially effective tool for amblyopia, strabismus, and myopia studies. Nonetheless, a variety of factors, especially the virtual environments and systems employed, must be explored before determining whether virtual reality can be applied effectively in clinical settings. This review is significant in that virtual reality software and application design features have been investigated and considered for future reference.
Affiliation(s)
- Hoi Sze Chan
- Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Yuk Ming Tang
- Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Chi Wai Do
- School of Optometry, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Horace Ho Yin Wong
- School of Optometry, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Lily YL Chan
- School of Optometry, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Suet To
- Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
14
Aghapour M, Bockstahler B. State of the Art and Future Prospects of Virtual and Augmented Reality in Veterinary Medicine: A Systematic Review. Animals (Basel) 2022; 12:3517. [PMID: 36552437 PMCID: PMC9774422 DOI: 10.3390/ani12243517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 12/06/2022] [Accepted: 12/12/2022] [Indexed: 12/15/2022] Open
Abstract
Virtual reality and augmented reality are new but rapidly expanding topics in medicine. In virtual reality, users are immersed in a three-dimensional environment, whereas in augmented reality, computer-generated images are superimposed on the real world. Despite advances in human medicine, the number of published articles in veterinary medicine is low. These cutting-edge technologies can be used in combination with existing methods in veterinary medicine to achieve diagnostic, therapeutic, and educational goals. The purpose of our review was to evaluate studies using virtual reality and augmented reality in veterinary medicine, as well as in human medicine with animal trials, and to report their results and the state of the art. We collected the articles included in our review by screening the Scopus, PubMed, and Web of Science databases. Of the 24 included studies, 11 articles concerned virtual reality and 13 concerned augmented reality. Based on these articles, we determined that using these technologies has a positive impact on the scientific output of students and residents, can reduce training costs, and can be used in training and educational programs. Furthermore, using these tools can promote ethical standards. We report the absence of standard operating protocols and equipment costs as study limitations.
Affiliation(s)
- Masoud Aghapour
- Section of Physical Therapy, Small Animal Surgery, Department for Companion Animals and Horses, University of Veterinary Medicine, 1210 Vienna, Austria
15
Ong J, Hariprasad SM, Chhablani J. Into the RetinaVerse: A New Frontier of Retina in the Metaverse. Ophthalmic Surg Lasers Imaging Retina 2022; 53:595-600. [PMID: 36378613 DOI: 10.3928/23258160-20221017-01] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
16
Akkara JD, Kuriakose A. Commentary: Opening eyes to the mixed reality metaverse. Indian J Ophthalmol 2022; 70:3121-3122. [PMID: 35918984 PMCID: PMC9672756 DOI: 10.4103/ijo.ijo_847_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022] Open
Affiliation(s)
- John D Akkara
- Department of Ophthalmology, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu; Department of Glaucoma, Westend Eye Hospital, Cochin, Kerala, India
- Anju Kuriakose
- Department of Retina, Aravind Eye Hospital, Chennai, Tamil Nadu, India
17
Ramesh PV, Joshua T, Ray P, Devadas AK, Raj PM, Ramesh SV, Ramesh MK, Rajasekaran R. Holographic elysium of a 4D ophthalmic anatomical and pathological metaverse with extended reality/mixed reality. Indian J Ophthalmol 2022; 70:3116-3121. [PMID: 35918983 DOI: 10.4103/ijo.ijo_120_22] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Extended reality is one of the leading cutting-edge technologies, yet it has not fully set foot into the field of ophthalmology. The use of extended reality technology, especially in ophthalmic education and counseling, could revolutionize teaching and counseling on a whole new level. We have used this novel technology to create a holographic museum of various anatomical structures, such as the eyeball, cerebral venous system, cerebral arterial system, cranial nerves, and various parts of the brain, in fine detail. These four-dimensional (4D) ophthalmic holograms (patent pending) are cost-effectively constructed with TrueColor confocal images to serve as a new-age immersive 4D pedagogical and counseling tool for gameful learning and counseling. To our knowledge, this concept has not been reported in the literature before.
Affiliation(s)
- Prasanna V Ramesh
- Medical Officer, Department of Glaucoma and Research, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Tensingh Joshua
- Head of the Department, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Prajnya Ray
- Consultant Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Aji K Devadas
- Consultant Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Pragash M Raj
- Multimedia Consultant, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Shruthy V Ramesh
- Medical Officer, Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Meena K Ramesh
- Head of the Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Ramesh Rajasekaran
- Chief Medical Officer, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
18
Tan TF, Li Y, Lim JS, Gunasekeran DV, Teo ZL, Ng WY, Ting DS. Metaverse and Virtual Health Care in Ophthalmology: Opportunities and Challenges. Asia Pac J Ophthalmol (Phila) 2022; 11:237-246. [PMID: 35772084 DOI: 10.1097/apo.0000000000000537] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
ABSTRACT The coronavirus disease 2019 outbreak has further increased the urgent need for digital transformation in health care settings, through artificial intelligence/deep learning, the internet of things, telecommunication networks/virtual platforms, and blockchain. The recent advent of the metaverse, an interconnected online universe combining the augmented, virtual, and mixed reality first described several years ago, presents a new era of immersive, real-time experiences that enhance human-to-human social interaction and connection. In health care and ophthalmology, the creation of virtual environments with three-dimensional (3D) space and avatars could be particularly useful in patient-fronting platforms (eg, telemedicine platforms), operational uses (eg, meeting organization), digital education (eg, simulated medical and surgical education), diagnostics, and therapeutics. On the other hand, the implementation and adoption of these emerging virtual health care technologies will require multipronged approaches to ensure interoperability with real-world clinical settings, user-friendliness, and clinical efficiency while complying with clinical, health economics, regulatory, and cybersecurity standards. To serve this urgent need, it is important for the eye community to continue to innovate, invent, adapt, and harness the unique abilities of virtual health care technology to provide better eye care worldwide.
Affiliation(s)
- Ting Fang Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Yong Li
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Jane Sujuan Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Zhen Ling Teo
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Wei Yan Ng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Daniel Sw Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
19
Zheng C, Ye H, Yang J, Fei P, Qiu Y, Xie X, Wang Z, Chen J, Zhao P. Development and Clinical Validation of Semi-Supervised Generative Adversarial Networks for Detection of Retinal Disorders in Optical Coherence Tomography Images Using Small Dataset. Asia Pac J Ophthalmol (Phila) 2022; 11:219-226. [PMID: 35342179 DOI: 10.1097/apo.0000000000000498] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
Abstract
PURPOSE To develop and test semi-supervised generative adversarial networks (GANs) that detect retinal disorders on optical coherence tomography (OCT) images using a small labeled dataset. METHODS From a public database, we randomly chose a small supervised dataset of 400 OCT images (100 choroidal neovascularization, 100 diabetic macular edema, 100 drusen, and 100 normal) and assigned all other OCT images to an unsupervised dataset (107,912 images without labels). We adopted a semi-supervised GAN and a supervised deep learning (DL) model for automatically detecting retinal disorders from OCT images. The performance of the 2 models was compared in 3 testing datasets acquired on different OCT devices. The evaluation metrics included accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve. RESULTS The local validation dataset included 1000 images, 250 from each category. The independent clinical datasets included 366 OCT images acquired with Cirrus OCT from Shanghai Shibei Hospital and 511 OCT images acquired with RTVue OCT from Xinhua Hospital. The semi-supervised GAN classifier achieved better accuracy than the supervised DL model (0.91 vs 0.86 in the local validation dataset, 0.91 vs 0.86 in the Shanghai Shibei Hospital testing dataset, and 0.93 vs 0.92 in the Xinhua Hospital testing dataset). For distinguishing urgent referrals (choroidal neovascularization and diabetic macular edema) from nonurgent referrals (drusen and normal) on OCT images, the semi-supervised GAN classifier also achieved better areas under the receiver operating characteristic curve than the supervised DL model (0.99 vs 0.97, 0.97 vs 0.96, and 0.99 vs 0.99, respectively). CONCLUSIONS A semi-supervised GAN can achieve better performance than a supervised DL model when the labeled dataset is limited. The current study offers utility to various research and clinical studies using DL with relatively small datasets.
Semi-supervised GANs can detect retinal disorders from OCT images using relatively small dataset.
Affiliation(s)
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Hongfei Ye
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Jianlong Yang
- Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ping Fei
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Yingping Qiu
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xiaolin Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Zilei Wang
- Shanghai Children's Hospital, Shanghai, China
- Jili Chen
- Department of Ophthalmology, Shibei Hospital, Shanghai, China
- Peiquan Zhao
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
20
Ma MKI, Saha C, Poon SHL, Yiu RSW, Shih KC, Chan YK. Virtual Reality and Augmented Reality- Emerging Screening and Diagnostic Techniques in Ophthalmology: a Systematic Review. Surv Ophthalmol 2022; 67:1516-1530. [PMID: 35181279 DOI: 10.1016/j.survophthal.2022.02.001] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Revised: 02/07/2022] [Accepted: 02/11/2022] [Indexed: 11/24/2022]
Abstract
In healthcare, virtual reality (VR) and augmented reality (AR) have been applied extensively for many purposes. Similar to other technologies such as telemedicine and artificial intelligence, VR and AR may improve clinical diagnosis and screening services in ophthalmology by alleviating current problems, including workforce shortage, diagnostic error, and underdiagnosis. In the past decade a number of studies and products have used VR and AR concepts to build clinical tests for ophthalmology, but comprehensive reviews on these studies are limited. Therefore, we conducted a systematic review on the use of VR and AR as a diagnostic and screening tool in ophthalmology. We identified 26 studies that implemented a variety of VR and AR tests on different conditions, including VR cover tests for binocular vision disorder, VR perimetry for glaucoma, and AR slit lamp biomicroscopy for retinal diseases. In general, while VR and AR tools can become standardized, automated, and cost-effective tests with good user experience, several weaknesses, including unsatisfactory accuracy, weak validation, and hardware limitations, have prevented these VR and AR tools from having wider clinical application. Also, a comparison between VR and AR is made to explain why studies have predominantly used VR rather than AR.
Affiliation(s)
- Chinmoy Saha
- Department of Ophthalmology, Li Ka Shing Faculty of Medicine, University of Hong Kong
- Kendrick Co Shih
- Department of Ophthalmology, Li Ka Shing Faculty of Medicine, University of Hong Kong
- Yau Kei Chan
- Department of Ophthalmology, Li Ka Shing Faculty of Medicine, University of Hong Kong
21
Ramesh PV, Aji K, Joshua T, Ramesh SV, Ray P, Raj PM, Ramesh MK, Rajasekaran R. Immersive photoreal new-age innovative gameful pedagogy for e-ophthalmology with 3D augmented reality. Indian J Ophthalmol 2021; 70:275-280. [PMID: 34937254 PMCID: PMC8917591 DOI: 10.4103/ijo.ijo_2133_21] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
Augmented reality (AR) has come a long way from a science-fiction concept to a science-based reality. AR is a view of the real, physical world in which elements are enhanced by computer-generated input. AR is available on mobile handsets, which constitute an essential e-learning platform. The use of an e-ophthalmology platform with AR will pave the way for new-age gameful pedagogy. In this manuscript, we present a newly innovated AR program named "Eye MG AR" that simplifies ophthalmic concept learning and serves as a new-age immersive 3D pedagogical tool for gameful learning.
Affiliation(s)
- Prasanna V Ramesh
- Medical Officer, Department of Glaucoma and Research, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- K Aji
- Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Tensingh Joshua
- Head of the Department and 3D Generalist, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Shruthy V Ramesh
- Medical Officer, Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Prajnya Ray
- Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Pragash M Raj
- Consultant, Mahathma Centre of Moving Images, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Meena K Ramesh
- Head of the Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Ramesh Rajasekaran
- Chief Medical Officer, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India