1
Ma G, McCloud M, Tian Y, Narawane A, Shi H, Trout R, McNabb RP, Kuo AN, Draelos M. Robotics and optical coherence tomography: current works and future perspectives [Invited]. Biomed Opt Express 2025;16:578-602. [PMID: 39958851] [PMCID: PMC11828438] [DOI: 10.1364/boe.547943]
Abstract
Optical coherence tomography (OCT) is an interferometric technique for micron-level imaging in biological and non-biological contexts. As a non-invasive, non-ionizing, and video-rate imaging modality, OCT is widely used in biomedical and clinical applications, especially ophthalmology, where it functions in many roles, including tissue mapping, disease diagnosis, and intrasurgical visualization. In recent years, the rapid growth of medical robotics has led to new applications for OCT, primarily for 3D free-space scanning, volumetric perception, and novel optical designs for specialized medical applications. This review paper surveys these recent developments at the intersection of OCT and robotics and organizes them by degree of integration and application, with a focus on biomedical and clinical topics. We conclude with perspectives on how these recent innovations may lead to further advances in imaging and medical technology.
Affiliation(s)
- Guangshen Ma
- Department of Robotics, University of Michigan, Ann Arbor, MI 48105, USA
- Morgan McCloud
- Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Yuan Tian
- Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Amit Narawane
- Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Harvey Shi
- Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Robert Trout
- Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Ryan P McNabb
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27705, USA
- Anthony N Kuo
- Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27705, USA
- Mark Draelos
- Department of Robotics, University of Michigan, Ann Arbor, MI 48105, USA
- Department of Ophthalmology and Visual Sciences, University of Michigan Medical School, Ann Arbor, MI 48105, USA
2
Dey R, Guo Y, Liu Y, Puri A, Savastano L, Zheng Y. An intuitive guidewire control mechanism for robotic intervention. Int J Comput Assist Radiol Surg 2025;20:333-344. [PMID: 39370493] [DOI: 10.1007/s11548-024-03279-9]
Abstract
PURPOSE Teleoperated Interventional Robotic systems (TIRs) are developed to reduce physicians' radiation exposure and physical stress and to enhance device manipulation accuracy and stability. Nevertheless, TIRs are not widely adopted, partly due to the lack of intuitive control interfaces. Current TIR interfaces such as joysticks, keyboards, and touchscreens differ significantly from traditional manual techniques, resulting in a long learning curve. To this end, this research introduces a novel control mechanism for intuitive operation and seamless adoption of TIRs. METHODS An off-the-shelf medical torque device augmented with a micro-electromagnetic tracker was proposed as the control interface to preserve the tactile sensation and muscle memory integral to interventionalists' proficiency. The control inputs to drive the TIR were extracted via real-time motion mapping of the interface. To verify that the proposed control mechanism can accurately operate the TIR, evaluation experiments using industrial-grade encoders were conducted. RESULTS Mean tracking errors of 0.32 ± 0.12 mm in the linear direction and 0.54 ± 0.07° in the angular direction were achieved. The tracking time lag was 125 ms on average, estimated using a Padé approximation. Ergonomically, the developed control interface is 3.5 mm larger in diameter and 4.5 g heavier than traditional torque devices. CONCLUSION By closely resembling traditional torque devices while achieving results comparable to state-of-the-art commercially available TIRs, this research provides an intuitive control interface for potentially wider clinical adoption of robot-assisted interventions.
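As a minimal sketch of the real-time motion mapping described above (our illustration, not the authors' implementation): consecutive 6-DoF poses from the tracker on the torque device are converted into an axial advance and a roll command for the guidewire drive. The pose format, gains, and twist extraction below are assumptions.

```python
# Hypothetical pose-to-command mapping for a guidewire-driving robot.
import numpy as np

def pose_delta_to_command(p_prev, p_curr, q_prev, q_curr, dt, k_lin=1.0, k_rot=1.0):
    """p_*: 3-vector positions (mm); q_*: unit quaternions (w, x, y, z);
    dt: sample period (s). Returns (advance mm/s, rotate deg/s)."""
    # Device axis in the world frame: local +z rotated by q_curr.
    w, x, y, z = q_curr
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    axis = R[:, 2]

    # Linear input: tracker displacement projected onto the device axis.
    advance = float(np.dot(np.asarray(p_curr) - np.asarray(p_prev), axis)) / dt

    # Angular input: twist about the device axis from q_rel = conj(q_prev) * q_curr.
    a, b, c, d = q_prev[0], -q_prev[1], -q_prev[2], -q_prev[3]
    e, f, g, h = q_curr
    q_rel = (a*e - b*f - c*g - d*h,   # w
             a*f + b*e + c*h - d*g,   # x
             a*g - b*h + c*e + d*f,   # y
             a*h + b*g - c*f + d*e)   # z
    roll = 2.0 * np.arctan2(q_rel[3], q_rel[0])   # rotation about local z
    return k_lin * advance, k_rot * np.degrees(roll) / dt
```

In use, the returned pair would be rate-limited and low-pass filtered before being streamed to the robot's linear and rotary actuators.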
Affiliation(s)
- Rohit Dey
- Mechanical and Materials Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
- Yichen Guo
- Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
- Yang Liu
- Global Institute of Future Technology, Shanghai Jiao Tong University, Shanghai, China
- Ajit Puri
- Radiology, UMass Chan Medical School, Worcester, MA, USA
- Luis Savastano
- Neurological Surgery, University of California School of Medicine, San Francisco, CA, USA
- Yihao Zheng
- Mechanical and Materials Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
- Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
- Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
3
Oshika T. Artificial Intelligence Applications in Ophthalmology. JMA J 2025;8:66-75. [PMID: 39926073] [PMCID: PMC11799668] [DOI: 10.31662/jmaj.2024-0139]
Abstract
Ophthalmology is well suited for the integration of artificial intelligence (AI) owing to its reliance on various imaging modalities, such as anterior segment photography, fundus photography, and optical coherence tomography (OCT), which generate large volumes of high-resolution digital images. These images provide rich datasets for training AI algorithms, which enables precise diagnosis and monitoring of various ocular conditions. Retinal disease management heavily relies on image recognition. Limited access to ophthalmologists in underdeveloped areas and high image volumes in developed countries make AI a promising, cost-effective solution for screening and diagnosis. In corneal diseases, differential diagnosis is critical yet challenging because of the wide range of potential etiologies. AI and diagnostic technologies offer promise for improving the accuracy and speed of these diagnoses, including the differentiation between infectious and noninfectious conditions. Smartphone imaging coupled with AI technology can advance the diagnosis of anterior segment diseases, democratizing access to eye care and providing rapid and reliable diagnostic results. Other potential areas for AI applications include cataract and vitreous surgeries as well as the use of generative AI in training ophthalmologists. While AI offers substantial benefits, challenges remain, including the need for high-quality images, accurate manual annotations, patient heterogeneity considerations, and the "black-box phenomenon". Addressing these issues is crucial for enhancing the effectiveness of AI and ensuring its successful integration into clinical practice. AI is poised to transform ophthalmology by increasing diagnostic accuracy, optimizing treatment strategies, and improving patient care, particularly in high-risk or underserved populations.
Affiliation(s)
- Tetsuro Oshika
- Department of Ophthalmology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
4
Zhang H, Yang J. Speckle decorrelation rate as a robust indicator for visualizing the therapeutic thermal field with OCT. Opt Lett 2024;49:6217-6220. [PMID: 39485451] [DOI: 10.1364/ol.538862]
Abstract
Optical coherence tomography (OCT) is evolving from a diagnostic imaging modality to one that also facilitates therapeutic procedures. However, visualizing the therapeutic thermal field during minimally invasive thermal treatments such as laser or radio frequency ablation is challenging. This difficulty arises because tissues show minimal optical property changes until they reach the coagulation threshold at approximately 50 °C. To address this, we introduce the speckle decorrelation rate as a new, to our knowledge, contrast mechanism for OCT, enhancing the visualization of the therapeutic thermal field. Through ex vivo tissue experiments on a laser ablation-OCT surveillance system, we demonstrate that the speckle decorrelation rate offers superior sensitivity to detect subtle temperature changes and is less sensitive to the selection of time intervals for decorrelation calculations compared to existing speckle decorrelation methods. Our approach, which is label-free and compatible with various OCT systems, has been validated across diverse biological tissues, showing potential to augment the precision and safety of thermal therapies. Additionally, we propose a GPU-accelerated pipeline to expedite processing time, making real-time thermal field visualization feasible.
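A minimal sketch of a decorrelation-rate computation of the sort the abstract describes, assuming a stack of repeated B-scans; the windowed averaging and the linear fit over small lags are our simplifications, and a GPU pipeline as the authors propose would replace the NumPy/SciPy calls.

```python
import numpy as np
from scipy.signal import convolve2d

def decorrelation_rate(frames, dt, max_lag=5, win=5):
    """frames: (T, Z, X) OCT intensity stack sampled at period dt (s).
    Returns a (Z, X) map of speckle decorrelation rate in 1/s."""
    f = frames - frames.mean(axis=0, keepdims=True)  # remove static component

    lags = np.arange(1, max_lag + 1)
    corr = np.empty((max_lag,) + frames.shape[1:])
    for i, lag in enumerate(lags):
        a, b = f[:-lag], f[lag:]
        num = (a * b).sum(axis=0)
        den = np.sqrt((a * a).sum(axis=0) * (b * b).sum(axis=0)) + 1e-12
        corr[i] = num / den  # normalized autocorrelation at this time lag

    # Local spatial averaging stabilizes the speckle statistics.
    kernel = np.ones((win, win)) / (win * win)
    corr = np.stack([convolve2d(c, kernel, mode="same", boundary="symm")
                     for c in corr])

    # Rate = negative slope of the correlation-vs-lag curve (per-pixel fit).
    t = (lags * dt)[:, None, None]
    tc = t - t.mean(axis=0)
    slope = (tc * (corr - corr.mean(axis=0))).sum(axis=0) / (tc * tc).sum(axis=0)
    return -slope
```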
5
Wang S, He X, Jian Z, Li J, Xu C, Chen Y, Liu Y, Chen H, Huang C, Hu J, Liu Z. Advances and prospects of multi-modal ophthalmic artificial intelligence based on deep learning: a review. Eye Vis (Lond) 2024;11:38. [PMID: 39350240] [PMCID: PMC11443922] [DOI: 10.1186/s40662-024-00405-1]
Abstract
BACKGROUND In recent years, ophthalmology has emerged as a new frontier in medical artificial intelligence (AI), with multi-modal AI in ophthalmology garnering significant attention across interdisciplinary research. This integration of various data types and models holds paramount importance, as it enables the provision of detailed and precise information for diagnosing eye and vision diseases. By leveraging multi-modal ophthalmic AI techniques, clinicians can enhance the accuracy and efficiency of diagnoses, and thus reduce the risks associated with misdiagnosis and oversight, while also enabling more precise management of eye and vision health. However, the widespread adoption of multi-modal ophthalmic AI poses significant challenges. MAIN TEXT In this review, we first comprehensively summarize the concept of modalities in the field of ophthalmology, the forms of fusion between modalities, and the progress of multi-modal ophthalmic AI technology. Finally, we discuss the challenges of current multi-modal AI technology applications in ophthalmology and feasible future research directions. CONCLUSION In the field of ophthalmic AI, evidence suggests that when utilizing multi-modal data, deep learning-based multi-modal AI technology exhibits excellent diagnostic efficacy in assisting the diagnosis of various ophthalmic diseases. Particularly, in the current era marked by the proliferation of large-scale models, multi-modal techniques represent the most promising and advantageous solution for addressing the diagnosis of various ophthalmic diseases from a comprehensive perspective. However, it must be acknowledged that numerous challenges remain before multi-modal techniques in ophthalmic AI can be effectively employed in the clinical setting.
Affiliation(s)
- Shaopan Wang
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Xin He
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Department of Ophthalmology, the First Affiliated Hospital of Xiamen University, Xiamen University, Xiamen, Fujian, China
- Zhongquan Jian
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Jie Li
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Changsheng Xu
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Yuguang Chen
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- School of Informatics, Xiamen University, Xiamen, Fujian, China
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Yuwen Liu
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Han Chen
- Department of Ophthalmology, the First Affiliated Hospital of Xiamen University, Xiamen University, Xiamen, Fujian, China
- Caihong Huang
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Jiaoyue Hu
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
- Zuguo Liu
- Institute of Artificial Intelligence, Xiamen University, Xiamen, Fujian, China
- Xiamen University Affiliated Xiamen Eye Center, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Fujian Engineering and Research Center of Eye Regenerative Medicine, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Chengyi Building, 4th Floor, 4221-122, South Xiang'an Rd, Xiamen, 361005, Fujian, China
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
6
Deng J, Qin Y. Current Status, Hotspots, and Prospects of Artificial Intelligence in Ophthalmology: A Bibliometric Analysis (2003-2023). Ophthalmic Epidemiol 2024:1-14. [PMID: 39146462] [DOI: 10.1080/09286586.2024.2373956]
Abstract
PURPOSE Artificial intelligence (AI) has gained significant attention in ophthalmology. This paper reviews, classifies, and summarizes the research literature in this field and aims to provide readers with a detailed understanding of the current status and future directions, laying a solid foundation for further research and decision-making. METHODS Literature was retrieved from the Web of Science database. Bibliometric analysis was performed using VOSviewer, CiteSpace, and the R package Bibliometrix. RESULTS The study included 3,377 publications from 4,035 institutions in 98 countries. China and the United States had the most publications. Sun Yat-sen University is a leading institution. "Translational Vision Science & Technology" published the most articles, while "Ophthalmology" had the most co-citations. Among 13,145 researchers, Ting DSW had the most publications and citations. Keywords included "Deep learning," "Diabetic retinopathy," "Machine learning," and others. CONCLUSION The study highlights the promising prospects of AI in ophthalmology. Automated eye disease screening, particularly its core technology of retinal image segmentation and recognition, has become a research hotspot. AI is also expanding into complex areas such as surgical assistance and predictive models. Multimodal AI, Generative Adversarial Networks, and ChatGPT have driven further technological innovation. However, implementing AI in ophthalmology also faces many challenges, including technical, regulatory, and ethical issues. As these challenges are overcome, we anticipate more innovative applications, paving the way for more effective and safer eye disease treatments.
Affiliation(s)
- Jie Deng
- First Clinical College of Traditional Chinese Medicine, Hunan University of Chinese Medicine, Changsha, Hunan, China
- Graduate School, Hunan University of Chinese Medicine, Changsha, Hunan, China
- YuHui Qin
- First Clinical College of Traditional Chinese Medicine, Hunan University of Chinese Medicine, Changsha, Hunan, China
- Graduate School, Hunan University of Chinese Medicine, Changsha, Hunan, China
7
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024;8:633-645. [PMID: 38280425] [DOI: 10.1016/j.oret.2024.01.018]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield imaging in fundal photography and OCT can improve assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up and head-mounted displays utilize 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review article summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape management of surgical VR diseases in the very near future and to ultimately improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Stanley S J Poh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Josh T Sia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Michelle Y T Yip
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Andrew S H Tsai
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Shu Yen Lee
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Christina Y Weng
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
- Min Kim
- Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Allen C Ho
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Cynthia A Toth
- Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, California
8
Tao Q, Liu J, Zheng Y, Yang Y, Lin C, Guang C. Evaluation of an Active Disturbance Rejection Controller for Ophthalmic Robots with Piezo-Driven Injector. Micromachines 2024;15:833. [PMID: 39064342] [PMCID: PMC11278564] [DOI: 10.3390/mi15070833]
Abstract
Retinal vein cannulation involves puncturing an occluded vessel on the micron scale. Even a single millinewton of force can cause permanent damage. An ophthalmic robot with a piezo-driven injector is precise enough to perform this delicate procedure, but the uncertain viscoelastic characteristics of the vessel make it difficult to achieve the desired contact force without harming the retina. To address this issue, this paper utilizes a viscoelastic contact model to explain the mechanical characteristics of retinal blood vessels. The uncertainty in the viscoelastic properties is treated as an internal disturbance of the contact model, and an active disturbance rejection controller is then proposed to precisely control the contact force. The experimental results show that this method can precisely adjust the contact force at the millinewton level even when the viscoelastic parameters vary significantly (up to 403.8%). The root mean square (RMS) and maximum steady-state errors are 0.32 mN and 0.41 mN, respectively. The response time is below 2.51 s with no obvious overshoot.
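For readers unfamiliar with active disturbance rejection control, the sketch below shows its standard first-order form applied to this force-regulation problem: an extended state observer lumps the uncertain viscoelastic response into a single disturbance state that the control law cancels. The plant gain and bandwidths are illustrative tuning assumptions, not the paper's values.

```python
# First-order ADRC for millinewton contact-force regulation (illustrative).
class ADRC1:
    def __init__(self, b0, wo, wc, dt):
        self.b0, self.dt = b0, dt                   # nominal plant gain, sample period
        self.kp = wc                                # controller bandwidth -> P gain
        self.beta1, self.beta2 = 2 * wo, wo ** 2    # observer gains from bandwidth wo
        self.z1 = 0.0                               # estimated contact force
        self.z2 = 0.0                               # estimated total disturbance
        self.u = 0.0                                # last actuator command

    def update(self, f_meas, f_ref):
        # Extended state observer: model is f_dot = z2 + b0 * u, so z2 absorbs
        # the unmodeled viscoelastic dynamics of the vessel wall.
        e = f_meas - self.z1
        self.z1 += self.dt * (self.z2 + self.b0 * self.u + self.beta1 * e)
        self.z2 += self.dt * (self.beta2 * e)
        # Disturbance-rejecting control law (e.g., piezo injector velocity).
        self.u = (self.kp * (f_ref - self.z1) - self.z2) / self.b0
        return self.u
```

Setting the observer bandwidth a few times higher than the controller bandwidth is the usual bandwidth-parameterized tuning, so the disturbance estimate converges faster than the closed loop it corrects.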
Affiliation(s)
- Qiannan Tao
- School of Energy and Power Engineering, Beihang University, Beijing 100191, China
- Jianjun Liu
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
- Yu Zheng
- College of Automation and College of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Yang Yang
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
- Chuang Lin
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
- Chenhan Guang
- School of Mechanical and Materials Engineering, North China University of Technology, Beijing 100144, China
9
Wang Y, Wei S, Zuo R, Kam M, Opfermann JD, Sunmola I, Hsieh MH, Krieger A, Kang JU. Automatic and real-time tissue sensing for autonomous intestinal anastomosis using hybrid MLP-DC-CNN classifier-based optical coherence tomography. Biomed Opt Express 2024;15:2543-2560. [PMID: 38633079] [PMCID: PMC11019703] [DOI: 10.1364/boe.521652]
Abstract
Anastomosis is a common and critical part of reconstructive procedures within gastrointestinal, urologic, and gynecologic surgery. The use of autonomous surgical robots such as the smart tissue autonomous robot (STAR) system demonstrates improved efficiency and consistency of laparoscopic small bowel anastomosis over the current da Vinci surgical system. However, the STAR workflow requires auxiliary manual monitoring during the suturing procedure to avoid missed or wrong stitches. To eliminate this monitoring task for the operators, we integrated an optical coherence tomography (OCT) fiber sensor with the suture tool and developed an automatic tissue classification algorithm for detecting missed or wrong stitches in real time. The classification results were updated and sent to the control loop of the STAR robot in real time. The suture tool was guided to approach the target by a dual-camera system. If the tissue inside the tool jaw was inconsistent with the desired suture pattern, a warning message would be generated. The proposed hybrid multilayer perceptron dual-channel convolutional neural network (MLP-DC-CNN) classification platform can automatically classify eight different abdominal tissue types that require different suture strategies for anastomosis. In the MLP, numerous handcrafted features (∼1955) were utilized, including optical properties and morphological features of one-dimensional (1D) OCT A-line signals. In the DC-CNN, intensity-based features and depth-resolved tissue attenuation coefficients were fully exploited. A decision fusion technique was applied to leverage the information collected from both classifiers to further increase the accuracy. The algorithm was evaluated on 69,773 testing A-line data. The results showed that our model can classify the 1D OCT signals of small bowels in real time with an accuracy of 90.06%, a precision of 88.34%, and a sensitivity of 87.29%. The refresh rate of the displayed A-line signals was set to 300 Hz, the maximum sensing depth of the fiber was 3.6 mm, and the running time of the image processing algorithm was ∼1.56 s for 1,024 A-lines. The proposed fully automated tissue sensing model outperformed the single classifiers of CNN, MLP, or SVM with optimized architectures, showing the complementarity of different feature sets and network architectures in classifying intestinal OCT A-line signals. It can potentially reduce the manual involvement in robotic laparoscopic surgery, which is a crucial step towards a fully autonomous STAR system.
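As one concrete example of the depth-resolved features mentioned above, the sketch below computes per-pixel attenuation coefficients from a linear-intensity A-line using the commonly cited single-scattering estimator of Vermeer et al.; treating this as the paper's exact feature definition would be an assumption on our part.

```python
# Depth-resolved attenuation from an OCT A-line (illustrative estimator).
import numpy as np

def depth_resolved_attenuation(a_line, pixel_size_mm):
    """a_line: 1D linear-scale intensity profile. Returns attenuation in 1/mm
    via mu[i] = I[i] / (2 * dz * sum_{j>i} I[j])."""
    tail = np.cumsum(a_line[::-1])[::-1] - a_line   # sum of I[j] for j > i
    return a_line / (2.0 * pixel_size_mm * (tail + 1e-12))
```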
Affiliation(s)
- Yaning Wang
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Shuwen Wei
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Ruizhi Zuo
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael Kam
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Justin D. Opfermann
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Idris Sunmola
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael H. Hsieh
- Division of Urology, Children's National Hospital, 111 Michigan Ave NW, Washington, D.C. 20010, USA
- Axel Krieger
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Jin U. Kang
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
10
Ma R, Tao Y, Khodeiry MM, Liu X, Mendoza X, Liu Y, Shyu ML, Lee RK. An Extensive-Form Game Paradigm for Visual Field Testing via Deep Reinforcement Learning. IEEE Trans Biomed Eng 2024;71:514-523. [PMID: 37616138] [DOI: 10.1109/tbme.2023.3308475]
Abstract
Glaucoma is the leading cause of irreversible but preventable blindness worldwide, and visual field testing is an important tool for its diagnosis and monitoring. Testing using standard visual field thresholding procedures is time-consuming, and prolonged test duration leads to patient fatigue and decreased test reliability. Different visual field testing algorithms have been developed to shorten testing time while maintaining accuracy. However, the performance of these algorithms depends heavily on prior knowledge and manually crafted rules that determine the intensity of each light stimulus as well as the termination criteria, which is suboptimal. We leverage deep reinforcement learning to find improved decision strategies for visual field testing. In our proposed algorithms, multiple intelligent agents are employed to interact with the patient in an extensive-form game fashion, with each agent controlling the test on one of the testing locations in the patient's visual field. Through training, each agent learns an optimized policy that determines the intensities of light stimuli and the termination criteria, which minimizes the error in sensitivity estimation and test duration at the same time. In simulation experiments, we compare the performance of our algorithms against baseline visual field testing algorithms and show that our algorithms achieve a better trade-off between estimation accuracy and test duration. By retaining testing accuracy with reduced test duration, our algorithms improve test reliability, clinic efficiency, and patient satisfaction, and translationally affect clinical outcomes.
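A toy sketch of the extensive-form setup described above (ours, not the paper's model): one agent per visual-field location chooses the next stimulus intensity or terminates. A bisection rule stands in for the learned policy, and the frequency-of-seeing response model is an illustrative assumption.

```python
# Toy per-location testing loop with a stand-in policy.
import numpy as np

rng = np.random.default_rng(0)

def patient_responds(stim_db, true_sens_db, slope=1.0, fp=0.05, fn=0.05):
    """Frequency-of-seeing curve: probability of 'seen' at stimulus stim_db."""
    p = 1.0 / (1.0 + np.exp(-(true_sens_db - stim_db) / slope))
    return rng.random() < (fp + (1.0 - fp - fn) * p)

class LocationAgent:
    """Stand-in for a trained policy: act() -> stimulus in dB, or None to stop."""
    def __init__(self, lo=0.0, hi=40.0):
        self.lo, self.hi = lo, hi
    def act(self):
        if self.hi - self.lo < 2.0:        # learned termination criterion here
            return None
        return 0.5 * (self.lo + self.hi)   # learned intensity policy here
    def observe(self, stim, seen):
        if seen:
            self.lo = stim                 # sensitivity is at least stim
        else:
            self.hi = stim

agent, n = LocationAgent(), 0
while (stim := agent.act()) is not None:
    agent.observe(stim, patient_responds(stim, true_sens_db=28.0))
    n += 1
print(f"estimate = {(agent.lo + agent.hi) / 2:.1f} dB after {n} presentations")
```

In the paper's formulation, the stimulus choices and termination criteria are learned jointly to trade estimation error against test duration; the bisection here only illustrates the interaction loop.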
11
Feng X, Zhang X, Shi X, Li L, Wang S. ST-ITEF: Spatio-Temporal Intraoperative Task Estimating Framework to recognize surgical phase and predict instrument path based on multi-object tracking in keratoplasty. Med Image Anal 2024;91:103026. [PMID: 37976868] [DOI: 10.1016/j.media.2023.103026]
Abstract
Computer-assisted cognitive guidance of surgical robots through computer vision is a promising direction that could improve both operation accuracy and the level of autonomy. In this paper, multiple-object segmentation and feature extraction from that segmentation are combined to determine and predict surgical manipulation. A novel three-stage Spatio-Temporal Intraoperative Task Estimating Framework is proposed, with a quantitative expression derived from ophthalmologists' visual information processing, and with multi-object tracking of the surgical instruments and human corneas involved in keratoplasty. In estimating the intraoperative workflow, quantifying the operation parameters is still an open challenge. This problem is tackled by extracting key geometric properties from the multi-object segmentation and calculating the relative positions of instruments and corneas. A decision framework is further proposed, based on prior geometric properties, to recognize the current surgical phase and predict the instrument path for each phase. Our framework is tested and evaluated on real human keratoplasty videos. The optimized DeepLabV3 with image filtration achieved competitive class IoU in the segmentation task, and the mean phase Jaccard index reached 55.58% for phase recognition. Both the qualitative and quantitative results indicate that our framework can achieve accurate segmentation and surgical phase recognition under complex disturbance. The Intraoperative Task Estimating Framework thus has high potential to guide surgical robots in clinical practice.
Affiliation(s)
- Xiaojing Feng
- School of Mechanical Engineering at Xi'an Jiaotong University, 28 Xianning West Road, Xi'an 710049, China
- Xiaodong Zhang
- School of Mechanical Engineering at Xi'an Jiaotong University, 28 Xianning West Road, Xi'an 710049, China
- Xiaojun Shi
- School of Mechanical Engineering at Xi'an Jiaotong University, 28 Xianning West Road, Xi'an 710049, China
- Li Li
- Department of Ophthalmology at the First Affiliated Hospital of Xi'an Jiaotong University, 277 Yanta West Road, Xi'an 710061, China
- Shaopeng Wang
- School of Mechanical Engineering at Xi'an Jiaotong University, 28 Xianning West Road, Xi'an 710049, China
12
Fan Y, Liu S, Gao E, Guo R, Dong G, Li Y, Gao T, Tang X, Liao H. The LMIT: Light-mediated minimally-invasive theranostics in oncology. Theranostics 2024;14:341-362. [PMID: 38164160] [PMCID: PMC10750201] [DOI: 10.7150/thno.87783]
Abstract
Minimally-invasive diagnosis and therapy have gradually become a trend and research hotspot in current medical applications. The integration of intraoperative diagnosis and treatment, so-called minimally-invasive theranostics (MIT), is an important development direction for real-time detection and minimally-invasive diagnosis and therapy that reduce mortality and improve patients' quality of life. Light is an important theranostic tool for the treatment of cancerous tissues. Light-mediated minimally-invasive theranostics (LMIT) is a novel evolutionary technology that integrates diagnosis and therapeutics for the less invasive treatment of diseased tissues. Intelligent theranostics would promote precision surgery based on the optical characterization of cancerous tissues. Furthermore, MIT also requires the assistance of smart medical devices or robots, and optical multimodality lays a solid foundation for intelligent MIT. In this review, we summarize the state of the art in optical MIT, or LMIT, in oncology. Multimodal optical image-guided intelligent treatment is another focus. Intraoperative imaging and real-time analysis-guided optical treatment are also systematically discussed. Finally, the potential challenges and future perspectives of intelligent optical MIT are discussed.
Affiliation(s)
- Yingwei Fan
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Shuai Liu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Enze Gao
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Rui Guo
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Guozhao Dong
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Yangxi Li
- Dept. of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Tianxin Gao
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Xiaoying Tang
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hongen Liao
- Dept. of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
13
Nuliqiman M, Xu M, Sun Y, Cao J, Chen P, Gao Q, Xu P, Ye J. Artificial Intelligence in Ophthalmic Surgery: Current Applications and Expectations. Clin Ophthalmol 2023;17:3499-3511. [PMID: 38026589] [PMCID: PMC10674717] [DOI: 10.2147/opth.s438127]
Abstract
Artificial Intelligence (AI) has found rapidly growing applications in ophthalmology, achieving robust recognition and classification for most kinds of ocular disease. Ophthalmic surgery is among the most delicate forms of microsurgery, requiring great fineness and stability from surgeons. The large demand for AI-assisted ophthalmic surgery will be an important factor in accelerating precision medicine. In clinical practice, it is instrumental to review and update the considerable evidence on current AI technologies investigated for ophthalmic surgery, which bear on both the progression and the innovation of precision medicine. Bibliographic databases including PubMed and Google Scholar were searched using keywords such as "ophthalmic surgery", "surgical selection", "candidate screening", and "robot-assisted surgery" to find articles about AI technology published from 2018 to 2023. Apart from editorials and letters to the editor, all types of approaches were considered. In this paper, we provide an up-to-date review of artificial intelligence in eye surgery, with a specific focus on its application to candidate screening, surgery selection, postoperative prediction, and real-time intraoperative guidance.
Affiliation(s)
- Maimaiti Nuliqiman
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
- Mingyu Xu
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
- Yiming Sun
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
- Jing Cao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
- Pengjie Chen
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
- Qi Gao
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
- Peifang Xu
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, People's Republic of China
14
Wang T, Li H, Pu T, Yang L. Microsurgery Robots: Applications, Design, and Development. Sensors (Basel) 2023;23:8503. [PMID: 37896597] [PMCID: PMC10611418] [DOI: 10.3390/s23208503]
Abstract
Microsurgical techniques have been widely utilized in various surgical specialties, such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise surgical tool manipulation on a small scale. In microsurgery, operations on delicate vessels or tissues place exceptionally high demands on surgeons' skills, leading to a steep learning curve and lengthy training before surgeons can perform microsurgical procedures with quality outcomes. The microsurgery robot (MSR), which can improve surgeons' operation skills through various functions, has received extensive research attention in the past three decades. There have been many review papers summarizing the research on MSR for specific surgical specialties. However, an in-depth review of the relevant technologies used in MSR systems is limited in the literature. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR from a developmental perspective: from basic structural mechanism design, to perception and human-machine interaction methods, and further to the ability to achieve a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, this paper aims to provide readers with a comprehensive understanding of the current state of MSR research and identify potential directions for future development in MSR.
Affiliation(s)
- Tiexin Wang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haoyu Li
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Tanhong Pu
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
15
Pan-Doh N, Sikder S, Woreta FA, Handa JT. Using the language of surgery to enhance ophthalmology surgical education. Surg Open Sci 2023;14:52-59. [PMID: 37528917] [PMCID: PMC10387608] [DOI: 10.1016/j.sopen.2023.07.002]
Abstract
Background Currently, surgical education utilizes a combination of the apprentice model, wet-lab training, and simulation, but due to reliance on subjective data, the quality of teaching and assessment can be variable. The "language of surgery," an established concept in engineering literature whose incorporation into surgical education has been limited, is defined as the description of each surgical maneuver using quantifiable metrics. This concept is different from the traditional notion of surgical language, generally thought of as the qualitative definitions and terminology used by surgeons. Methods A literature search was conducted through April 2023 using MEDLINE/PubMed using search terms to investigate wet-lab, virtual simulators, and robotics in ophthalmology, along with the language of surgery and surgical education. Articles published before 2005 were mostly excluded, although a few were included on a case-by-case basis. Results Surgical maneuvers can be quantified by leveraging technological advances in virtual simulators, video recordings, and surgical robots to create a language of surgery. By measuring and describing maneuver metrics, the learning surgeon can adjust surgical movements in an appropriately graded fashion that is based on objective and standardized data. The main contribution is outlining a structured education framework that details how surgical education could be improved by incorporating the language of surgery, using ophthalmology surgical education as an example. Conclusion By describing each surgical maneuver in quantifiable, objective, and standardized terminology, a language of surgery can be created that can be used to learn, teach, and assess surgical technical skill with an approach that minimizes bias. Key message The "language of surgery," defined as the quantification of each surgical movement's characteristics, is an established concept in the engineering literature. Using ophthalmology surgical education as an example, we describe a structured education framework based on the language of surgery to improve surgical education. Classifications Surgical education, robotic surgery, ophthalmology, education standardization, computerized assessment, simulations in teaching. Competencies Practice-Based Learning and Improvement.
Affiliation(s)
- Nathan Pan-Doh
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shameema Sikder
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Fasika A. Woreta
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- James T. Handa
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
16
Ebrahimi A, Sefati S, Gehlbach P, Taylor RH, Iordachita I. Simultaneous Online Registration-Independent Stiffness Identification and Tip Localization of Surgical Instruments in Robot-assisted Eye Surgery. IEEE Trans Robot 2023;39:1373-1387. [PMID: 37377922] [PMCID: PMC10292740] [DOI: 10.1109/tro.2022.3201393]
Abstract
Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe steady-hand manipulation. Efficient assistance from the robots relies heavily on accurate sensing of surgery states (e.g., instrument tip localization and tool-to-tissue interaction forces). Many of the existing tool tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach that combines vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms to provide online estimates of instrument stiffness (least squares and adaptive). The estimations are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and fiber Bragg grating (FBG) sensor measurements. This is accomplished using a Kalman filtering (KF) approach to improve the deflected instrument tip position estimations during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimations are used, the instrument tip localization results surpass those obtained from preoperative offline stiffness calibrations.
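A 1-D sketch of the two ingredients named above, under our simplifying assumptions: a recursive least-squares update for the instrument compliance (the registration-independent stiffness estimate) and a scalar Kalman step fusing the forward-kinematics tip prediction with the force-implied deflection. The paper's full 3-D formulation and noise models are not reproduced here.

```python
# Online compliance estimation + tip-position fusion (1-D illustration).
class RLSCompliance:
    """Recursive least squares with forgetting for deflection = c * force."""
    def __init__(self, c0=0.0, p0=1.0, lam=0.99):
        self.c, self.P, self.lam = c0, p0, lam
    def update(self, force, deflection_obs):
        k = self.P * force / (self.lam + force * self.P * force)
        self.c += k * (deflection_obs - self.c * force)
        self.P = (self.P - k * force * self.P) / self.lam
        return self.c

def kf_tip_update(x, P, x_fwk, force, c_hat, q=1e-4, r=1e-2):
    """One scalar Kalman step. x: current tip estimate; x_fwk: undeflected tip
    position from robot forward kinematics; force: FBG-sensed lateral force.
    Measurement model: deflected tip = x_fwk + c_hat * force."""
    x_pred, P_pred = x, P + q                  # random-walk process model
    z = x_fwk + c_hat * force                  # force-implied tip position
    K = P_pred / (P_pred + r)
    return x_pred + K * (z - x_pred), (1.0 - K) * P_pred
```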
Affiliation(s)
- Ali Ebrahimi
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Shahriar Sefati
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
- Russell H Taylor
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Iulian Iordachita
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
17
Tian Y, Draelos M, McNabb RP, Hauser K, Kuo AN, Izatt JA. Optical coherence tomography refraction and optical path length correction for image-guided corneal surgery. Biomed Opt Express 2022;13:5035-5049. [PMID: 36187253] [PMCID: PMC9484446] [DOI: 10.1364/boe.464762]
Abstract
Optical coherence tomography (OCT) may be useful for guidance of ocular microsurgeries such as deep anterior lamellar keratoplasty (DALK), a form of corneal transplantation that requires delicate insertion of a needle into the stroma to approximately 90% of the corneal thickness. However, visualization of the true shape of the cornea and the surgical tool during surgery is impaired in raw OCT volumes due to both light refraction at the corneal boundaries and geometric optical path length distortion arising from the group velocity of broadband OCT light in tissue. Therefore, uncorrected B-scans or volumes may not provide a visualization accurate enough for reliable surgical guidance. In this article, we introduce a method to correct for both refraction and optical path length distortion in 3D in order to reconstruct corrected OCT B-scans in both natural corneas and corneas deformed by needle insertion. We delineate the separate roles of the phase and group indices in OCT image distortion correction, and introduce a method to estimate the phase index from the group index, which is readily measured in samples. Using the measured group index and estimated phase index of human corneas at 1060 nm, we demonstrate quantitatively accurate geometric reconstructions of the true cornea and inserted needle shape during simulated DALK surgeries.
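Both corrections have compact textbook forms, sketched below under our assumptions: vector Snell's law with the phase index bends the ray at the segmented anterior surface, and dividing the measured optical path length by the group index recovers geometric distance. The index values are placeholders near the 1060 nm band, not the paper's measured values.

```python
# Refraction + path-length correction for one OCT sample (illustrative).
import numpy as np

def refract(d, n_surf, n1, n2):
    """Vector Snell's law: unit ray d crossing a surface with unit normal
    n_surf (pointing toward the incident side), from index n1 into n2."""
    d = d / np.linalg.norm(d)
    n_surf = n_surf / np.linalg.norm(n_surf)
    eta = n1 / n2
    cos_i = -np.dot(n_surf, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n_surf

def true_point(p_surf, d_in, n_surf, opl_in_tissue, n_phase=1.38, n_group=1.40):
    """Map an OCT sample to its geometric location: bend the ray with the
    phase index, then advance by OPL / n_group along the refracted ray.
    Index values here are placeholder assumptions for corneal tissue."""
    d_t = refract(d_in, n_surf, 1.0, n_phase)
    return p_surf + (opl_in_tissue / n_group) * d_t
```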
Affiliation(s)
- Yuan Tian
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Mark Draelos
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Ryan P. McNabb
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Kris Hauser
- Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Anthony N. Kuo
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
- Joseph A. Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
18
Hao Q, Xu F, Chen L, Hui P, Li Y. Hierarchical Multi-agent Model for Reinforced Medical Resource Allocation with Imperfect Information. ACM Trans Intell Syst Technol 2022. [DOI: 10.1145/3552436]
Abstract
Facing the outbreak of COVID-19, shortages in medical resources have become increasingly prominent. Therefore, efficient strategies for medical resource allocation are urgently called for. However, conventional rule-based methods from public health experts have limited capability in dealing with the complex and dynamic pandemic spreading situation. Besides, model-based optimization methods such as dynamic programming (DP) fail to work, since we cannot obtain a precise model of the real-world situation most of the time. On the other hand, model-free reinforcement learning (RL) is powerful for decision making, but three key challenges exist in solving this problem via RL: (1) the complex situation and countless choices for decision making in the real world; (2) only imperfect information is available due to the latency of pandemic spreading; (3) limitations on conducting experiments in the real world, since we cannot set off pandemic outbreaks arbitrarily. In this paper, we propose a hierarchical reinforcement learning framework with several specially designed components. We design a decomposed action space with a corresponding training algorithm to deal with the countless choices and ensure efficient, real-time strategies. We design a recurrent neural network-based framework to utilize the imperfect information obtained from the environment. We also design a multi-agent voting method, which modifies the decision-making process to account for randomness during model training and thus improves performance. We build a pandemic spreading simulator based on real-world data to serve as the experimental platform. We conduct extensive experiments, and the results show that our method outperforms all baselines, reducing infections and deaths by 14.25% on average without the multi-agent voting method and by up to 15.44% with it.
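The multi-agent voting step has a particularly simple decision-time form. A toy sketch, assuming an `act(state)` interface for agents trained with different seeds:

```python
# Majority voting across independently trained agents (illustrative).
from collections import Counter

def voted_action(agents, state):
    """Each agent proposes an action for the current state; return the mode,
    which damps the randomness any single training run introduces."""
    proposals = [agent.act(state) for agent in agents]
    return Counter(proposals).most_common(1)[0][0]
```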
Affiliation(s)
- Qianyue Hao
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University
- Fengli Xu
- Knowledge Lab, Department of Sociology, University of Chicago
- Lin Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology
- Pan Hui
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology
- Yong Li
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University
19
Muijzer MB, Heslinga FG, Couwenberg F, Noordmans HJ, Oahalou A, Pluim JPW, Veta M, Wisse RPL. Automatic evaluation of graft orientation during Descemet membrane endothelial keratoplasty using intraoperative OCT. Biomed Opt Express 2022;13:2683-2694. [PMID: 35774322] [PMCID: PMC9203112] [DOI: 10.1364/boe.446519]
Abstract
Correct Descemet Membrane Endothelial Keratoplasty (DMEK) graft orientation is imperative for the success of DMEK surgery, but intraoperative evaluation can be challenging. We present a method for automatic evaluation of graft orientation in intraoperative optical coherence tomography (iOCT) that exploits the natural rolling behavior of the graft. The method encompasses a deep learning model for graft segmentation, post-processing to obtain a smooth line representation, and curvature calculations to determine graft orientation. For an independent test set of 100 iOCT frames, the automatic method correctly identified graft orientation in 78 frames and obtained an area under the receiver operating characteristic curve (AUC) of 0.84. When the automatic segmentation was replaced with manual masks, the AUC increased to 0.92, corresponding to an accuracy of 86%. In comparison, two corneal specialists correctly identified graft orientation in 90% and 91% of the iOCT frames.
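The curvature step lends itself to a toy illustration. The sketch below, which does not reproduce the paper's segmentation network or smoothing pipeline, fits a quadratic to a segmented graft centerline (here, synthetic points) and uses the sign of the curvature to call the orientation; the orientation labels and the `graft_orientation` helper are illustrative assumptions.

```python
# Toy sketch of curvature-based orientation: fit z = a*x^2 + b*x + c to
# centerline points from an iOCT B-scan and read orientation off sign(a).
# The data and labels here are synthetic stand-ins, not the paper's method.
import numpy as np

def graft_orientation(xs: np.ndarray, zs: np.ndarray) -> str:
    """Classify graft orientation from centerline points (lateral x, depth z).

    A quadratic fit approximates the rolled graft; the sign of the leading
    coefficient (hence of the curvature) separates the two cases. The
    label strings are illustrative, not clinical terminology.
    """
    a, _, _ = np.polyfit(xs, zs, deg=2)
    return "correct orientation" if a > 0 else "upside-down"

xs = np.linspace(-1.0, 1.0, 50)
zs = 0.3 * xs**2 + 0.01 * np.random.default_rng(1).normal(size=xs.size)
print(graft_orientation(xs, zs))
```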
Collapse
Affiliation(s)
- Marc B. Muijzer
- Utrecht Cornea Research Group, Ophthalmology Department, University Medical Center Utrecht, Heidelberglaan 100, 3584CX, Utrecht, The Netherlands
- Contributed equally
| | - Friso G. Heslinga
- Department of Biomedical Engineering, Eindhoven University of Technology, Postbus 513, 5600 MB, Eindhoven, The Netherlands
- Contributed equally
| | - Floor Couwenberg
- University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
| | - Herke-Jan Noordmans
- Medical Technical and Clinical Physics Department, University Medical Center Utrecht, Heidelberglaan 100, 3584CX, Utrecht, The Netherlands
| | | | - Josien P. W. Pluim
- Department of Biomedical Engineering, Eindhoven University of Technology, Postbus 513, 5600 MB, Eindhoven, The Netherlands
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Heidelberglaan 100, 3584CX, Utrecht, The Netherlands
| | - Mitko Veta
- Department of Biomedical Engineering, Eindhoven University of Technology, Postbus 513, 5600 MB, Eindhoven, The Netherlands
- Contributed equally
| | - Robert P. L. Wisse
- Utrecht Cornea Research Group, Ophthalmology Department, University Medical Center Utrecht, Heidelberglaan 100, 3584CX, Utrecht, The Netherlands
- Contributed equally
| |
Collapse
|
20
|
Edwards W, Tang G, Tian Y, Draelos M, Izatt J, Kuo A, Hauser K. Data-Driven Modelling and Control for Robot Needle Insertion in Deep Anterior Lamellar Keratoplasty. IEEE Robot Autom Lett 2022; 7:1526-1533. [PMID: 37090091 PMCID: PMC10117280 DOI: 10.1109/lra.2022.3140458] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Deep anterior lamellar keratoplasty (DALK) is a technique for cornea transplantation which is associated with reduced patient morbidity. DALK has been explored as a potential application of robot microsurgery because the small scales, fine control requirements, and difficulty of visualization make it very challenging for human surgeons to perform. We address the problem of modelling the small scale interactions between the surgical tool and the cornea tissue to improve the accuracy of needle insertion, since accurate placement within 5% of target depth has been associated with more reliable clinical outcomes. We develop a data-driven autoregressive dynamic model of the tool-tissue interaction and a model predictive controller to guide robot needle insertion. In an ex vivo model, our controller significantly improves the accuracy of needle positioning by more than 40% compared to prior methods.
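The modelling-plus-control pipeline can be illustrated in miniature. The following sketch fits a small autoregressive model of needle depth versus commanded insertion from synthetic data, then picks the next command by a one-step model-predictive grid search; the model order, cost, dynamics, and all names are placeholder assumptions, not the paper's identified system or controller.

```python
# Illustrative sketch of data-driven autoregressive modelling + MPC for
# needle insertion, on synthetic 1-D "tool-tissue" data (assumed dynamics).
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: depth lags the command through a simple compliance model.
u = rng.uniform(0, 1, 200)               # commanded advance per step
d = np.zeros(201)
for t in range(200):
    d[t + 1] = 0.8 * d[t] + 0.15 * u[t]  # "unknown" true dynamics

# Fit the AR model d[t+1] ~ a*d[t] + b*u[t] by least squares.
X = np.stack([d[:-1], u], axis=1)
(a, b), *_ = np.linalg.lstsq(X, d[1:], rcond=None)

def mpc_step(depth: float, target: float, horizon: int = 5) -> float:
    """Grid-search the first command of a constant-input horizon rollout."""
    best_u, best_cost = 0.0, np.inf
    for cand in np.linspace(0.0, 1.0, 21):
        pred = depth
        for _ in range(horizon):
            pred = a * pred + b * cand   # roll the fitted model forward
        cost = (pred - target) ** 2
        if cost < best_cost:
            best_u, best_cost = cand, cost
    return best_u

print("next command:", round(mpc_step(depth=0.2, target=0.5), 3))
```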
Collapse
Affiliation(s)
- William Edwards
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
| | - Gao Tang
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
| | - Yuan Tian
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Mark Draelos
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Joseph Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Anthony Kuo
- Department of Ophthalmology, Duke University, Durham, NC 27710, USA
| | - Kris Hauser
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
| |
Collapse
|
21
|
Ebrahimi A, Urias MG, Patel N, Taylor RH, Gehlbach P, Iordachita I. Adaptive Control Improves Sclera Force Safety in Robot-Assisted Eye Surgery: A Clinical Study. IEEE Trans Biomed Eng 2021; 68:3356-3365. [PMID: 33822717 PMCID: PMC8492795 DOI: 10.1109/tbme.2021.3071135] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The integration of robotics into retinal microsurgery reduces the surgeon's perception of tool-to-tissue interaction forces. This blunting of human tactile sensory input, which is due to the inflexible mass and large inertia of the robotic arm compared to the milli-Newton scale of the interaction forces and the fragile tissues involved in ophthalmic surgery, identifies a potential iatrogenic risk during robotic eye surgery. In this paper, we evaluate two variants of an adaptive force control scheme implemented on the Steady-Hand Eye Robot (SHER) that are intended to mitigate the risk of unsafe scleral forces. The present study enrolled ten retina fellows and ophthalmology residents in a simulated procedure that asked the trainees to follow retinal vessels in a model retina surgery environment. For this purpose, we developed a force-sensing instrument, equipped with Fiber Bragg Gratings (FBGs), that attaches to the robot. A piezo-actuated linear stage applied random lateral motions to the eyeball phantom to simulate disturbances during surgery. The SHER and all of its dependencies were set up in an operating room in the Wilmer Eye Institute at the Johns Hopkins Hospital. The clinicians conducted robot-assisted experiments with the adaptive controls incorporated, as well as freehand manipulations. The results indicate that the Adaptive Norm Control (ANC) method is able to maintain scleral forces at predetermined safe levels better than even freehand manipulation. Novice clinicians in robot training, however, subjectively preferred freehand maneuvers over robotic manipulation. Clinician preferences once highly skilled with the robot were not assessed in this study.
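As a schematic stand-in for the force-safety behavior described above (not the paper's ANC law), the sketch below attenuates the commanded tool velocity as the measured scleral force norm approaches a predefined safe limit. The limit value, the linear attenuation, and the `limit_velocity` helper are all illustrative assumptions.

```python
# Barrier-style velocity attenuation keyed to the scleral force norm: a
# simple stand-in for adaptive force control, not the ANC method itself.
import numpy as np

F_SAFE = 120e-3  # assumed safe scleral force limit in newtons (illustrative)

def limit_velocity(v_cmd: np.ndarray, f_scleral: np.ndarray) -> np.ndarray:
    """Scale the commanded velocity by how close the FBG-measured scleral
    force is to the safety limit; stop entirely at or beyond the limit."""
    margin = 1.0 - np.linalg.norm(f_scleral) / F_SAFE
    gain = max(0.0, margin)          # linear attenuation toward zero
    return gain * v_cmd

v = np.array([1.0, 0.0, 0.5])        # commanded velocity, mm/s
f = np.array([0.05, 0.08, 0.0])      # measured scleral force, N
print("attenuated velocity:", limit_velocity(v, f))
```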
Collapse
|
22
|
|
23
|
Elder B, Zou Z, Ghosh S, Silverberg O, Greenwood TE, Demir E, Su VSE, Pak OS, Kong YL. A 3D-Printed Self-Learning Three-Linked-Sphere Robot for Autonomous Confined-Space Navigation. ADVANCED INTELLIGENT SYSTEMS (WEINHEIM AN DER BERGSTRASSE, GERMANY) 2021; 3:2170064. [PMID: 35356413 PMCID: PMC8963778 DOI: 10.1002/aisy.202100039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Indexed: 06/14/2023]
Abstract
Reinforcement learning control methods can impart robots with the ability to discover effective behaviors, reducing their modeling and sensing requirements and enabling them to adapt to environmental changes. However, it remains challenging for a robot to achieve navigation in confined and dynamic environments, which are characteristic of a broad range of biomedical applications, such as endoscopy with ingestible electronics. Herein, a compact, 3D-printed three-linked-sphere robot, synergistically integrated with a reinforcement learning algorithm, that can perform adaptable, autonomous crawling in a confined channel is demonstrated. The scalable robot consists of three equally sized, linearly coupled spheres, whose extensions and contractions in specific sequences dictate its navigation. Bidirectional locomotion across frictional surfaces in open and confined spaces, without prior knowledge of the environment, is also demonstrated. The synergistic integration of a highly scalable robotic apparatus and a model-free reinforcement learning control strategy can enable autonomous navigation in a broad range of dynamic and confined environments. This capability could enable sensing, imaging, and surgical processes in previously inaccessible confined regions of the human body.
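The self-learning gait idea can be captured in a tabular toy. In the sketch below, the state is the (contracted/extended) status of the robot's two links, actions toggle one link, and the reward is the body displacement produced by that stroke; the hand-made displacement table is an assumed stand-in for the real frictional physics, and the whole example is an illustration of model-free gait discovery, not the paper's algorithm.

```python
# Tabular Q-learning on a toy two-link (three-sphere) crawler: the agent
# discovers the cyclic stroke sequence that yields net forward motion.
import numpy as np

rng = np.random.default_rng(3)

# DISP[state][action]: assumed displacements, chosen so the cyclic stroke
# (contract link 0, contract link 1, extend link 0, extend link 1) wins.
DISP = {
    (1, 1): [+0.2, -0.1],
    (0, 1): [-0.2, +0.3],
    (0, 0): [+0.1, -0.3],
    (1, 0): [-0.1, +0.4],
}

def step(state, action):
    new = list(state)
    new[action] ^= 1                  # toggle the chosen link
    return tuple(new), DISP[state][action]

Q = {s: np.zeros(2) for s in DISP}
state, eps, alpha, gamma = (1, 1), 0.2, 0.5, 0.9

for _ in range(2000):                 # epsilon-greedy Q-learning updates
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[state]))
    nxt, r = step(state, a)
    Q[state][a] += alpha * (r + gamma * Q[nxt].max() - Q[state][a])
    state = nxt

state, gait = (1, 1), []              # greedy rollout of the learned gait
for _ in range(6):
    a = int(np.argmax(Q[state]))
    state, _ = step(state, a)
    gait.append(a)
print("learned stroke sequence:", gait)
```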
Collapse
Affiliation(s)
- Brian Elder
- Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
| | - Zonghao Zou
- Department of Mechanical Engineering, Santa Clara University, Santa Clara, CA 95053, USA
| | - Samannoy Ghosh
- Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
| | - Oliver Silverberg
- Department of Mechanical Engineering, Santa Clara University, Santa Clara, CA 95053, USA
| | - Taylor E Greenwood
- Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
| | - Ebru Demir
- Department of Mechanical Engineering, Santa Clara University, Santa Clara, CA 95053, USA
| | - Vivian Song-En Su
- Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
| | - On Shun Pak
- Department of Mechanical Engineering, Santa Clara University, Santa Clara, CA 95053, USA
| | - Yong Lin Kong
- Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
| |
Collapse
|