1. Kim JW, Wei S, Zhang P, Gehlbach P, Kang JU, Iordachita I, Kobilarov M. Towards Autonomous Retinal Microsurgery Using RGB-D Images. IEEE Robot Autom Lett 2024;9:3807-3814. PMID: 39309968; PMCID: PMC11415253; DOI: 10.1109/lra.2024.3368192.
Abstract
Retinal surgery is a challenging procedure requiring precise manipulation of fragile retinal tissue, often at the scale of tens of micrometers. Its difficulty has motivated the development of robotic assistance platforms that enable precise motion and, more recently, novel sensors such as microscope-integrated optical coherence tomography (OCT), which provides an RGB-D view of the surgical workspace. The combination of these devices opens new possibilities for robotic automation of tasks such as subretinal injection (SI), a procedure that involves precise needle insertion into the retina for targeted drug delivery. Motivated by this opportunity, we develop a framework for autonomous needle navigation during SI. The surgeon specifies waypoint goals in the microscope and OCT views, and the system autonomously navigates the needle to the desired subretinal space in real time. Our system integrates OCT and microscope images with convolutional neural networks (CNNs) that automatically segment the surgical tool and retinal tissue boundaries, and with model predictive control that generates optimal trajectories respecting kinematic constraints to ensure patient safety. We validate our system by demonstrating 30 successful SI trials on pig eyes. Preliminary comparisons to a human operator in robot-assisted mode highlight the enhanced safety and performance of our system.
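The control loop pairs image-based tip localization with constrained trajectory optimization. Below is a minimal sketch of the receding-horizon idea only, not the authors' controller: it assumes a point-mass kinematic model for the needle tip, and the speed limit, horizon, and cost weights are invented for illustration.

```python
# Sketch of a receding-horizon step: optimize a short velocity sequence
# toward the surgeon-selected waypoint, then apply only the first control.
import numpy as np
from scipy.optimize import minimize

DT = 0.05        # control period [s] (assumed)
V_MAX = 0.5      # per-axis tip speed limit [mm/s] (assumed safety bound)
HORIZON = 10     # lookahead steps (assumed)

def rollout(x0, u_flat):
    """Integrate point-mass kinematics x_{k+1} = x_k + u_k * DT."""
    u = u_flat.reshape(HORIZON, 3)
    return np.cumsum(u * DT, axis=0) + x0

def mpc_step(x0, goal):
    """Optimize the control sequence and return its first element."""
    def cost(u_flat):
        xs = rollout(x0, u_flat)
        return np.sum((xs - goal) ** 2) + 1e-2 * np.sum(u_flat ** 2)
    bounds = [(-V_MAX, V_MAX)] * (HORIZON * 3)   # kinematic (speed) constraint
    res = minimize(cost, np.zeros(HORIZON * 3), bounds=bounds)
    return res.x.reshape(HORIZON, 3)[0]

tip = np.array([0.0, 0.0, 0.0])        # tip position from segmentation [mm]
waypoint = np.array([0.4, 0.1, 0.6])   # surgeon-selected goal [mm]
for _ in range(40):                    # closed loop: re-plan every cycle
    tip = tip + mpc_step(tip, waypoint) * DT
print("final tip position [mm]:", np.round(tip, 3))
```

Re-solving from the freshly segmented tip position each cycle is what lets the loop absorb segmentation noise and tissue motion; the bounds encode the safety constraint directly in the optimizer.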
Affiliation(s)
- Ji Woong Kim, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Shuwen Wei, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Peiyao Zhang, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter Gehlbach, Johns Hopkins Wilmer Eye Institute, Baltimore, MD 21287, USA
- Jin U Kang, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Iulian Iordachita, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Marin Kobilarov, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
2. Mach K, Wei S, Kim JW, Martin-Gomez A, Zhang P, Kang JU, Nasseri MA, Gehlbach P, Navab N, Iordachita I. OCT-guided Robotic Subretinal Needle Injections: A Deep Learning-Based Registration Approach. Proc IEEE Int Conf Bioinformatics Biomed 2022;2022:781-786. PMID: 37396671; PMCID: PMC10312384; DOI: 10.1109/bibm55620.2022.9995143.
Abstract
Subretinal injection (SI) is an ophthalmic surgical procedure that allows the direct injection of therapeutic substances into the subretinal space to treat vitreoretinal disorders. Although this treatment has grown in popularity, various factors contribute to its difficulty, including the retina's fragile, nonregenerative tissue, hand tremor, and poor visual depth perception. In this context, robotic devices may reduce hand tremor and facilitate gradual, controlled SI. For the robot to move to the target area successfully, it needs to understand the spatial relationship between the attached needle and the tissue. The development of optical coherence tomography (OCT) imaging has substantially advanced the visualization of retinal structures at micron resolution. This paper introduces a novel foundation for an OCT-guided robotic steering framework that enables a surgeon to plan and select targets within the OCT volume while the robot automatically executes the trajectories necessary to reach them. Our contribution is a novel combination of existing methods that creates an intraoperative OCT-robot registration pipeline: straightforward affine transformation computations are combined with robot kinematics and a deep-neural-network-determined tool-tip location in OCT. We evaluate the framework in an open-sky procedure on a cadaveric pig eye and on an aluminum target board. Targeting the subretinal space of the pig eye produced encouraging results, with a mean Euclidean error of 23.8 μm.
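The registration step reduces to estimating an affine map between paired points known in both the OCT frame and the robot frame (e.g., network-localized tool-tip positions versus kinematic readings). A minimal sketch of a least-squares affine fit under that assumption; the synthetic correspondences below are illustrative, not the paper's calibration data.

```python
# Sketch: fit dst ≈ src @ A.T + t from N >= 4 paired 3-D points,
# then map a surgeon-selected OCT target into the robot frame.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform between paired point sets."""
    n = src.shape[0]
    homog = np.hstack([src, np.ones((n, 1))])    # rows are [x y z 1]
    # Solve homog @ M = dst for the 4x3 parameter matrix M = [A.T; t].
    M, *_ = np.linalg.lstsq(homog, dst, rcond=None)
    return M[:3].T, M[3]                         # A (3x3), t (3,)

rng = np.random.default_rng(0)
oct_pts = rng.uniform(0, 2.0, (10, 3))           # tool tips in OCT frame [mm]
A_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
robot_pts = oct_pts @ A_true.T + np.array([5.0, -2.0, 1.0])  # kinematic frame

A, t = fit_affine(oct_pts, robot_pts)
target_oct = np.array([1.0, 0.5, 0.8])           # target picked in the volume
print("robot-frame target [mm]:", np.round(A @ target_oct + t, 3))
```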
Affiliation(s)
- Kristina Mach, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Shuwen Wei, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Ji Woong Kim, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Alejandro Martin-Gomez, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Peiyao Zhang, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- Jin U Kang, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
- M Ali Nasseri, Augenklinik und Poliklinik, Klinikum rechts der Isar, Technical University of Munich, Germany
- Peter Gehlbach, Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, USA
- Nassir Navab, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA; Chair for Computer Aided Medical Procedures, Technical University of Munich, Germany
- Iulian Iordachita, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
3. Zuo R, Irsch K, Kang JU. Higher-order regression three-dimensional motion-compensation method for real-time optical coherence tomography volumetric imaging of the cornea. J Biomed Opt 2022;27:066006. PMID: 35751143; PMCID: PMC9232272; DOI: 10.1117/1.jbo.27.6.066006.
Abstract
SIGNIFICANCE: Optical coherence tomography (OCT) allows high-resolution volumetric three-dimensional (3D) imaging of biological tissues in vivo. However, 3D image acquisition can be time-consuming and often suffers from motion artifacts due to involuntary and physiological movements of the tissue, limiting the reproducibility of quantitative measurements.
AIM: To achieve real-time 3D motion compensation for corneal tissue with high accuracy.
APPROACH: We propose an OCT system for volumetric imaging of the cornea, capable of compensating both axial and lateral motion with micron-scale accuracy and millisecond-scale processing time based on higher-order regression. Specifically, the system first scans three reference B-mode images along the C-axis before acquiring a standard C-mode image. The difference between the reference and volumetric images is compared using a surface-detection algorithm and higher-order polynomials to deduce 3D motion and remove motion-related artifacts.
RESULTS: System parameters were optimized, and performance was evaluated using both phantom and corneal (ex vivo) samples. An overall motion-artifact error of <4.61 μm and a processing time of about 3.40 ms per B-scan were achieved.
CONCLUSIONS: Higher-order regression achieved effective, real-time compensation of 3D motion artifacts during corneal imaging. The approach can be expanded to 3D imaging of other ocular tissues. Implementing such motion-compensation strategies has the potential to improve the reliability of the objective, quantitative information extracted from volumetric OCT measurements.
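The core regression step is fitting a smooth polynomial to the detected tissue-surface position across B-scans and subtracting the fitted drift. A minimal sketch of that step, with synthetic surface depths and an assumed cubic motion model standing in for the paper's surface-detection algorithm:

```python
# Sketch: regress surface depth against B-scan index with a cubic polynomial,
# then compute the per-B-scan axial shift that cancels the fitted motion.
import numpy as np

rng = np.random.default_rng(1)
n_bscans = 200
t = np.arange(n_bscans)
# Detected surface depth per B-scan [px]: smooth bulk motion + detection noise.
surface = 120 - 0.05 * t + 0.002 * t**2 + rng.normal(0, 1.5, n_bscans)

coeffs = np.polyfit(t, surface, deg=3)            # higher-order regression
motion = np.polyval(coeffs, t)                    # estimated axial trajectory
shift = np.round(motion - motion[0]).astype(int)  # per-B-scan correction [px]
print("max estimated axial drift [px]:", shift.max())
# Each B-scan would then be shifted along depth, e.g.
# np.roll(bscan_k, -shift[k], axis=0), to remove the bulk-motion artifact.
```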
Affiliation(s)
- Ruizhi Zuo, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, United States
- Kristina Irsch, Vision Institute, CNRS, Paris, France; School of Medicine, Johns Hopkins University, Baltimore, Maryland, United States
- Jin U. Kang, Whiting School of Engineering and School of Medicine, Johns Hopkins University, Baltimore, Maryland, United States
4. Wei S, Kang JU. Stabilizing the phase of swept-source optical coherence tomography by a wrapped Gaussian mixture model. Opt Lett 2021;46:2932-2935. PMID: 34129577; PMCID: PMC9808914; DOI: 10.1364/ol.420898.
Abstract
The phase of an optical coherence tomography (OCT) signal carries critical information about particle micro-displacements. However, swept-source OCT (SSOCT) suffers from phase instability due to trigger jitter from the swept source. In this Letter, a wrapped Gaussian mixture model (WGMM) is proposed to stabilize the phase of SSOCT systems. A closed-form iteration solution of the WGMM is derived using the expectation-maximization algorithm, and the approximations necessary for real-time graphics processing unit (GPU) implementation are made. The performance of the proposed method is demonstrated through ex vivo, in vivo, and flow phantom experiments. The results show the robustness of the method in different application scenarios.
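The underlying problem is estimating a spurious per-A-line phase offset in the presence of 2π wrapping. The sketch below substitutes a plain circular mean for the paper's wrapped-Gaussian-mixture EM estimator, so it illustrates only the simplest version of the idea; all signal parameters are invented.

```python
# Sketch: per-A-line phase offsets (trigger jitter) estimated with a circular
# mean of phase differences; the WGMM-EM fit would replace this estimator.
import numpy as np

rng = np.random.default_rng(2)
n_alines, n_depth = 100, 512
static_phase = rng.uniform(-np.pi, np.pi, n_depth)   # stable speckle phase
jitter = rng.normal(0, 0.8, n_alines)                # per-A-line offset [rad]
noise = rng.normal(0, 0.1, (n_alines, n_depth))      # small phase noise
signal = np.exp(1j * (static_phase[None, :] + jitter[:, None] + noise))

dphi = np.angle(signal * np.conj(signal[0]))         # phase vs. first A-line
# Circular mean over depth handles 2*pi wrapping that a plain mean would not.
offset = np.angle(np.mean(np.exp(1j * dphi), axis=1))
stabilized = signal * np.exp(-1j * offset[:, None])

ref = np.exp(1j * static_phase)[None, :]
print("phase std before [rad]:", np.angle(signal * np.conj(ref)).std().round(3))
print("phase std after  [rad]:", np.angle(stabilized * np.conj(ref)).std().round(3))
```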
5. Sommersperger M, Weiss J, Ali Nasseri M, Gehlbach P, Iordachita I, Navab N. Real-time tool to layer distance estimation for robotic subretinal injection using intraoperative 4D OCT. Biomed Opt Express 2021;12:1085-1104. PMID: 33680560; PMCID: PMC7901333; DOI: 10.1364/boe.415477.
Abstract
The emergence of robotics could enable ophthalmic microsurgical procedures that were previously not feasible due to the precision limits of manual delivery, for example, targeted subretinal injection. Determining the distance between the needle tip, the internal limiting membrane (ILM), and the retinal pigment epithelium (RPE) both precisely and reproducibly is required for safe and successful robotic retinal interventions. Recent advances in intraoperative optical coherence tomography (iOCT) have opened the path for 4D image-guided surgery by providing near video-rate imaging with micron-level resolution to visualize retinal structures, surgical instruments, and tool-tissue interactions. In this work, we present a novel pipeline to precisely estimate the distance between the injection needle and the surface boundaries of two retinal layers, the ILM and the RPE, from iOCT volumes. To achieve high computational efficiency, we reduce the analysis to the relevant area around the needle tip. We employ a convolutional neural network (CNN) to segment the tool surface as well as the retinal layer boundaries from selected iOCT B-scans within this tip area. This results in the generation and processing of 3D surface point clouds for the tool, ILM, and RPE from the B-scan segmentation maps, which in turn allows the estimation of the minimum distance between the resulting tool and layer point clouds. The proposed method is evaluated on iOCT volumes from ex vivo porcine eyes and achieves average errors of 9.24 µm and 8.61 µm when measuring the distance from the needle tip to the ILM and the RPE, respectively. The results demonstrate that this approach is robust to the high levels of noise present in iOCT B-scans and is suitable for the interventional use case, providing distance feedback at an average update rate of 15.66 Hz.
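Once the tool and layer point clouds are extracted, the distance feedback reduces to a nearest-neighbor query between two 3D point sets. A minimal sketch of that step using a k-d tree on synthetic stand-in clouds; the cloud generation, sizes, and units are assumptions, not the paper's segmentation output.

```python
# Sketch: minimum tool-to-layer distance via nearest-neighbor search.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
ilm_cloud = rng.uniform(0.0, 1.0, (5000, 3))            # ILM boundary points [mm]
tool_cloud = rng.uniform(0.0, 1.0, (200, 3)) + [0, 0, 0.2]  # tool surface points

tree = cKDTree(ilm_cloud)          # index the (larger) layer cloud once
dists, _ = tree.query(tool_cloud)  # nearest layer point for each tool point
print(f"needle-to-ILM distance: {dists.min() * 1000:.1f} um")
```

Indexing the large layer cloud and querying only the small tool-tip cloud keeps the per-volume cost low, which is what makes real-time update rates plausible.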
Affiliation(s)
- Michael Sommersperger, Johns Hopkins University, Baltimore, MD 21218, USA; Technical University of Munich, Germany
- M. Ali Nasseri, Technical University of Munich, Germany; Klinikum Rechts der Isar, Augenklinik, Munich, Germany
- Nassir Navab, Johns Hopkins University, Baltimore, MD 21218, USA; Technical University of Munich, Germany
6. Wei S, Kang JU. Optical flow optical coherence tomography for determining accurate velocity fields. Opt Express 2020;28:25502-25527. PMID: 32907070; DOI: 10.1364/oe.396708.
Abstract
Determining micron-scale fluid flow velocities using optical coherence tomography (OCT) is important in both biomedical research and clinical diagnosis. Numerous methods have been explored to quantify flow information; they can be divided into phase-based and amplitude-based methods. However, phase-based methods, such as Doppler methods, are less sensitive to transverse velocity components and suffer from phase wrapping and phase instability for axial velocity components. On the other hand, amplitude-based methods, such as speckle variance OCT, correlation mapping OCT, and split-spectrum amplitude-decorrelation angiography, focus more on segmenting flow areas than on quantifying flow velocities. In this paper, we propose optical flow OCT (OFOCT) to quantify accurate velocity fields. The equivalence between optical flow and real velocity fields is validated in OCT imaging. The sensitivity fall-off of a Fourier-domain OCT (FDOCT) system is considered in the modified optical-flow continuity constraint. Spatial-temporal smoothness constraints are used to make the optical flow problem well-posed and to reduce noise in the velocity fields. An iterative solution to the optical flow problem is implemented on a graphics processing unit (GPU) for real-time processing. The accuracy of the velocity fields is verified through phantom flow experiments using a diluted milk-powder solution as a scattering medium. The velocity fields are then used to detect flow turbulence and reconstruct flow trajectories. The results show that OFOCT accurately determines velocity fields and is applicable to research concerning fluid dynamics.
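The formulation is in the Horn-Schunck family: a brightness-constancy (continuity) data term plus a quadratic smoothness regularizer, solved iteratively. A minimal 2D sketch of such an iteration on synthetic frames, without the paper's OCT-specific sensitivity fall-off term; the smoothness weight and iteration count are arbitrary choices.

```python
# Sketch of a Horn-Schunck-type iteration: data term + smoothness term,
# solved by Jacobi-style updates of the flow fields (u, v).
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def horn_schunck(f1, f2, alpha=0.5, n_iter=200):
    """Return (u, v) minimizing brightness-constancy + alpha^2 * smoothness."""
    kx, ky = np.array([[-0.5, 0.5]]), np.array([[-0.5], [0.5]])
    fx = convolve(f1, kx) + convolve(f2, kx)   # spatial gradients averaged
    fy = convolve(f1, ky) + convolve(f2, ky)   # over the two frames
    ft = f2 - f1                               # temporal derivative
    avg = np.array([[0, 0.25, 0], [0.25, 0, 0.25], [0, 0.25, 0]])
    u, v = np.zeros_like(f1), np.zeros_like(f1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        frac = (fx * u_bar + fy * v_bar + ft) / (alpha**2 + fx**2 + fy**2)
        u, v = u_bar - fx * frac, v_bar - fy * frac
    return u, v

rng = np.random.default_rng(4)
frame1 = gaussian_filter(rng.random((64, 64)), 2.0)   # smooth speckle-like field
frame1 = (frame1 - frame1.mean()) / frame1.std()      # unit-variance texture
frame2 = np.roll(frame1, 1, axis=1)                   # uniform 1-px shift
u, v = horn_schunck(frame1, frame2)
# |u| should approach the imposed 1-px shift; |v| should stay near zero.
print("mean |u|:", np.abs(u).mean().round(2), "mean |v|:", np.abs(v).mean().round(2))
```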
7. Guo S, Sarfaraz NR, Gensheimer WG, Krieger A, Kang JU. Demonstration of Optical Coherence Tomography Guided Big Bubble Technique for Deep Anterior Lamellar Keratoplasty (DALK). Sensors (Basel) 2020;20:428. PMID: 31940877; PMCID: PMC7013995; DOI: 10.3390/s20020428.
Abstract
Deep anterior lamellar keratoplasty (DALK) is a highly challenging cornea transplant procedure that involves removing the corneal layers above Descemet’s membrane (DM). This is achieved by a “big bubble” technique, in which a needle is inserted into the corneal stroma down to DM, followed by the injection of either air or liquid. DALK has important advantages over penetrating keratoplasty (PK), including a lower rejection rate, less endothelial cell loss, and increased graft survival. In this paper, we designed and evaluated an optical coherence tomography (OCT) distal-sensor-integrated needle for a precise big bubble technique. We used this sensor for micro-control of a robotic DALK device, termed AUTO-DALK, for autonomous big bubble needle insertion. The OCT distal sensor was integrated inside a 25-gauge needle, which was used for pneumo-dissection. The AUTO-DALK device is built on a manual trephine platform that includes a vacuum ring to fix the device on the eye and adds a needle driver angled 60 degrees from vertical. In tests on five porcine eyes with a target depth of 90%, the measured insertion depth as a percentage of corneal thickness was 90.05 ± 2.33% for the AUTO-DALK device without any perforation, compared to 79.16 ± 5.68% for unassisted free-hand insertion and 86.20 ± 5.31% for assisted free-hand insertion. The results showed higher precision and consistency of needle placement with AUTO-DALK, which could lead to better visual outcomes and fewer complications.
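The depth feedback rests on simple arithmetic: the needle-tip fiber sensor's A-scan gives the remaining distance to DM, and insertion depth is one minus that remainder divided by total corneal thickness. A minimal sketch of that computation on a synthetic A-scan; the pixel size, corneal thickness, peak model, and detection threshold are all assumptions.

```python
# Sketch: insertion depth percentage from a needle-tip A-scan, where the
# dominant reflection marks the remaining stroma down to Descemet's membrane.
import numpy as np
from scipy.signal import find_peaks

PX_UM = 2.0          # axial pixel size [um/px] (assumed)
CORNEA_UM = 900.0    # total corneal thickness [um] (assumed, from preop scan)

rng = np.random.default_rng(5)
ascan = rng.random(512) * 0.1   # noise floor of the common-path A-scan
ascan[45] = 1.0                 # DM reflection at pixel 45 -> 90 um remaining

peaks, _ = find_peaks(ascan, height=0.5)       # locate the DM reflection
remaining_um = peaks[0] * PX_UM                # tip-to-DM distance
depth_pct = 100.0 * (1.0 - remaining_um / CORNEA_UM)
print(f"insertion depth: {depth_pct:.1f}% of corneal thickness")
```

With these numbers the remaining stroma is 90 µm of a 900 µm cornea, i.e., a 90% insertion depth, matching the target depth used in the reported trials.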
Affiliation(s)
- Shoujing Guo, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA (corresponding author; Tel.: +1-443-858-6100)
- Nicolas R. Sarfaraz, Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
- William G. Gensheimer, Warfighter Eye Center, Malcolm Grow Medical Clinics and Surgery Center, Joint Base Andrews, MD 20762, USA
- Axel Krieger, Department of Mechanical Engineering, University of Maryland, College Park, MD 20742, USA
- Jin U. Kang, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA