1. Posselli NR, Hwang ES, Olson ZJ, Nagiel A, Bernstein PS, Abbott JJ. Head-mounted surgical robots are an enabling technology for subretinal injections. Sci Robot 2025;10:eadp7700. PMID: 39970246; PMCID: PMC12061009; DOI: 10.1126/scirobotics.adp7700. Received: 04/10/2024; Accepted: 01/22/2025.
Abstract
Therapeutic protocols involving subretinal injection, which hold the promise of saving or restoring sight, are challenging for surgeons because they are at the limits of human motor and perceptual abilities. Excessive or insufficient indentation of the injection cannula into the retina or motion of the cannula with respect to the retina can result in retinal trauma or incorrect placement of the therapeutic product. Robotic assistance can potentially enable the surgeon to more precisely position the injection cannula and maintain its position for a prolonged period of time. However, head motion is common among patients undergoing eye surgery, complicating subretinal injections, yet it is often not considered in the evaluation of robotic assistance. No prior study has both included head motion during an evaluation of robotic assistance and demonstrated a significant improvement in the ability to perform subretinal injections compared with the manual approach. In a hybrid ex vivo and in situ study in which an enucleated eye was mounted on a human volunteer, we demonstrate that head-mounting a high-precision teleoperated surgical robot to passively reduce undesirable relative motion between the robot and the eye results in a bleb-formation success rate on moving eyes that is significantly higher than the manual success rates reported in the literature even on stationary enucleated eyes.
Affiliation(s)
- Nicholas R. Posselli
- Robotics Center and Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
- Department of Biomechanical Engineering, University of Twente, 7522 NB Enschede, The Netherlands
- Eileen S. Hwang
- Moran Eye Center, Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT 84132, USA
- Zachary J. Olson
- Robotics Center and Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
- Aaron Nagiel
- Roski Eye Institute, Department of Ophthalmology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- The Vision Center, Department of Surgery, Children’s Hospital Los Angeles, Los Angeles, CA 90027, USA
- Paul S. Bernstein
- Moran Eye Center, Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT 84132, USA
- Jake J. Abbott
- Robotics Center and Department of Mechanical Engineering, University of Utah, Salt Lake City, UT 84112, USA
2. Zuo R, Wei S, Wang Y, Irsch K, Kang JU. High-resolution in vivo 4D-OCT fish-eye imaging using 3D-UNet with multi-level residue decoder. Biomed Opt Express 2024;15:5533-5546. PMID: 39296392; PMCID: PMC11407266; DOI: 10.1364/boe.532258. Received: 06/06/2024; Revised: 07/18/2024; Accepted: 08/09/2024.
Abstract
Optical coherence tomography (OCT) allows high-resolution volumetric imaging of biological tissues in vivo. However, 3D-image acquisition often suffers from motion artifacts due to slow frame rates and the involuntary and physiological movements of living tissue. To solve these issues, we implement a real-time 4D-OCT system capable of reconstructing near-distortion-free volumetric images based on a deep learning reconstruction algorithm. The system initially collects undersampled volumetric images at high speed and then upsamples the images in real time with a convolutional neural network (CNN) that generates high-frequency features. We compare and analyze both dual-2D- and 3D-UNet-based networks for OCT 3D high-resolution image reconstruction. We refine the network architecture by incorporating multi-level information to accelerate convergence and improve accuracy, and we optimize the network by using 16-bit floating-point precision for network parameters to conserve GPU memory and enhance efficiency. The results show that the refined and optimized 3D network retrieves tissue structure more precisely and enables real-time 4D-OCT imaging at a rate greater than 10 Hz with a root mean square error (RMSE) of ∼0.03.
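As a rough illustration of the pipeline described above (our own sketch, not the authors' code; all function names are ours): the naive baseline such a network improves on is plain interpolation of the undersampled volume along the slow axis, and reconstructions are scored with the RMSE metric quoted in the abstract.

```python
import numpy as np

def upsample_slow_axis(vol, factor):
    """Linearly interpolate missing B-scans along the slow (first) axis.

    This is the naive baseline that a learned 3D network refines by
    restoring the high-frequency features lost to undersampling.
    """
    n, h, w = vol.shape
    old = np.arange(n)
    new = np.linspace(0, n - 1, (n - 1) * factor + 1)
    flat = vol.reshape(n, -1)
    # Interpolate every (h, w) pixel trace independently along axis 0.
    up = np.stack([np.interp(new, old, col) for col in flat.T], axis=1)
    return up.reshape(new.size, h, w)

def rmse(a, b):
    """RMSE on volumes normalized to [0, 1]; ~0.03 means ~3% of range."""
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

For example, a volume of 16 B-scans acquired at one-quarter density would be expanded by `upsample_slow_axis(vol, 4)` into the 61-slice estimate that a CNN would then sharpen.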
Affiliation(s)
- Ruizhi Zuo
- Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Shuwen Wei
- Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Yaning Wang
- Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Kristina Irsch
- CNRS, Vision Institute, Paris, France
- School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Jin U Kang
- Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- School of Medicine, Johns Hopkins University, Baltimore, MD, USA
3. Opfermann JD, Wang Y, Kaluna J, Suzuki K, Gensheimer W, Krieger A, Kang JU. Design and Evaluation of an Eye Mountable AutoDALK Robot for Deep Anterior Lamellar Keratoplasty. Micromachines 2024;15:788. PMID: 38930758; PMCID: PMC11205909; DOI: 10.3390/mi15060788. Received: 04/25/2024; Revised: 06/10/2024; Accepted: 06/13/2024.
Abstract
Partial-thickness corneal transplants using a deep anterior lamellar keratoplasty (DALK) approach have demonstrated better patient outcomes than full-thickness cornea transplants. However, despite better clinical outcomes from the DALK procedure, adoption of the technique has been limited because the accurate insertion of the needle into the deep stroma remains technically challenging. In this work, we present a novel hands-free, eye-mountable robot for automatic needle placement in the cornea, AutoDALK, which has the potential to simplify this critical step in the DALK procedure. The system integrates dual lightweight linear piezo motors, an OCT A-scan distance sensor, and a vacuum trephine-inspired design to enable the safe, consistent, and controllable insertion of a needle into the cornea for pneumodissection of the anterior cornea from the deep posterior cornea and Descemet's membrane. AutoDALK was designed with feedback from expert corneal surgeons, and performance was evaluated by finite element analysis simulation, benchtop testing, and ex vivo experiments to demonstrate the feasibility of the system for clinical applications. The mean open-loop positional deviation was 9.39 µm, while the system repeatability and accuracy were 39.48 µm and 43.18 µm, respectively. The maximum combined thrust of the system was found to be 1.72 N, which exceeds the clinical penetration force of the cornea. In a head-to-head ex vivo comparison against an expert surgeon using a freehand approach, AutoDALK achieved more consistent needle depth, which resulted in fewer perforations of Descemet's membrane and significantly deeper pneumodissection of the stromal tissue. The results of this study indicate that robotic needle insertion has the potential to simplify the most challenging task of the DALK procedure, enable more consistent surgical outcomes for patients, and standardize partial-thickness corneal transplants as the gold standard of care if demonstrated to be safer and more effective than penetrating keratoplasty.
Affiliation(s)
- Justin D. Opfermann
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Yaning Wang
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- James Kaluna
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Kensei Suzuki
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- William Gensheimer
- Ophthalmology Section, White River Junction Veterans Affairs Medical Center, White River Junction, VT 05009, USA
- Ophthalmology Section, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03766, USA
- Axel Krieger
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Jin U. Kang
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
4. Wang Y, Wei S, Zuo R, Kam M, Opfermann JD, Sunmola I, Hsieh MH, Krieger A, Kang JU. Automatic and real-time tissue sensing for autonomous intestinal anastomosis using hybrid MLP-DC-CNN classifier-based optical coherence tomography. Biomed Opt Express 2024;15:2543-2560. PMID: 38633079; PMCID: PMC11019703; DOI: 10.1364/boe.521652. Received: 02/15/2024; Revised: 03/18/2024; Accepted: 03/18/2024.
Abstract
Anastomosis is a common and critical part of reconstructive procedures within gastrointestinal, urologic, and gynecologic surgery. The use of autonomous surgical robots such as the smart tissue autonomous robot (STAR) system has demonstrated improved efficiency and consistency of laparoscopic small bowel anastomosis over the current da Vinci surgical system. However, the STAR workflow requires auxiliary manual monitoring during the suturing procedure to avoid missed or wrong stitches. To eliminate this monitoring task from the operators, we integrated an optical coherence tomography (OCT) fiber sensor with the suture tool and developed an automatic tissue classification algorithm for detecting missed or wrong stitches in real time. The classification results were updated and sent to the control loop of the STAR robot in real time. The suture tool was guided to the target by a dual-camera system; if the tissue inside the tool jaw was inconsistent with the desired suture pattern, a warning message was generated. The proposed hybrid multilayer perceptron dual-channel convolutional neural network (MLP-DC-CNN) classification platform can automatically classify eight abdominal tissue types that require different suture strategies for anastomosis. The MLP utilizes numerous handcrafted features (∼1955), including optical properties and morphological features of one-dimensional (1D) OCT A-line signals; the DC-CNN fully exploits intensity-based features and depth-resolved tissue attenuation coefficients. A decision fusion technique was applied to leverage the information collected from both classifiers and further increase accuracy. The algorithm was evaluated on 69,773 testing A-line data points. The results showed that our model can classify 1D OCT signals of small bowels in real time with an accuracy of 90.06%, a precision of 88.34%, and a sensitivity of 87.29%. The refresh rate of the displayed A-line signals was set at 300 Hz, the maximum sensing depth of the fiber was 3.6 mm, and the running time of the image processing algorithm was ∼1.56 s for 1,024 A-lines. The proposed fully automated tissue sensing model outperformed single CNN, MLP, or SVM classifiers with optimized architectures, showing the complementarity of different feature sets and network architectures in classifying intestinal OCT A-line signals. It can potentially reduce the manual involvement in robotic laparoscopic surgery, which is a crucial step toward a fully autonomous STAR system.
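One of the handcrafted feature families mentioned above, depth-resolved attenuation coefficients, is commonly estimated directly from A-line intensities. A minimal sketch of one standard estimator (our illustration under the usual assumption that essentially all light is attenuated within the imaging range; this is not the paper's implementation):

```python
import numpy as np

def depth_resolved_attenuation(a_line, dz):
    """Per-pixel attenuation estimate from one OCT A-line.

    Implements mu[i] = I[i] / (2 * dz * sum(I[i+1:])), the depth-resolved
    relation valid when nearly all light is attenuated within the scan
    range. `dz` is the axial pixel size (e.g. in mm), so mu is in 1/mm.
    """
    tail = np.cumsum(a_line[::-1])[::-1] - a_line  # suffix sums over j > i
    with np.errstate(divide="ignore", invalid="ignore"):
        mu = a_line / (2.0 * dz * tail)
    mu[~np.isfinite(mu)] = 0.0  # the deepest pixels have an empty tail
    return mu
```

On a homogeneous medium with true coefficient mu, the estimate is approximately constant at mu over depth, which is what makes it a useful per-pixel tissue feature.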
Affiliation(s)
- Yaning Wang
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Shuwen Wei
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Ruizhi Zuo
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael Kam
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Justin D. Opfermann
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Idris Sunmola
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael H. Hsieh
- Division of Urology, Children’s National Hospital, 111 Michigan Ave NW, Washington, D.C. 20010, USA
- Axel Krieger
- Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Jin U. Kang
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
5. Posselli NR, Bernstein PS, Abbott JJ. Eye-mounting goggles to bridge the gap between benchtop experiments and in vivo robotic eye surgery. Sci Rep 2023;13:15503. PMID: 37726336; PMCID: PMC10509142; DOI: 10.1038/s41598-023-42561-9. Received: 05/03/2023; Accepted: 09/12/2023.
Abstract
A variety of robot-assisted surgical systems have been proposed to improve the precision of eye surgery. Evaluation of these systems has typically relied on benchtop experiments with artificial or enucleated eyes. However, this does not properly account for the types of head motion that are common among patients undergoing eye surgery, which a clinical robotic system will encounter. In vivo experiments are clinically realistic, but they are risky and thus require the robotic system to be at a sufficiently mature state of development. In this paper, we describe a low-cost device that enables an artificial or enucleated eye to be mounted to standard swim goggles worn by a human volunteer, allowing more realistic evaluation of eye-surgery robots after benchtop studies and prior to in vivo studies. The mounted eye can rotate about its center, with a rotational stiffness matching that of an anesthetized patient's eye. We describe surgeon feedback and technical analyses verifying that various aspects of the design are sufficient for simulating a patient's eye during surgery.
Affiliation(s)
- Nicholas R Posselli
- Robotics Center and Department of Mechanical Engineering, University of Utah, Salt Lake City, UT, USA
- Paul S Bernstein
- Department of Ophthalmology and Visual Sciences, Moran Eye Center, University of Utah, Salt Lake City, UT, USA
- Jake J Abbott
- Robotics Center and Department of Mechanical Engineering, University of Utah, Salt Lake City, UT, USA
6. Iovieno A, Fontana L, Coassin M, Bovio D, Salito C. Ex Vivo Evaluation of a Pressure-Sensitive Device to Aid Big Bubble Intrastromal Dissection in Deep Anterior Lamellar Keratoplasty. Transl Vis Sci Technol 2022;11:17. PMID: 36580320; PMCID: PMC9804022; DOI: 10.1167/tvst.11.12.17.
Abstract
Purpose: To develop and perform ex vivo testing of a device designed for semiquantitative determination of intracorneal dissection depth during big bubble (BB) deep anterior lamellar keratoplasty.
Methods: A prototype device connected to a syringe and cannula was designed to determine the depth of intrastromal placement based on air rebound pressure emitted by a software-controlled generator. Ex vivo testing of the device was conducted on human corneas mounted on an artificial anterior chamber in three experiments: (1) cannula purposely introduced at different depths measured with anterior segment optical coherence tomography, (2) cannula introduced as per the BB technique, and (3) simulation of the BB technique guided by the device.
Results: A positive pressure differential and successful BB were observed only when the cannula was positioned within 150 microns of the endothelial plane. In all successful BB cases (21/40), a repeatable increase in tissue rebound pressure was detected, which was not recorded in unsuccessful cases. The device was able to signal to the surgeon correct placement of the cannula (successful BB) in 16 of 17 cases and incorrect placement (unsuccessful BB) in 8 of 8 cases (94.1% sensitivity, 100% specificity).
Conclusions: In our ex vivo model, this novel medical device could reliably signal cannula positioning in the deep stroma for effective pneumatic dissection and possibly aid the technical execution of BB deep anterior lamellar keratoplasty.
Translational Relevance: A medical device that standardizes big bubble deep anterior lamellar keratoplasty could increase the overall success rate of the surgical procedure and aid popularization of deep anterior lamellar keratoplasty.
Affiliation(s)
- Alfonso Iovieno
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- IRCCS Azienda Ospedaliero-Universitaria di Bologna, Italy
- Luigi Fontana
- IRCCS Azienda Ospedaliero-Universitaria di Bologna, Italy
- Marco Coassin
- IRCCS Azienda Ospedaliero-Universitaria di Bologna, Italy
- Department of Ophthalmology, University Campus Bio-medico, Rome, Italy
- Dario Bovio
- Biocubica Biomedical Engineering, Milan, Italy
7. Tian Y, Draelos M, McNabb RP, Hauser K, Kuo AN, Izatt JA. Optical coherence tomography refraction and optical path length correction for image-guided corneal surgery. Biomed Opt Express 2022;13:5035-5049. PMID: 36187253; PMCID: PMC9484446; DOI: 10.1364/boe.464762. Received: 05/25/2022; Revised: 08/05/2022; Accepted: 08/21/2022.
Abstract
Optical coherence tomography (OCT) may be useful for guidance of ocular microsurgeries such as deep anterior lamellar keratoplasty (DALK), a form of corneal transplantation that requires delicate insertion of a needle into the stroma to approximately 90% of the corneal thickness. However, visualization of the true shape of the cornea and the surgical tool during surgery is impaired in raw OCT volumes due both to light refraction at the corneal boundaries and to geometrical optical path length distortion arising from the group velocity of broadband OCT light in tissue. Therefore, uncorrected B-scans or volumes may not provide an accurate visualization suitable for reliable surgical guidance. In this article, we introduce a method to correct both refraction and optical path length distortion in 3D in order to reconstruct corrected OCT B-scans of both natural corneas and corneas deformed by needle insertion. We delineate the separate roles of the phase and group indices in OCT image distortion correction and introduce a method to estimate the phase index from the group index, which is readily measured in samples. Using the measured group index and estimated phase index of human corneas at 1060 nm, we demonstrate quantitatively accurate geometric reconstructions of the true cornea and inserted needle shape during simulated DALK surgeries.
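The two corrections can be illustrated in a toy 2D setting (a sketch with round illustrative index values and our own function name, not the paper's full 3D algorithm): the measured optical path length below the surface is rescaled by the group index, while the ray direction is bent at the surface with Snell's law using the phase index.

```python
import numpy as np

def correct_sample_2d(x, surface_z, opl, normal, n_phase=1.376, n_group=1.387):
    """Locate one OCT sample below the corneal surface in a toy 2D model.

    The OCT beam travels straight down (+z) in air and hits the surface at
    (x, surface_z); `opl` is the optical path length measured below the
    surface. OCT path lengths are divided by the group index, while the
    refracted direction follows Snell's law with the phase index. The
    index values are round illustrative numbers, not measured ones.
    """
    i = np.array([0.0, 1.0])              # incident direction (downward)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)             # surface normal, pointing up
    cos_i = -i @ n
    cos_t = np.sqrt(1.0 - (1.0 - cos_i**2) / n_phase**2)
    t = i / n_phase + (cos_i / n_phase - cos_t) * n  # vector Snell's law
    t = t / np.linalg.norm(t)
    return np.array([x, surface_z]) + (opl / n_group) * t
```

At normal incidence the ray is not bent and only the group-index rescaling remains; on a tilted surface the geometric distance traveled in tissue is still `opl / n_group`, only along the refracted direction.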
Affiliation(s)
- Yuan Tian
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Mark Draelos
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Ryan P. McNabb
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Kris Hauser
- Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Anthony N. Kuo
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
- Joseph A. Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
8. Guo S, Kang JU. Convolutional neural network-based common-path optical coherence tomography A-scan boundary-tracking training and validation using a parallel Monte Carlo synthetic dataset. Opt Express 2022;30:25876-25890. PMID: 36237108; PMCID: PMC9363032; DOI: 10.1364/oe.462980. Received: 05/03/2022; Revised: 06/16/2022; Accepted: 06/19/2022.
Abstract
We present a parallel Monte Carlo (MC) simulation platform for rapidly generating a synthetic common-path optical coherence tomography (CP-OCT) A-scan image dataset for image-guided needle insertion. The computation time of the method was evaluated on different configurations, and 100,000 A-scan images were generated based on 50 different eye models. The synthetic dataset is used to train an end-to-end convolutional neural network (Ascan-Net) to localize Descemet's membrane (DM) during needle insertion. The trained Ascan-Net was tested on A-scan images collected from ex vivo human and porcine corneas as well as on simulated data, and it shows improved tracking accuracy compared with a Canny edge detector.
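As a much-reduced illustration of what a synthetic A-scan generator does (our sketch only; real MC OCT simulation, including the platform above, also models angular scattering, coherence gating, and the confocal response, and the layer values below are made up):

```python
import numpy as np

def synthetic_a_scan(n_photons=20000, n_pix=256, dz=0.01, boundary=1.0,
                     mu=(2.0, 0.5), seed=0):
    """Monte Carlo-flavored sketch of one synthetic OCT A-line.

    Photons backscatter from uniformly random depths in a two-layer
    medium; the returned weight follows round-trip Beer-Lambert
    attenuation with coefficient mu[0] (1/mm) above `boundary` (mm) and
    mu[1] below it. Weighted returns are binned into n_pix axial pixels.
    """
    rng = np.random.default_rng(seed)
    depth_max = n_pix * dz
    z = rng.uniform(0.0, depth_max, n_photons)        # backscatter depths
    path = np.where(z < boundary, mu[0] * z,
                    mu[0] * boundary + mu[1] * (z - boundary))
    w = np.exp(-2.0 * path)                           # round-trip attenuation
    a_line, _ = np.histogram(z, bins=n_pix, range=(0.0, depth_max), weights=w)
    return a_line / n_photons
```

Large batches of such A-lines, with randomized layer depths and coefficients, are the kind of labeled data a boundary-tracking network can be trained on without manual annotation.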
Affiliation(s)
- Shoujing Guo
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Jin U. Kang
- Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
9. Zuo R, Irsch K, Kang JU. Higher-order regression three-dimensional motion-compensation method for real-time optical coherence tomography volumetric imaging of the cornea. J Biomed Opt 2022;27:066006. PMID: 35751143; PMCID: PMC9232272; DOI: 10.1117/1.jbo.27.6.066006. Received: 12/03/2021; Accepted: 06/08/2022.
Abstract
Significance: Optical coherence tomography (OCT) allows high-resolution volumetric three-dimensional (3D) imaging of biological tissues in vivo. However, 3D-image acquisition can be time-consuming and often suffers from motion artifacts due to involuntary and physiological movements of the tissue, limiting the reproducibility of quantitative measurements.
Aim: To achieve real-time 3D motion compensation for corneal tissue with high accuracy.
Approach: We propose an OCT system for volumetric imaging of the cornea, capable of compensating both axial and lateral motion with micron-scale accuracy and millisecond-scale processing time based on higher-order regression. Specifically, the system first scans three reference B-mode images along the C-axis before acquiring a standard C-mode image. The difference between the reference and volumetric images is compared using a surface-detection algorithm and higher-order polynomials to deduce 3D motion and remove motion-related artifacts.
Results: System parameters were optimized, and performance was evaluated using both phantom and ex vivo corneal samples. An overall motion-artifact error of <4.61 microns and a processing time of about 3.40 ms for each B-scan were achieved.
Conclusions: Higher-order regression achieved effective and real-time compensation of 3D motion artifacts during corneal imaging. The approach can be expanded to 3D imaging of other ocular tissues. Implementing such motion-compensation strategies has the potential to improve the reliability of objective and quantitative information extracted from volumetric OCT measurements.
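A minimal sketch of the regression idea, simplified by us to a single axis (the published method handles full 3D motion): fit a low-order polynomial to the surface-depth difference between the current and reference scans, yielding a smooth per-A-line axial shift to subtract.

```python
import numpy as np

def axial_motion_profile(x, ref_surface, cur_surface, order=3):
    """Smooth axial-motion estimate along one B-scan.

    `ref_surface` and `cur_surface` are detected surface depths (pixels)
    per A-line at lateral positions `x`. Fitting a polynomial to their
    difference captures slow bulk motion while rejecting per-A-line
    surface-detection noise.
    """
    diff = np.asarray(cur_surface) - np.asarray(ref_surface)
    coeffs = np.polyfit(x, diff, order)
    return np.polyval(coeffs, x)  # axial shift to remove at each A-line
```

Shifting each A-line by the negated profile then realigns the volume; the polynomial order trades off how fast a motion trajectory can be followed against noise sensitivity.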
Affiliation(s)
- Ruizhi Zuo
- Johns Hopkins University, Whiting School of Engineering, Baltimore, Maryland, United States
- Kristina Irsch
- Vision Institute, CNRS, Paris, France
- Johns Hopkins University, School of Medicine, Baltimore, Maryland, United States
- Jin U. Kang
- Johns Hopkins University, Whiting School of Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, School of Medicine, Baltimore, Maryland, United States
10. Edwards W, Tang G, Tian Y, Draelos M, Izatt J, Kuo A, Hauser K. Data-Driven Modelling and Control for Robot Needle Insertion in Deep Anterior Lamellar Keratoplasty. IEEE Robot Autom Lett 2022;7:1526-1533. PMID: 37090091; PMCID: PMC10117280; DOI: 10.1109/lra.2022.3140458.
Abstract
Deep anterior lamellar keratoplasty (DALK) is a technique for cornea transplantation which is associated with reduced patient morbidity. DALK has been explored as a potential application of robotic microsurgery because the small scales, fine control requirements, and difficulty of visualization make it very challenging for human surgeons to perform. We address the problem of modelling the small-scale interactions between the surgical tool and the corneal tissue to improve the accuracy of needle insertion, since accurate placement within 5% of target depth has been associated with more reliable clinical outcomes. We develop a data-driven autoregressive dynamic model of the tool-tissue interaction and a model predictive controller to guide robot needle insertion. In an ex vivo model, our controller significantly improves the accuracy of needle positioning by more than 40% compared to prior methods.
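A toy version of the data-driven modelling step (plain linear least squares with names of our own choosing, not the paper's model class): fit the next needle depth from a few past depths and the current command.

```python
import numpy as np

def fit_ar_model(depths, inputs, lag=2):
    """Least-squares fit of a linear autoregressive tool-tissue model.

    Predicts the next needle depth from `lag` past depths and the
    current command: d[t+1] = a . [d[t-lag+1..t]] + b * u[t] + c.
    Returns the stacked parameter vector theta = [a..., b, c].
    """
    rows, targets = [], []
    for t in range(lag - 1, len(depths) - 1):
        rows.append(np.r_[depths[t - lag + 1: t + 1], inputs[t], 1.0])
        targets.append(depths[t + 1])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets),
                                rcond=None)
    return theta

def predict_next(theta, recent_depths, u):
    """One-step depth prediction from the fitted parameters."""
    return float(np.r_[recent_depths, u, 1.0] @ theta)
```

A model predictive controller would roll this one-step predictor forward over a short horizon and choose the command sequence that minimizes deviation from the target insertion depth.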
Affiliation(s)
- William Edwards
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Gao Tang
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Yuan Tian
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Mark Draelos
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Joseph Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Anthony Kuo
- Department of Ophthalmology, Duke University, Durham, NC 27710, USA
- Kris Hauser
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
11. Guo S, Opfermann J, Gensheimer WG, Krieger A, Kang JU. Downward viewing common-path optical coherence tomography guided hydro-dissection needle for DALK. Proc SPIE 2022;11953:1195308. PMID: 36277992; PMCID: PMC9583597; DOI: 10.1117/12.2607813.
Abstract
Deep anterior lamellar keratoplasty (DALK) is a partial-thickness cornea transplant procedure in which only the recipient's stroma is replaced, leaving the host's Descemet's membrane (DM) and endothelium intact. This highly challenging "Big Bubble" procedure requires micron accuracy to insert a hydro-dissection needle as close as possible to the DM. Here, we report the design and evaluation of a downward-viewing common-path optical coherence tomography (OCT) guided hydro-dissection needle for DALK. This design offers the flexibility of using different insertion angles and needle sizes. With the fiber situated outside the needle and eye, the needle can use its full lumen for a smoother air/fluid injection, and image quality is improved. The common-path OCT probe uses a bare optical fiber, cleaved at a right angle at its tip to serve as both the reference and sample arm, encapsulated in a 25-gauge stainless steel tube. The fiber was set up vertically with a half-ball epoxy lens at its end to provide an A-scan with an 11-degree downward field of view. The hydro-dissection needle was set up at 70 degrees from vertical, and the relative position between the fiber end and the needle tip remained constant during insertion. The fiber and needle were aligned by a customized needle driver so that the needle tip and the tissue beneath it could both be imaged within the same A-scan. Fresh porcine eyes (N = 5) were used for the studies. The needle tip position, the stroma, and the DM were successfully identified from the A-scan throughout the insertion process. The results showed that the downward-viewing OCT distal sensor can accurately guide needle insertion for DALK and improved the average insertion depth compared to freehand insertion.
Affiliation(s)
- Shoujing Guo
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Justin Opfermann
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- William G. Gensheimer
- Department of Ophthalmology, White River Junction VA Medical Center, White River Junction, VT 05009, USA
- Department of Ophthalmology, Dartmouth-Hitchcock, Lebanon, NH 03766, USA
- Axel Krieger
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Jin U. Kang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
12. Guo S, Kang JU. Convolutional Neural Network-based Optical Coherence Tomography (OCT) A-scan Segmentation and Tracking Platform using Advanced Monte Carlo Simulation. Biomedical Optics (Washington, D.C.) 2021;2021:JW1A.16. PMID: 37986718; PMCID: PMC10657779; DOI: 10.1364/boda.2021.jw1a.16.
Abstract
We report a parallel Monte Carlo simulation platform for generating synthetic OCT cornea images and training a convolutional neural network. The trained network showed improved segmentation results when applied to ex vivo cornea A-scan images.
Affiliation(s)
- Shoujing Guo
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Jin U Kang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA