1
Ma G, McCloud M, Tian Y, Narawane A, Shi H, Trout R, McNabb RP, Kuo AN, Draelos M. Robotics and optical coherence tomography: current works and future perspectives [Invited]. Biomedical Optics Express 2025;16:578-602. [PMID: 39958851] [PMCID: PMC11828438] [DOI: 10.1364/boe.547943]
Abstract
Optical coherence tomography (OCT) is an interferometric technique for micron-level imaging in biological and non-biological contexts. As a non-invasive, non-ionizing, and video-rate imaging modality, OCT is widely used in biomedical and clinical applications, especially ophthalmology, where it functions in many roles, including tissue mapping, disease diagnosis, and intrasurgical visualization. In recent years, the rapid growth of medical robotics has led to new applications for OCT, primarily for 3D free-space scanning, volumetric perception, and novel optical designs for specialized medical applications. This review paper surveys these recent developments at the intersection of OCT and robotics and organizes them by degree of integration and application, with a focus on biomedical and clinical topics. We conclude with perspectives on how these recent innovations may lead to further advances in imaging and medical technology.
Affiliation(s)
- Guangshen Ma: Department of Robotics, University of Michigan, Ann Arbor, MI 48105, USA
- Morgan McCloud: Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Yuan Tian: Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Amit Narawane: Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Harvey Shi: Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Robert Trout: Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA
- Ryan P McNabb: Department of Ophthalmology, Duke University Medical Center, Durham, NC 27705, USA
- Anthony N Kuo: Department of Biomedical Engineering, Duke University, Durham, NC 27705, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC 27705, USA
- Mark Draelos: Department of Robotics, University of Michigan, Ann Arbor, MI 48105, USA; Department of Ophthalmology and Visual Sciences, University of Michigan Medical School, Ann Arbor, MI 48105, USA
2
Optical force estimation for interactions between tool and soft tissues. Sci Rep 2023;13:506. [PMID: 36627354] [PMCID: PMC9831996] [DOI: 10.1038/s41598-022-27036-7]
Abstract
Robotic assistance in minimally invasive surgery offers numerous advantages for both patient and surgeon. However, the lack of force feedback in robotic surgery is a major limitation, and accurately estimating tool-tissue interaction forces remains a challenge. Image-based force estimation offers a promising solution without the need to integrate sensors into surgical tools. In this indirect approach, interaction forces are derived from the observed deformation, with learning-based methods improving accuracy and real-time capability. However, the relationship between deformation and force is determined by the stiffness of the tissue. Consequently, both deformation and local tissue properties must be observed for an approach applicable to heterogeneous tissue. In this work, we use optical coherence tomography, which can combine the detection of tissue deformation with shear wave elastography in a single modality. We present a multi-input deep learning network for processing of local elasticity estimates and volumetric image data. Our results demonstrate that accounting for elastic properties is critical for accurate image-based force estimation across different tissue types and properties. Joint processing of local elasticity information yields the best performance throughout our phantom study. Furthermore, we test our approach on soft tissue samples that were not present during training and show that generalization to other tissue properties is possible.
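To make the multi-input idea concrete, below is a minimal sketch of a two-branch network that fuses a volumetric OCT patch with local elasticity estimates before regressing a force. The class name, layer sizes, and input shapes are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a two-branch ("multi-input") force-estimation network in PyTorch.
import torch
import torch.nn as nn

class MultiInputForceNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: 3D CNN encoder for a volumetric OCT patch (1 x 32 x 32 x 32).
        self.volume_branch = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> 16 features
        )
        # Branch 2: MLP for local elasticity estimates (8 shear-wave values, assumed).
        self.elasticity_branch = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
        # Joint head regresses a scalar interaction force.
        self.head = nn.Sequential(nn.Linear(16 + 16, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, volume, elasticity):
        feats = torch.cat([self.volume_branch(volume),
                           self.elasticity_branch(elasticity)], dim=1)
        return self.head(feats)

net = MultiInputForceNet()
force = net(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 8))  # shape (2, 1)
```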
3
Surgical Tool Datasets for Machine Learning Research: A Survey. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01640-6]
Abstract
This paper is a comprehensive survey of datasets for surgical tool detection and the related surgical data science and machine learning techniques and algorithms. The survey offers a high-level perspective of current research in this area, analyses the taxonomy of approaches adopted by researchers using surgical tool datasets, and addresses key areas of research, such as the datasets used, the evaluation metrics applied, and the deep learning techniques utilised. Our presentation and taxonomy provide a framework that facilitates greater understanding of current work, and highlight the challenges and opportunities for further innovative and useful research.
4
Tang EM, El-Haddad MT, Patel SN, Tao YK. Automated instrument-tracking for 4D video-rate imaging of ophthalmic surgical maneuvers. Biomedical Optics Express 2022;13:1471-1484. [PMID: 35414968] [PMCID: PMC8973184] [DOI: 10.1364/boe.450814]
Abstract
Intraoperative image-guidance provides enhanced feedback that facilitates surgical decision-making in a wide variety of medical fields and is especially useful when haptic feedback is limited. In these cases, automated instrument-tracking and localization are essential to guide surgical maneuvers and prevent damage to underlying tissue. However, instrument-tracking is challenging and often confounded by variations in the surgical environment, resulting in a trade-off between accuracy and speed. Ophthalmic microsurgery presents additional challenges due to the nonrigid relationship between instrument motion and instrument deformation inside the eye, image field distortion, image artifacts, and bulk motion due to patient movement and physiological tremor. We present an automated instrument-tracking method by leveraging multimodal imaging and deep-learning to dynamically detect surgical instrument positions and re-center imaging fields for 4D video-rate visualization of ophthalmic surgical maneuvers. We are able to achieve resolution-limited tracking accuracy at varying instrument orientations as well as at extreme instrument speeds and image defocus beyond typical use cases. As proof-of-concept, we perform automated instrument-tracking and 4D imaging of a mock surgical task. Here, we apply our methods for specific applications in ophthalmic microsurgery, but the proposed technologies are broadly applicable for intraoperative image-guidance with high speed and accuracy.
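The core control idea, detecting the instrument in each frame and shifting the imaging field to follow it, can be pictured as a simple proportional re-centering loop. In the sketch below, the brightest-pixel detector is a placeholder standing in for the paper's multimodal deep-learning detector, and the field size and gain are assumed values.

```python
# Sketch of a detect-and-recenter tracking loop for a scanned imaging field.
import numpy as np

FIELD_SIZE = 256          # en face image size in pixels (assumed)
GAIN = 0.5                # proportional gain to damp re-centering (assumed)

def detect_tip(image: np.ndarray) -> tuple[float, float]:
    """Placeholder detector: the brightest pixel stands in for a CNN."""
    iy, ix = np.unravel_index(np.argmax(image), image.shape)
    return float(ix), float(iy)

def recenter_offset(image: np.ndarray) -> tuple[float, float]:
    """Offset (in pixels) to steer the imaging field onto the detected tip."""
    tx, ty = detect_tip(image)
    cx = cy = FIELD_SIZE / 2
    return GAIN * (tx - cx), GAIN * (ty - cy)

frame = np.random.rand(FIELD_SIZE, FIELD_SIZE)   # stand-in en face frame
dx, dy = recenter_offset(frame)                  # would be sent to the scanner
```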
Affiliation(s)
- Eric M. Tang: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37232, USA
- Mohamed T. El-Haddad: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37232, USA
- Shriji N. Patel: Vanderbilt Eye Institute, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Yuankai K. Tao: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37232, USA
5
Zhou M, Wu J, Ebrahimi A, Patel N, He C, Gehlbach P, Taylor RH, Knoll A, Nasseri MA, Iordachita I. Spotlight-based 3D Instrument Guidance for Retinal Surgery. International Symposium on Medical Robotics (ISMR) 2020. [PMID: 34595483] [DOI: 10.1109/ismr48331.2020.9312952]
Abstract
Retinal surgery is a complex activity that can be challenging for a surgeon to perform effectively and safely. Image-guided robot-assisted surgery is a promising solution that can significantly enhance treatment outcomes and reduce the physical limitations of human surgeons. In this paper, we demonstrate a novel method for 3D guidance of the instrument based on the projection of a spotlight in single microscope images. The spotlight projection mechanism is first analyzed and modeled for projection onto both a plane and a spherical surface. To test the feasibility of the proposed method, a light fiber is integrated into the instrument, which is driven by the Steady-Hand Eye Robot (SHER). The spot of light is segmented and tracked on a phantom retina using the proposed algorithm. The static calibration and dynamic test results both show that the proposed method can readily achieve 0.5 mm tip-to-surface distance accuracy, which is within the clinically acceptable range for intraocular visual guidance.
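The underlying geometry is a diverging light cone whose projected spot size encodes distance. A back-of-envelope sketch follows, under an assumed divergence half-angle and fiber radius (not the paper's calibration):

```python
# Cone model: spot radius on the surface grows linearly with tip distance.
import math

HALF_ANGLE_DEG = 10.0      # fiber divergence half-angle (assumed)
CORE_RADIUS_MM = 0.1       # fiber core radius (assumed)

def tip_to_surface_distance(spot_radius_mm: float) -> float:
    """Invert the cone model: d = (r_spot - r_core) / tan(half_angle)."""
    return (spot_radius_mm - CORE_RADIUS_MM) / math.tan(math.radians(HALF_ANGLE_DEG))

print(tip_to_surface_distance(0.3))   # ~1.13 mm for a 0.3 mm spot radius
```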
Affiliation(s)
- Mingchuan Zhou: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Chair of Robotics, Artificial Intelligence and Real-time Systems, Technische Universität München, München 85748, Germany
- Jiahao Wu: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; T Stone Robotics Institute, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, HKSAR, China
- Ali Ebrahimi: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Niravkumar Patel: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Changyan He: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
- Russell H Taylor: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Alois Knoll: Chair of Robotics, Artificial Intelligence and Real-time Systems, Technische Universität München, München 85748, Germany
- M Ali Nasseri: Augenklinik und Poliklinik, Klinikum rechts der Isar der Technische Universität München, München 81675, Germany
- Iulian Iordachita: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
6
Bengs M, Gessert N, Schlaefer A. 4D spatio-temporal convolutional networks for object position estimation in OCT volumes. Current Directions in Biomedical Engineering 2020. [DOI: 10.1515/cdbme-2020-0001]
Abstract
Tracking and localizing objects is a central problem in computer-assisted surgery. Optical coherence tomography (OCT) can be employed as an optical tracking system due to its high spatial and temporal resolution. Recently, 3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single volumetric OCT images. While this approach relied on spatial information only, OCT allows for a temporal stream of OCT image volumes capturing the motion of an object at high volume rates. In this work, we systematically extend 3D CNNs to 4D spatio-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking. Across various architectures, our results demonstrate that using a stream of OCT volumes and employing 4D spatio-temporal convolutions leads to a 30% lower mean absolute error compared to single-volume processing with 3D CNNs.
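Since standard deep-learning frameworks stop at 3D convolutions, a 4D spatio-temporal convolution can be assembled from one 3D convolution per temporal tap, summed across time. The sketch below illustrates this standard construction; it is not the authors' architecture, and shapes are illustrative.

```python
# A 4D convolution built from Conv3d taps: out[t] = sum_i taps[i](x[t + i]).
import torch
import torch.nn as nn

class Conv4d(nn.Module):
    """4D conv over (T, D, H, W) with 'valid' padding along time."""
    def __init__(self, in_ch, out_ch, k_t=3, k_s=3):
        super().__init__()
        self.taps = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, k_s, padding=k_s // 2, bias=False)
            for _ in range(k_t)
        )
        self.k_t = k_t

    def forward(self, x):                        # x: (B, C, T, D, H, W)
        B, C, T, D, H, W = x.shape
        T_out = T - self.k_t + 1
        out = 0
        for i, conv in enumerate(self.taps):
            frames = x[:, :, i:i + T_out]        # frames feeding temporal tap i
            flat = frames.transpose(1, 2).reshape(B * T_out, C, D, H, W)
            y = conv(flat).reshape(B, T_out, -1, D, H, W).transpose(1, 2)
            out = out + y                        # accumulate over temporal taps
        return out                               # (B, out_ch, T_out, D, H, W)

stream = torch.randn(1, 1, 4, 16, 16, 16)       # four 16^3 OCT volumes
print(Conv4d(1, 8)(stream).shape)               # torch.Size([1, 8, 2, 16, 16, 16])
```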
Affiliation(s)
- Marcel Bengs: Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Nils Gessert: Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Alexander Schlaefer: Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
7
Spatio-temporal deep learning methods for motion estimation using 4D OCT image data. Int J Comput Assist Radiol Surg 2020;15:943-952. [PMID: 32445128] [PMCID: PMC7303100] [DOI: 10.1007/s11548-020-02178-z]
Abstract
PURPOSE: Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions. Optical coherence tomography (OCT) is an imaging modality with a high spatial and temporal resolution that has been used for intraoperative imaging and also for motion estimation, for example, in the context of ophthalmic surgery or cochleostomy. Recently, motion estimation between a template and a moving OCT image has been studied with deep learning methods to overcome the shortcomings of conventional, feature-based methods.
METHODS: We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance. For this purpose, we design and evaluate several 3D and 4D deep learning methods and propose a new deep learning approach. We also propose a temporal regularization strategy at the model output.
RESULTS: Using a tissue dataset without additional markers, our deep learning methods using 4D data outperform previous approaches. The best-performing 4D architecture achieves an average correlation coefficient (aCC) of 98.58%, compared to 85.0% for a previous 3D deep learning method. Our temporal regularization strategy at the output further improves 4D model performance to an aCC of 99.06%. In particular, our 4D method works well for larger motion and is robust toward image rotations and motion distortions.
CONCLUSIONS: We propose 4D spatio-temporal deep learning for OCT-based motion estimation. On a tissue dataset, we find that using 4D information for the model input improves performance while maintaining reasonable inference times. Our regularization strategy demonstrates that additional temporal information is also beneficial at the model output.
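Output-side temporal regularization can be pictured as an added smoothness penalty on consecutive motion estimates. A minimal sketch, with an assumed weighting (not the paper's value):

```python
# Data term plus a penalty on jumps between consecutive motion estimates.
import torch

LAMBDA = 0.1   # regularization weight (assumed hyperparameter)

def regularized_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred/target: (B, T, 3) motion vectors over a temporal window."""
    data_term = torch.mean((pred - target) ** 2)
    smooth_term = torch.mean((pred[:, 1:] - pred[:, :-1]) ** 2)
    return data_term + LAMBDA * smooth_term

loss = regularized_loss(torch.randn(4, 5, 3), torch.randn(4, 5, 3))
```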
8
Gessert N, Bengs M, Schlüter M, Schlaefer A. Deep learning with 4D spatio-temporal data representations for OCT-based force estimation. Med Image Anal 2020;64:101730. [PMID: 32492583] [DOI: 10.1016/j.media.2020.101730]
Abstract
Estimating the forces acting between instruments and tissue is a challenging problem for robot-assisted minimally invasive surgery. Recently, numerous vision-based methods have been proposed to replace electro-mechanical approaches. Moreover, optical coherence tomography (OCT) and deep learning have been used for estimating forces based on deformation observed in volumetric image data, which demonstrated the advantage of deep learning with 3D volumetric data over 2D depth images for force estimation. In this work, we extend the problem of deep learning-based force estimation to 4D spatio-temporal data with streams of 3D OCT volumes. For this purpose, we design and evaluate several methods extending spatio-temporal deep learning to 4D, which is largely unexplored so far. Furthermore, we provide an in-depth analysis of multi-dimensional image data representations for force estimation, comparing our 4D approach to previous, lower-dimensional methods. We also analyze the effect of temporal information and study the prediction of short-term future force values, which could facilitate safety features. For our 4D force estimation architectures, we find that efficient decoupling of spatial and temporal processing is advantageous. We show that using 4D spatio-temporal data outperforms all previously used data representations with a mean absolute error of 10.7 mN. We find that temporal information is valuable for force estimation and demonstrate the feasibility of force prediction.
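One plausible reading of "decoupling spatial and temporal processing" is a shared per-volume 3D encoder followed by a temporal model over the resulting feature sequence. The sketch below illustrates that pattern with assumed sizes; it is not the authors' network.

```python
# Shared 3D encoder per volume, then a GRU over the feature sequence.
import torch
import torch.nn as nn

class DecoupledForceNet(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        self.spatial = nn.Sequential(            # applied to each volume
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, feat),
        )
        self.temporal = nn.GRU(feat, feat, batch_first=True)
        self.head = nn.Linear(feat, 1)           # scalar force at the last step

    def forward(self, x):                        # x: (B, T, 1, D, H, W)
        B, T = x.shape[:2]
        f = self.spatial(x.flatten(0, 1)).reshape(B, T, -1)
        h, _ = self.temporal(f)
        return self.head(h[:, -1])

net = DecoupledForceNet()
print(net(torch.randn(2, 4, 1, 16, 16, 16)).shape)   # torch.Size([2, 1])
```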
Affiliation(s)
- Nils Gessert: Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, Hamburg 21073, Germany
- Marcel Bengs: Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, Hamburg 21073, Germany
- Matthias Schlüter: Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, Hamburg 21073, Germany
- Alexander Schlaefer: Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, Hamburg 21073, Germany
9
Schlüter M, Glandorf L, Gromniak M, Saathoff T, Schlaefer A. Concept for Markerless 6D Tracking Employing Volumetric Optical Coherence Tomography. Sensors 2020;20:2678. [PMID: 32397153] [PMCID: PMC7248981] [DOI: 10.3390/s20092678]
Abstract
Optical tracking systems are widely used, for example, to navigate medical interventions. Typically, they require the presence of known geometrical structures, the placement of artificial markers, or a prominent texture on the target's surface. In this work, we propose a 6D tracking approach employing volumetric optical coherence tomography (OCT) images. OCT has a micrometer-scale resolution and employs near-infrared light to penetrate a few millimeters into, for example, tissue. Thereby, it provides sub-surface information which we use to track arbitrary targets, even with poorly structured surfaces, without requiring markers. Our proposed system can shift the OCT's field-of-view in space and uses an adaptive correlation filter to estimate the motion at multiple locations on the target. This allows one to estimate the target's position and orientation. We show that our approach is able to track translational motion with root-mean-squared errors below 0.25 mm and in-plane rotations with errors below 0.3°. For out-of-plane rotations, our prototypical system achieves errors around 0.6°.
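Given motion estimates at multiple locations on the target, the 6D pose can be recovered by fitting a rigid transform to the displaced points. The standard Kabsch algorithm below is one plausible realization of that step, not necessarily the authors' exact solver.

```python
# Least-squares rigid fit (Kabsch) from point correspondences.
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Rotation R and translation t such that dst ~ src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ S @ U.T
    return R, cd - R @ cs

src = np.random.rand(5, 3)                 # tracked surface locations (mm)
t_true = np.array([0.1, -0.2, 0.05])
R, t = rigid_fit(src, src + t_true)        # pure translation test case
print(np.allclose(t, t_true))              # True
```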
10
Al Hajj H, Lamard M, Conze PH, Roychowdhury S, Hu X, Maršalkaitė G, Zisimopoulos O, Dedmari MA, Zhao F, Prellberg J, Sahu M, Galdran A, Araújo T, Vo DM, Panda C, Dahiya N, Kondo S, Bian Z, Vahdat A, Bialopetravičius J, Flouty E, Qiu C, Dill S, Mukhopadhyay A, Costa P, Aresta G, Ramamurthy S, Lee SW, Campilho A, Zachow S, Xia S, Conjeti S, Stoyanov D, Armaitis J, Heng PA, Macready WG, Cochener B, Quellec G. CATARACTS: Challenge on automatic tool annotation for cataRACT surgery. Med Image Anal 2018;52:24-41. [PMID: 30468970] [DOI: 10.1016/j.media.2018.11.008]
Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of video from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
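The task as framed, indicating which of 21 tools are present in each frame, is multi-label classification. A minimal sketch of that framing with a tiny placeholder backbone (not any participant's network):

```python
# Per-frame multi-label tool presence with a binary cross-entropy objective.
import torch
import torch.nn as nn

N_TOOLS = 21
model = nn.Sequential(                  # stand-in for a deep CNN backbone
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, N_TOOLS),
)
frames = torch.randn(4, 3, 64, 64)      # batch of video frames
labels = torch.randint(0, 2, (4, N_TOOLS)).float()
loss = nn.BCEWithLogitsLoss()(model(frames), labels)
presence = torch.sigmoid(model(frames)) > 0.5   # per-tool presence per frame
```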
Affiliation(s)
- Mathieu Lamard: Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France
- Pierre-Henri Conze: Inserm, UMR 1101, Brest, F-29200, France; IMT Atlantique, LaTIM UMR 1101, UBL, Brest, F-29200, France
- Xiaowei Hu: Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Muneer Ahmad Dedmari: Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany
- Fenqiang Zhao: Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310000, China
- Jonas Prellberg: Dept. of Informatics, Carl von Ossietzky University, Oldenburg, 26129, Germany
- Manish Sahu: Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Adrian Galdran: INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Teresa Araújo: Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Duc My Vo: Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Navdeep Dahiya: Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Arash Vahdat: D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada
- Chenhui Qiu: Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310000, China
- Sabrina Dill: Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Anirban Mukhopadhyay: Department of Computer Science, Technische Universität Darmstadt, Darmstadt, 64283, Germany
- Pedro Costa: INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Guilherme Aresta: Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Senthil Ramamurthy: Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Sang-Woong Lee: Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Aurélio Campilho: Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Stefan Zachow: Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Shunren Xia: Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310000, China
- Sailesh Conjeti: Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, 53127, Germany
- Danail Stoyanov: Digital Surgery Ltd, EC1V 2QY, London, UK; University College London, Gower Street, WC1E 6BT, London, UK
- Pheng-Ann Heng: Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Béatrice Cochener: Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France; Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
11
Laves MH, Kahrs LA, Ortmaier T. Volumetric 3D stitching of optical coherence tomography volumes. Current Directions in Biomedical Engineering 2018. [DOI: 10.1515/cdbme-2018-0079]
Abstract
Optical coherence tomography (OCT) is a noninvasive medical imaging modality which provides high-resolution cross-sectional images of biological tissue. However, its potential is limited by a relatively small field of view. To overcome this drawback, we describe a scheme for fully automated stitching of multiple 3D OCT volumes for panoramic imaging. The voxel displacements between two adjacent images are calculated by extending the Lucas-Kanade optical flow algorithm to dense volumetric images. A RANSAC robust estimator is used to obtain rigid transformations from the resulting flow vectors. The images are transformed into the same coordinate frame and overlapping areas are blended. The accuracy of the proposed stitching scheme is evaluated on two datasets of 7 and 4 OCT volumes, respectively. By placing the specimens on a high-accuracy motorized translational stage, ground-truth transformations are available. This results in a mean translational error between two adjacent volumes of 16.6 ± 0.8 μm (2.8 ± 0.13 voxels). To the authors' knowledge, this is the first reported stitching of multiple 3D OCT volumes using dense voxel information in the registration process. The achieved results are sufficient for providing high-accuracy OCT panoramic images. Combined with a recently available high-speed 4D OCT, our method enables interactive stitching of hand-guided acquisitions.
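The registration core, extending Lucas-Kanade to dense volumetric data, reduces to linearizing the intensity difference and solving small normal equations for a displacement. Below is a single-step translational sketch under illustrative data; real use would iterate over pyramid levels, and the RANSAC stage would then fit a rigid transform to many such local flow vectors.

```python
# One Lucas-Kanade step on dense 3D volumes: solve grad(I1) . d = I0 - I1 for d.
import numpy as np

def lk3d_step(fixed: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """One translational LK update aligning `moving` toward `fixed`."""
    gz, gy, gx = np.gradient(moving)                  # volumetric gradients
    A = np.stack([g.ravel() for g in (gx, gy, gz)], axis=1)   # (N, 3) Jacobian
    b = (fixed - moving).ravel()                      # intensity difference
    d, *_ = np.linalg.lstsq(A, b, rcond=None)         # least-squares solve
    return d                                          # (dx, dy, dz) in voxels

z, y, x = np.mgrid[:32, :32, :32]
vol0 = np.exp(-((x - 16.0)**2 + (y - 16.0)**2 + (z - 16.0)**2) / 50.0)
vol1 = np.roll(vol0, 1, axis=2)                       # shift one voxel in x
print(lk3d_step(vol0, vol1))                          # ~[1, 0, 0], up to linearization error
```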
Affiliation(s)
- Max-Heinrich Laves: Institute of Mechatronic Systems, Appelstr. 11A, 30167 Hannover, Germany
- Lüder A. Kahrs: Institute of Mechatronic Systems, Appelstr. 11A, 30167 Hannover, Germany
- Tobias Ortmaier: Institute of Mechatronic Systems, Appelstr. 11A, 30167 Hannover, Germany
12
Keller B, Draelos M, Tang G, Farsiu S, Kuo AN, Hauser K, Izatt JA. Real-time corneal segmentation and 3D needle tracking in intrasurgical OCT. Biomedical Optics Express 2018;9:2716-2732. [PMID: 30258685] [PMCID: PMC6154196] [DOI: 10.1364/boe.9.002716]
Abstract
Ophthalmic procedures demand precise surgical instrument control in depth, yet standard operating microscopes supply limited depth perception. Current commercial microscope-integrated optical coherence tomography partially meets this need with manually-positioned cross-sectional images that offer qualitative estimates of depth. In this work, we present methods for automatic quantitative depth measurement using real-time, two-surface corneal segmentation and needle tracking in OCT volumes. We then demonstrate these methods for guidance of ex vivo deep anterior lamellar keratoplasty (DALK) needle insertions. Surgeons using the output of these methods improved their ability to reach a target depth, and decreased their incidence of corneal perforations, both with statistical significance. We believe these methods could increase the success rate of DALK and thereby improve patient outcomes.
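With both corneal surfaces segmented and the needle tip tracked in the same volume coordinates, the quantitative depth readout reduces to a fraction of local corneal thickness. A minimal sketch with illustrative inputs (the flat surfaces and units are assumptions, not the paper's data):

```python
# Fractional insertion depth of a tracked tip between two segmented surfaces.
import numpy as np

def fractional_depth(tip_xyz, top_surface, bottom_surface):
    """Percent corneal depth at the tip's lateral (x, y) location.
    top/bottom_surface: 2D height maps z[x, y] from segmentation."""
    x, y, z_tip = tip_xyz
    z_top = top_surface[int(round(x)), int(round(y))]
    z_bot = bottom_surface[int(round(x)), int(round(y))]
    return 100.0 * (z_tip - z_top) / (z_bot - z_top)

top = np.full((64, 64), 20.0)       # flat stand-in surfaces (voxel units)
bottom = np.full((64, 64), 80.0)
print(fractional_depth((32, 32, 65), top, bottom))   # 75.0 (% depth)
```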
Affiliation(s)
- Brenton Keller: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Mark Draelos: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Gao Tang: Department of Mechanical Engineering, Duke University, Durham, NC 27708, USA
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
- Anthony N. Kuo: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
- Kris Hauser: Department of Electrical and Computer Engineering, Duke University, Durham, NC 27701, USA
- Joseph A. Izatt: Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA