1. Schmidt A, Mohareri O, DiMaio SP, Salcudean SE. Surgical Tattoos in Infrared: A Dataset for Quantifying Tissue Tracking and Mapping. IEEE Trans Med Imaging 2024; 43:2634-2645. [PMID: 38437151] [DOI: 10.1109/TMI.2024.3372828]
Abstract
Quantifying the performance of methods for tracking and mapping tissue in endoscopic environments is essential for enabling image guidance and automation of medical interventions and surgery. Datasets developed so far either use rigid environments, visible markers, or require annotators to label salient points in videos after collection. These are, respectively, not general, visible to algorithms, or costly and error-prone. We introduce a novel labeling methodology, along with a dataset that uses it, Surgical Tattoos in Infrared (STIR). STIR has labels that are persistent but invisible to visible-spectrum algorithms. This is done by labeling tissue points with an IR-fluorescent dye, indocyanine green (ICG), and then collecting visible-light video clips. STIR comprises hundreds of stereo video clips of both in vivo and ex vivo scenes with start and end points labeled in the IR spectrum. With over 3,000 labeled points, STIR will help quantify and enable better analysis of tracking and mapping methods. After introducing STIR, we analyze multiple frame-based tracking methods on STIR using both 3D and 2D endpoint error and accuracy metrics. STIR is available at https://dx.doi.org/10.21227/w8g4-g548.
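The 2D endpoint-error and accuracy metrics mentioned above can be sketched as follows. This is an illustrative computation, not the dataset's official evaluation code, and the pixel thresholds are assumptions:

```python
import numpy as np

def endpoint_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth
    2D endpoints (arrays of shape (N, 2), in pixels)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

def accuracy_within(pred, gt, thresholds=(4, 8, 16, 32, 64)):
    """Fraction of tracked points whose endpoint error falls under
    each pixel threshold (the threshold set is an assumption)."""
    err = np.linalg.norm(pred - gt, axis=1)
    return {t: float(np.mean(err < t)) for t in thresholds}

gt = np.array([[10.0, 10.0], [50.0, 40.0]])
pred = np.array([[13.0, 14.0], [50.0, 43.0]])
print(endpoint_error(pred, gt))   # (5 + 3) / 2 = 4.0
print(accuracy_within(pred, gt))
```

The 3D variant is identical with (N, 3) point arrays and millimeter units.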
2. Schmidt A, Mohareri O, DiMaio S, Yip MC, Salcudean SE. Tracking and mapping in medical computer vision: A review. Med Image Anal 2024; 94:103131. [PMID: 38442528] [DOI: 10.1016/j.media.2024.103131]
Abstract
As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing algorithms to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. Next, we review datasets provided in the field and the clinical needs that motivate their design. We then delve into the algorithmic side and summarize recent developments. This summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We maintain focus on algorithms for deformable environments while also reviewing the essential building blocks of rigid tracking and mapping, since there is a large amount of crossover in methods. With the field summarized, we discuss the current state of tracking and mapping methods, the needs for future algorithms and for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation.
Affiliation(s)
- Adam Schmidt
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
- Omid Mohareri
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Simon DiMaio
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Michael C Yip
- Department of Electrical and Computer Engineering, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
3. Liu S, Fan J, Yang Y, Xiao D, Ai D, Song H, Wang Y, Yang J. Monocular endoscopy images depth estimation with multi-scale residual fusion. Comput Biol Med 2024; 169:107850. [PMID: 38145602] [DOI: 10.1016/j.compbiomed.2023.107850]
Abstract
BACKGROUND Monocular depth estimation plays a fundamental role in clinical endoscopy surgery. However, the coherent illumination, smooth surfaces, and texture-less nature of endoscopy images present significant challenges to traditional depth estimation methods. Existing approaches struggle to accurately perceive depth in such settings. METHOD To overcome these challenges, this paper proposes a novel multi-scale residual fusion method for estimating the depth of monocular endoscopy images. Specifically, we address the issue of coherent illumination by leveraging image frequency domain component space transformation, thereby enhancing the stability of the scene's light source. Moreover, we employ an image radiation intensity attenuation model to estimate the initial depth map. Finally, to refine the accuracy of depth estimation, we utilize a multi-scale residual fusion optimization technique. RESULTS To evaluate the performance of our proposed method, extensive experiments were conducted on public datasets. The structural similarity measures for continuous frames in three distinct clinical data scenes reached impressive values of 0.94, 0.82, and 0.84, respectively. These results demonstrate the effectiveness of our approach in capturing the intricate details of endoscopy images. Furthermore, the depth estimation accuracy achieved remarkable levels of 89.3% and 91.2% for the two models' data, respectively, underscoring the robustness of our method. CONCLUSIONS Overall, the promising results obtained on public datasets highlight the significant potential of our method for clinical applications, facilitating reliable depth estimation and enhancing the quality of endoscopy surgical procedures.
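The accuracy percentages reported above suggest a per-pixel depth-agreement measure. A common convention in monocular depth evaluation is the threshold (δ) accuracy sketched below; the abstract does not define the paper's exact metric, so this is an assumed illustration:

```python
import numpy as np

def threshold_accuracy(pred_depth, gt_depth, delta=1.25):
    """Standard monocular-depth accuracy: fraction of pixels where
    max(pred/gt, gt/pred) < delta. The paper's exact metric may differ."""
    pred = np.asarray(pred_depth, dtype=float)
    gt = np.asarray(gt_depth, dtype=float)
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < delta))

gt = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.1, 2.0, 6.0, 8.4])
print(threshold_accuracy(pred, gt))  # ratios 1.1, 1.0, 1.5, 1.05 -> 0.75
```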
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China; China Center for Information Industry Development, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yun Yang
- Department of General Surgery, Beijing Friendship Hospital, Capital Medical University, National Clinical Research Center for Digestive Diseases, Beijing, 100050, China
- Deqiang Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
4. Rodriguez Peñaranda N, Eissa A, Ferretti S, Bianchi G, Di Bari S, Farinha R, Piazza P, Checcucci E, Belenchón IR, Veccia A, Gomez Rivas J, Taratkin M, Kowalewski KF, Rodler S, De Backer P, Cacciamani GE, De Groote R, Gallagher AG, Mottrie A, Micali S, Puliatti S. Artificial Intelligence in Surgical Training for Kidney Cancer: A Systematic Review of the Literature. Diagnostics (Basel) 2023; 13:3070. [PMID: 37835812] [PMCID: PMC10572445] [DOI: 10.3390/diagnostics13193070]
Abstract
The prevalence of renal cell carcinoma (RCC) is increasing due to advanced imaging techniques. Surgical resection is the standard treatment, involving complex radical and partial nephrectomy procedures that demand extensive training and planning. This review explores how artificial intelligence (AI) can create a framework for kidney cancer surgery that addresses these training difficulties. Following PRISMA 2020 criteria, an exhaustive search of the PubMed and SCOPUS databases was conducted without any filters or restrictions. Inclusion criteria encompassed original English-language articles focusing on AI's role in kidney cancer surgical training; all non-original articles and articles published in languages other than English were excluded. Two independent reviewers assessed the articles, with a third settling any disagreement. Study specifics, AI tools, methodologies, endpoints, and outcomes were extracted by the same authors. The Oxford Centre for Evidence-Based Medicine's evidence levels were employed to assess the studies. Out of 468 identified records, 14 eligible studies were selected. Potential AI applications in kidney cancer surgical training include analyzing surgical workflow, annotating instruments, identifying tissues, and 3D reconstruction. AI is capable of appraising surgical skills, including the identification of procedural steps and instrument tracking. While AI and augmented reality (AR) enhance training, challenges persist in real-time tracking and registration. AI-driven 3D reconstruction proves beneficial for intraoperative guidance and preoperative preparation. AI shows potential for advancing surgical training by providing unbiased evaluations, personalized feedback, and enhanced learning processes. Yet challenges such as consistent metric measurement, ethical concerns, and data privacy must be addressed. The integration of AI into kidney cancer surgical training offers solutions to training difficulties and a boost to surgical education. However, to fully harness its potential, additional studies are imperative.
Affiliation(s)
- Natali Rodriguez Peñaranda
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Ahmed Eissa
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Department of Urology, Faculty of Medicine, Tanta University, Tanta 31527, Egypt
- Stefania Ferretti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Giampaolo Bianchi
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Di Bari
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Rui Farinha
- Orsi Academy, 9090 Melle, Belgium
- Urology Department, Lusíadas Hospital, 1500-458 Lisbon, Portugal
- Pietro Piazza
- Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Enrico Checcucci
- Department of Surgery, FPO-IRCCS Candiolo Cancer Institute, 10060 Turin, Italy
- Inés Rivero Belenchón
- Urology and Nephrology Department, Virgen del Rocío University Hospital, 41013 Seville, Spain
- Alessandro Veccia
- Department of Urology, University of Verona, Azienda Ospedaliera Universitaria Integrata, 37126 Verona, Italy
- Juan Gomez Rivas
- Department of Urology, Hospital Clinico San Carlos, 28040 Madrid, Spain
- Mark Taratkin
- Institute for Urology and Reproductive Health, Sechenov University, 119435 Moscow, Russia
- Karl-Friedrich Kowalewski
- Department of Urology and Urosurgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Severin Rodler
- Department of Urology, University Hospital LMU Munich, 80336 Munich, Germany
- Pieter De Backer
- Orsi Academy, 9090 Melle, Belgium
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium
- Giovanni Enrico Cacciamani
- USC Institute of Urology, Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA 90089, USA
- Ruben De Groote
- Orsi Academy, 9090 Melle, Belgium
- Anthony G. Gallagher
- Orsi Academy, 9090 Melle, Belgium
- Faculty of Life and Health Sciences, Ulster University, Derry BT48 7JL, UK
- Alexandre Mottrie
- Orsi Academy, 9090 Melle, Belgium
- Salvatore Micali
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Puliatti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
5. Liu Z, Gao W, Zhu J, Yu Z, Fu Y. Surface deformation tracking in monocular laparoscopic video. Med Image Anal 2023; 86:102775. [PMID: 36848721] [DOI: 10.1016/j.media.2023.102775]
Abstract
Image-guided surgery has been proven to enhance the accuracy and safety of minimally invasive surgery (MIS). Nonrigid deformation tracking of soft tissue is one of the main challenges in image-guided MIS owing to tissue deformation, homogeneous texture, smoke, and instrument occlusion. In this paper, we propose a piecewise affine deformation model-based nonrigid deformation tracking method. A Markov random field-based mask generation method is developed to eliminate tracking anomalies. The deformation information vanishes when the regular constraint is invalid, which further deteriorates the tracking accuracy; a time-series deformation solidification mechanism is therefore introduced to reduce the degradation of the model's deformation field. For quantitative evaluation of the proposed method, we synthesized nine laparoscopic videos mimicking instrument occlusion and tissue deformation, and evaluated tracking robustness quantitatively on them. Three real MIS videos containing the challenges of large-scale deformation, large-range smoke, instrument occlusion, and permanent changes in soft-tissue texture were also used to evaluate the performance of the proposed method. Experimental results indicate that the proposed method outperforms state-of-the-art methods in terms of accuracy and robustness, showing good performance in image-guided MIS.
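The building block of a piecewise affine deformation model is a per-patch affine fit to tracked point correspondences; a minimal least-squares sketch (illustrative only, not the authors' implementation) looks like:

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.
    src, dst: (N, 2) arrays with N >= 3 point correspondences.
    Returns a 2x3 matrix A such that dst ~= [src, 1] @ A.T."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous source points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ A = dst
    return A.T                                   # 2x3 affine matrix

def apply_affine_2d(A, pts):
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T

# A pure translation by (2, -1) should be recovered exactly.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([2.0, -1.0])
A = fit_affine_2d(src, dst)
print(np.allclose(apply_affine_2d(A, src), dst))  # True
```

A piecewise model repeats this fit on each mesh triangle, with shared vertices keeping neighboring patches consistent.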
Affiliation(s)
- Ziteng Liu
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Wenpeng Gao
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Jiahua Zhu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Zhi Yu
- School of Life Science and Technology, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, 2 Yikuang Str., Nangang District, Harbin, 150080, China
6. Suture Looping Task Pose Planner in a Constrained Surgical Environment. J Intell Robot Syst 2022. [DOI: 10.1007/s10846-022-01772-4]
7. Yang B, Xu S, Chen H, Zheng W, Liu C. Reconstruct Dynamic Soft-Tissue With Stereo Endoscope Based on a Single-Layer Network. IEEE Trans Image Process 2022; 31:5828-5840. [PMID: 36054398] [DOI: 10.1109/TIP.2022.3202367]
Abstract
In dynamic minimally invasive surgery environments, 3D reconstruction of deformable soft-tissue surfaces from stereo endoscopic images is very challenging. A simple self-supervised stereo reconstruction framework is proposed to address this issue, which bridges traditional geometric deformable models and the newly revived neural networks. The equivalence between the classical thin plate spline (TPS) model and a single-layer fully-connected or convolutional network is studied. By alternately training two TPS-equivalent networks within the self-supervised framework, disparity priors are learnt from past stereo frames of the target tissues to form an optimized disparity basis, on which disparity maps of subsequent frames can be estimated more accurately without sacrificing computational efficiency or robustness. The proposed method was verified on stereo endoscopic videos recorded by da Vinci® surgical robots.
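The classical TPS model whose network equivalence the paper studies interpolates scattered values with an r² log r radial kernel plus an affine term. A minimal sketch of the standard TPS linear system (illustrative, single output channel; not the paper's network formulation) is:

```python
import numpy as np

def tps_kernel(r):
    """TPS radial basis U(r) = r^2 log r, with U(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def fit_tps(ctrl, vals):
    """Solve the classical thin plate spline interpolation system
    for scalar values at 2D control points (exact interpolation)."""
    n = ctrl.shape[0]
    K = tps_kernel(np.linalg.norm(ctrl[:, None] - ctrl[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), ctrl])   # affine basis [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.concatenate([vals, np.zeros(3)])
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]                  # kernel weights, affine part

def eval_tps(ctrl, w, a, pts):
    K = tps_kernel(np.linalg.norm(pts[:, None] - ctrl[None, :], axis=2))
    return K @ w + a[0] + pts @ a[1:]

ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
w, a = fit_tps(ctrl, vals)
print(np.allclose(eval_tps(ctrl, w, a, ctrl), vals))  # interpolates exactly
```

The kernel evaluation `K @ w` plus the affine term is exactly one linear layer applied to fixed radial features, which is the structural observation the paper builds on.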
8. Sun Y, Pan B, Fu Y. Correlation filters tissue tracking with application to robotic minimally invasive surgery. Int J Med Robot 2022; 18:e2440. [DOI: 10.1002/rcs.2440]
Affiliation(s)
- Yanwen Sun
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, China
- Bo Pan
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, China
- Yili Fu
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, China
9. Fu T, Fan J, Liu D, Song H, Zhang C, Ai D, Cheng Z, Liang P, Yang J. Divergence-Free Fitting-Based Incompressible Deformation Quantification of Liver. IEEE J Biomed Health Inform 2021; 25:720-736. [PMID: 32750981] [DOI: 10.1109/JBHI.2020.3013126]
Abstract
The liver is an incompressible organ that maintains its volume during respiration-induced deformation. Quantifying this deformation under the incompressibility constraint is significant for liver tracking. The constraint can be enforced by retaining the divergence-free field obtained from a deformation decomposition. However, the decomposition process is time-consuming, and removal of the non-divergence-free field weakens the deformation. In this study, a divergence-free fitting-based registration method is proposed to quantify the incompressible deformation rapidly and accurately. First, the deformation to be estimated is mapped to a velocity in a diffeomorphic space. Then, this velocity is decomposed by a fast Fourier-based Hodge-Helmholtz decomposition to obtain the divergence-free, curl-free, and harmonic fields. The curl-free field is replaced and fitted by the obtained harmonic field with a translation field to generate a new divergence-free velocity. By optimizing this velocity, the final incompressible deformation is obtained. Moreover, a deep learning framework (DLF) is constructed to accelerate the incompressible deformation quantification. An incompressible respiratory motion model is built for the DLF using the proposed registration method and is then used to augment the training data. An encoder-decoder network is introduced to learn the appearance-velocity correlation at patch scale. In the experiments, we compare the proposed registration with three state-of-the-art methods. The results show that the proposed method accurately achieves incompressible registration of the liver with a mean liver overlap ratio of 95.33%. Moreover, the time consumed by the DLF is nearly 15 times less than that of the other methods.
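The Fourier-based Hodge-Helmholtz step can be illustrated in 2D: in frequency space, subtracting the component of the velocity along the wave vector leaves the divergence-free part (a Leray projection). A sketch under periodic-boundary assumptions, not the authors' 3D liver pipeline:

```python
import numpy as np

def divergence_free_projection(vx, vy):
    """Remove the curl-free part of a periodic 2D vector field via FFT,
    keeping the divergence-free component."""
    fx, fy = np.fft.fft2(vx), np.fft.fft2(vy)
    ny, nx = vx.shape
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                 # avoid division by zero at DC
    dot = (kx * fx + ky * fy) / k2                 # projection onto wave vector
    fx_df = fx - kx * dot
    fy_df = fy - ky * dot
    fx_df[0, 0], fy_df[0, 0] = fx[0, 0], fy[0, 0]  # keep the mean flow
    return np.fft.ifft2(fx_df).real, np.fft.ifft2(fy_df).real

# A pure gradient (curl-free) field should project to numerically zero.
n = 32
X, Y = np.meshgrid(np.arange(n), np.arange(n))
vx = -np.sin(2 * np.pi * X / n)   # d/dx of a periodic potential, up to scale
vy = np.zeros_like(vx)
ux, uy = divergence_free_projection(vx, vy)
print(np.max(np.abs(ux)) < 1e-10, np.max(np.abs(uy)) < 1e-10)
```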
10. SuPer: A Surgical Perception Framework for Endoscopic Tissue Manipulation With Surgical Robotics. IEEE Robot Autom Lett 2020. [DOI: 10.1109/LRA.2020.2970659]
11. Edgcumbe P, Singla R, Pratt P, Schneider C, Nguan C, Rohling R. Follow the light: projector-based augmented reality intracorporeal system for laparoscopic surgery. J Med Imaging (Bellingham) 2018; 5:021216. [PMID: 29487888] [DOI: 10.1117/1.JMI.5.2.021216]
Abstract
A projector-based augmented reality intracorporeal system (PARIS) is presented that includes a miniature tracked projector, tracked marker, and laparoscopic ultrasound (LUS) transducer. PARIS was developed to improve the efficacy and safety of laparoscopic partial nephrectomy (LPN). In particular, it has been demonstrated to effectively assist in the identification of tumor boundaries during surgery and to improve the surgeon's understanding of the underlying anatomy. PARIS achieves this by displaying the orthographic projection of the cancerous tumor on the kidney's surface. The performance of PARIS was evaluated in a user study with two surgeons who performed 32 simulated robot-assisted partial nephrectomies: 16 with PARIS for guidance and 16 with only an LUS transducer for guidance. With PARIS, there was a significant reduction of 30% [Formula: see text] in the amount of healthy tissue excised and a trend toward a more accurate dissection around the tumor and more negative margins. The combined point tracking and reprojection root-mean-square error of PARIS was 0.8 mm. PARIS' proven ability to improve key metrics of LPN surgery and qualitative feedback from surgeons support the hypothesis that it is an effective surgical navigation tool.
Affiliation(s)
- Philip Edgcumbe
- University of British Columbia, MD/PhD Program, Vancouver, Canada
- Rohit Singla
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Philip Pratt
- Imperial College London, Department of Surgery and Cancer, London, United Kingdom
- Caitlin Schneider
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Christopher Nguan
- University of British Columbia, Department of Urological Sciences, Vancouver, Canada
- Robert Rohling
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada; University of British Columbia, Department of Mechanical Engineering, Vancouver, Canada
12. Penza V, Du X, Stoyanov D, Forgione A, Mattos LS, De Momi E. Long Term Safety Area Tracking (LT-SAT) with online failure detection and recovery for robotic minimally invasive surgery. Med Image Anal 2018; 45:13-23. [PMID: 29329053] [DOI: 10.1016/j.media.2017.12.010]
Abstract
Despite the benefits introduced by robotic systems in abdominal Minimally Invasive Surgery (MIS), major complications such as intra-operative bleeding can still affect the outcome of the procedure. One of the causes is accidental damage to arteries or veins by the surgical tools, and some of the possible risk factors are related to the lack of sub-surface visibility. Assistive tools guiding the surgical gestures to prevent these kinds of injuries would represent a relevant step towards safer clinical procedures. However, it is still challenging to develop computer vision systems able to fulfill the main requirements: (i) long-term robustness, (ii) adaptation to environment/object variation, and (iii) real-time processing. The purpose of this paper is to develop computer vision algorithms to robustly track soft tissue areas (Safety Areas, SA), defined intra-operatively by the surgeon based on the real-time endoscopic images, or registered from a pre-operative surgical plan. We propose a framework combining an optical flow algorithm with a tracking-by-detection approach in order to be robust against failures caused by: (i) partial occlusion, (ii) total occlusion, (iii) the SA moving out of the field of view, (iv) deformation, (v) illumination changes, (vi) abrupt camera motion, (vii) blur, and (viii) smoke. A Bayesian inference-based approach is used to detect failure of the tracker based on online context information. A Model Update Strategy (MUpS) is also proposed to improve SA re-detection after failures, taking into account changes in the appearance of the SA model due to contact with instruments or image noise. The performance of the algorithm was assessed on two datasets representing ex vivo organs and in vivo surgical scenarios. Results show that the proposed framework, enhanced with MUpS, is capable of maintaining high tracking performance for extended periods of time (≃4 min, containing the aforementioned events) with high precision (0.7) and recall (0.8), and with a recovery time after failure of between 1 and 8 frames in the worst case.
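Frame-level precision and recall like the values quoted above can be computed from per-frame overlap between the tracked Safety Area and ground truth. The IoU threshold and failure-counting convention below are assumptions (the abstract does not specify them), and the sketch assumes the ground-truth region is present in every frame:

```python
def precision_recall(ious, reported, tau=0.5):
    """Frame-level precision/recall for a region tracker.
    ious[i]: overlap with ground truth in frame i;
    reported[i]: whether the tracker output a region in frame i;
    tau: assumed IoU threshold for a correct frame."""
    tp = sum(1 for iou, rep in zip(ious, reported) if rep and iou >= tau)
    fp = sum(1 for iou, rep in zip(ious, reported) if rep and iou < tau)
    fn = sum(1 for rep in reported if not rep)  # declared failures count as misses
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

ious = [0.8, 0.6, 0.2, 0.0, 0.9]
reported = [True, True, True, False, True]
print(precision_recall(ious, reported))  # (0.75, 0.75)
```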
Affiliation(s)
- Veronica Penza
- Department of Electronics Information and Bioengineering, Politecnico di Milano, P.zza L. Da Vinci, 32, Milano 20133, Italy; Department of Advanced Robotics, Istituto Italiano di Tecnologia, via Morego, 30, Genova, 16163, Italy.
- Xiaofei Du
- Centre for Medical Image Computing, Department of Computer Science, University College London, United Kingdom
- Danail Stoyanov
- Centre for Medical Image Computing, Department of Computer Science, University College London, United Kingdom
- Antonello Forgione
- Ospedale Niguarda Ca' Granda, P.zza Dell'Ospedale Maggiore, 3, Milano 20162, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, via Morego, 30, Genova, 16163, Italy
- Elena De Momi
- Department of Electronics Information and Bioengineering, Politecnico di Milano, P.zza L. Da Vinci, 32, Milano 20133, Italy
13. Marmol A, Peynot T, Eriksson A, Jaiprakash A, Roberts J, Crawford R. Evaluation of Keypoint Detectors and Descriptors in Arthroscopic Images for Feature-Based Matching Applications. IEEE Robot Autom Lett 2017. [DOI: 10.1109/LRA.2017.2714150]
14. Detmer FJ, Hettig J, Schindele D, Schostak M, Hansen C. Virtual and Augmented Reality Systems for Renal Interventions: A Systematic Review. IEEE Rev Biomed Eng 2017; 10:78-94. [PMID: 28885161] [DOI: 10.1109/RBME.2017.2749527]
Abstract
PURPOSE Many virtual and augmented reality systems have been proposed to support renal interventions. This paper reviews such systems employed in the treatment of renal cell carcinoma and renal stones. METHODS A systematic literature search was performed. Inclusion criteria were virtual and augmented reality systems for radical or partial nephrectomy and renal stone treatment, excluding systems solely developed or evaluated for training purposes. RESULTS In total, 52 research papers were identified and analyzed. Most of the identified literature (87%) deals with systems for renal cell carcinoma treatment. About 44% of the systems have already been employed in clinical practice, but only 20% in studies with ten or more patients. Main challenges remaining for future research include the consideration of organ movement and deformation, human factor issues, and the conduct of large clinical studies. CONCLUSION Augmented and virtual reality systems have the potential to improve the safety and outcomes of renal interventions. In the last ten years, many technical advances have led to more sophisticated systems, which are already applied in clinical practice. Further research is required to cope with the current limitations of virtual and augmented reality assistance in clinical environments.
15. Schoob A, Kundrat D, Kahrs LA, Ortmaier T. Stereo vision-based tracking of soft tissue motion with application to online ablation control in laser microsurgery. Med Image Anal 2017. [PMID: 28624755] [DOI: 10.1016/j.media.2017.06.004]
Abstract
Recent research has revealed that image-based methods can enhance accuracy and safety in laser microsurgery. In this study, non-rigid tracking using surgical stereo imaging and its application to laser ablation are discussed. A recently developed motion estimation framework based on piecewise affine deformation modeling is extended with a mesh refinement step and the use of texture information. This compensates for tracking inaccuracies potentially caused by inconsistent feature matches or drift. To facilitate online application of the method, computational load is reduced by concurrent processing and affine-invariant fusion of tracking and refinement results. The residual latency-dependent tracking error is further minimized by Kalman filter-based upsampling using a motion model in disparity space. Accuracy is assessed on laparoscopic, beating-heart, and laryngeal sequences with challenging conditions, such as partial occlusions and significant deformation, and performance is compared with that of state-of-the-art methods. In addition, the online capability of the method is evaluated by tracking two motion patterns performed by a high-precision parallel-kinematic platform. Related experiments are discussed for a tissue substitute and porcine soft tissue in order to compare performance in an ideal scenario and in a setup mimicking clinical conditions. In the soft-tissue trial, the tracking error is significantly reduced, from 0.72 mm to below 0.05 mm, with mesh refinement. To demonstrate online laser path adaptation during ablation, the non-rigid tracking framework is integrated into a setup consisting of a surgical Er:YAG laser, a three-axis scanning unit, and a low-noise stereo camera. Across all error sources, such as laser-to-camera registration, camera calibration, image-based tracking, and scanning latency, the ablation root-mean-square error is kept below 0.21 mm when the sample moves according to the aforementioned patterns. Final experiments on motion-compensated laser ablation of structurally deforming tissue highlight the potential of the method for vision-guided laser surgery.
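The latency compensation described in this abstract hinges on Kalman-filter prediction with a motion model. As a generic 1D illustration only (not the authors' disparity-space formulation; the noise variances `q` and `r` are arbitrary assumptions), a constant-velocity Kalman filter can be sketched as:

```python
import numpy as np

def kalman_cv_1d(measurements, dt=1.0, q=1e-3, r=1e-1):
    """Constant-velocity Kalman filter over 1D positions.

    Returns filtered positions; q and r are illustrative process and
    measurement noise variances, not values from the paper.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

# linear motion: the constant-velocity model matches, so estimates converge
out = kalman_cv_1d([float(k) for k in range(21)])
```

The same predict step run ahead of the latest measurement is what allows upsampling past a latency gap.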
Affiliation(s)
- Andreas Schoob, Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany
- Dennis Kundrat, Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany
- Lüder A Kahrs, Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany
- Tobias Ortmaier, Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany
|
16
|
Penza V, De Momi E, Enayati N, Chupin T, Ortiz J, Mattos LS. EnViSoRS: Enhanced Vision System for Robotic Surgery. A User-Defined Safety Volume Tracking to Minimize the Risk of Intraoperative Bleeding. Front Robot AI 2017. [DOI: 10.3389/frobt.2017.00015]
|
17
|
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]
|
18
|
Che C, Mathai TS, Galeotti J. Ultrasound registration: A review. Methods 2017; 115:128-143. [DOI: 10.1016/j.ymeth.2016.12.006]
|
19
|
Soft tissue motion tracking with application to tablet-based incision planning in laser surgery. Int J Comput Assist Radiol Surg 2016; 11:2325-2337. [PMID: 27250855] [DOI: 10.1007/s11548-016-1420-5]
Abstract
PURPOSE: Recent research has revealed that incision planning in laser surgery using a stylus and tablet outperforms micromanipulator control. However, vision-based adaptation to dynamic surgical scenes has not been addressed so far. In this study, scene motion compensation for tablet-based planning by means of tissue deformation tracking is discussed.
METHODS: A stereo-based method for motion tracking with piecewise affine deformation modeling is presented. The proposed parametrization relies on the epipolar constraint to enforce left-right consistency in the energy minimization problem. Furthermore, the method implements illumination-invariant tracking and appearance-based occlusion detection. Performance is assessed on laparoscopic and laryngeal in vivo data. In particular, tracking accuracy is measured under various conditions such as occlusions and simulated laser cuttings. Experimental validation is extended to a user study conducted on a tablet-based interface that integrates the tracking for image stabilization.
RESULTS: Tracking accuracy measurement reveals a root-mean-square error of 2.45 mm for the laparoscopic and 0.41 mm for the laryngeal dataset. The results successfully demonstrate stereoscopic tracking under changes in illumination, translation, rotation, and scale. In particular, the proposed occlusion detection scheme can increase robustness against tracking failure. Moreover, assessed user performance indicates significantly increased path-tracing accuracy and usability when the proposed tracking is deployed to stabilize the view during free-hand path definition.
CONCLUSION: The presented algorithm successfully extends piecewise affine deformation tracking to stereo vision, taking the epipolar constraint into account. Improved surgical performance, as demonstrated for laser incision planning, highlights the potential of the presented method for further applications in computer-assisted surgery.
|
20
|
Haouchine N, Dequidt J, Berger MO, Cotin S. Monocular 3D Reconstruction and Augmentation of Elastic Surfaces with Self-Occlusion Handling. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2015; 21:1363-1376. [PMID: 26529459] [DOI: 10.1109/tvcg.2015.2452905]
Abstract
This paper focuses on 3D shape recovery and augmented reality for elastic objects with self-occlusion handling, using only single-view images. Shape recovery from a monocular video sequence is an underconstrained problem, and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the-art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility, or resort to shading constraints. However, few of them can properly handle large elastic deformations. We propose in this paper a real-time method that uses a mechanical model and is able to handle highly elastic objects. The problem is formulated as an energy minimization problem accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This formulation avoids restrictive assumptions and specific constraint terms in the minimization. In addition, we propose to handle self-occluded regions thanks to the ability of mechanical models to provide appropriate predictions of the shape. Our method is compared to existing techniques in experiments conducted on computer-generated and real data that show the effectiveness of recovering and augmenting 3D elastic objects. Additionally, experiments in the context of minimally invasive liver surgery are provided, and results on deformations in the presence of self-occlusions are presented.
|
21
|
Loukas C, Georgiou E. Performance comparison of various feature detector-descriptors and temporal models for video-based assessment of laparoscopic skills. Int J Med Robot 2015; 12:387-98. [PMID: 26415583] [DOI: 10.1002/rcs.1702]
Abstract
BACKGROUND: Despite the significant progress in hand gesture analysis for surgical skills assessment, video-based analysis has not received much attention. In this study we investigate the application of various feature detector-descriptors and temporal modeling techniques for laparoscopic skills assessment.
METHODS: Two different setups were designed: static and dynamic video-histogram analysis. Four well-known feature detection-extraction methods were investigated: SIFT, SURF, STAR-BRIEF, and STIP-HOG. For the dynamic setup, two temporal models were employed (an LDS and a GMMAR model). Each method was evaluated for its ability to classify experts and novices on peg transfer and knot tying.
RESULTS: STIP-HOG yielded the best performance (static: 74-79%; dynamic: 80-89%). The temporal models had equivalent performance. Important differences were found between the two groups with respect to the underlying dynamics of the video-histogram sequences.
CONCLUSIONS: Temporal modeling of feature histograms extracted from laparoscopic training videos provides information about the skill level and motion pattern of the operator.
Affiliation(s)
- Constantinos Loukas, Medical Physics Lab-Simulation Center, School of Medicine, University of Athens, Greece
- Evangelos Georgiou, Medical Physics Lab-Simulation Center, School of Medicine, University of Athens, Greece
|
22
|
Lin B, Sun Y, Qian X, Goldgof D, Gitlin R, You Y. Video-based 3D reconstruction, laparoscope localization and deformation recovery for abdominal minimally invasive surgery: a survey. Int J Med Robot 2015; 12:158-78. [DOI: 10.1002/rcs.1661]
Affiliation(s)
- Bingxiong Lin, Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Yu Sun, Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Xiaoning Qian, Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA
- Dmitry Goldgof, Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Richard Gitlin, Department of Electrical Engineering, University of South Florida, Tampa, FL, USA
- Yuncheng You, Department of Mathematics and Statistics, University of South Florida, Tampa, FL, USA
|
23
|
Lin B, Sun Y, Sanchez JE, Qian X. Efficient Vessel Feature Detection for Endoscopic Image Analysis. IEEE Trans Biomed Eng 2015; 62:1141-50. [DOI: 10.1109/tbme.2014.2373273]
|
24
|
Selka F, Nicolau S, Agnus V, Bessaid A, Marescaux J, Soler L. Context-specific selection of algorithms for recursive feature tracking in endoscopic image using a new methodology. Comput Med Imaging Graph 2015; 40:49-61. [PMID: 25542640] [DOI: 10.1016/j.compmedimag.2014.11.012]
Abstract
In minimally invasive surgery, the tracking of deformable tissue is a critical component for image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color, etc.). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images is still difficult and not standardized, mainly due to the lack of ground truth for real data. Moreover, in order to avoid supplementary techniques to remove outliers, no quantitative work has been undertaken to evaluate the benefit of pre-processing based on image filtering, which can improve feature tracking robustness. In this paper, we propose a methodology to validate detection and feature tracking algorithms using a scheme based on forward-backward tracking that provides artificial ground truth. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework with a strategy to identify the best combinations from a set of detector, tracker, and pre-processing algorithms, according to the live intra-operative data. Experimental results on in vivo datasets show that pre-processing can have a strong influence on tracking performance and that our strategy for finding the best combinations is relevant for a reasonable computation cost.
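The forward-backward idea summarized in this abstract can be sketched compactly: track points to the end of a clip, track them back to the start, and take the round-trip displacement as a pseudo-ground-truth error. The `track` interface and the toy translation tracker below are hypothetical stand-ins for any real feature tracker, not the paper's implementation:

```python
import numpy as np

def forward_backward_error(track, frames, points):
    """Round-trip (forward-backward) tracking error per point.

    track(src_frame, dst_frame, pts) -> tracked pts is an assumed,
    generic tracker interface.
    """
    pts = np.asarray(points, dtype=float)
    fwd = pts
    for a, b in zip(frames[:-1], frames[1:]):      # track forward
        fwd = track(a, b, fwd)
    back = fwd
    rev = frames[::-1]
    for a, b in zip(rev[:-1], rev[1:]):            # track backward
        back = track(a, b, back)
    # distance between each start point and its round-trip estimate
    return np.linalg.norm(pts - back, axis=1)

# toy tracker: each "frame" is a 2D offset, motion is pure translation
def toy_track(a, b, pts):
    return pts + (b - a)

frames = [np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([3.0, 1.0])]
pts = np.array([[0.0, 0.0], [5.0, 5.0]])
err = forward_backward_error(toy_track, frames, pts)
```

A perfectly consistent tracker (the toy one is, by construction) gives zero round-trip error; real trackers accumulate drift that this measure exposes without manual labels.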
Affiliation(s)
- F Selka, Biomedical Engineering Laboratory, Sciences Engineering Faculty, Abou Bekr Belkaid University, Tlemcen, Algeria; Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France
- S Nicolau, Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France
- V Agnus, Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France
- A Bessaid, Biomedical Engineering Laboratory, Sciences Engineering Faculty, Abou Bekr Belkaid University, Tlemcen, Algeria
- J Marescaux, Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France; IHU, 1 place de l'Hopital, Strasbourg, France
- L Soler, Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France; IHU, 1 place de l'Hopital, Strasbourg, France
|
25
|
Wang B, Hu W, Liu J, Si J, Duan H. Gastroscopic image graph: application to noninvasive multitarget tracking under gastroscopy. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2014; 2014:974038. [PMID: 25214891] [PMCID: PMC4158259] [DOI: 10.1155/2014/974038]
Abstract
Gastroscopic examination is one of the most common methods for gastric disease diagnosis. In this paper, a multitarget tracking approach is proposed to assist endoscopists in identifying lesions under gastroscopy. This approach analyzes numerous preobserved gastroscopic images and constructs a gastroscopic image graph. In this way, the deformation registration between gastroscopic images is regarded as a graph search problem. During the procedure, the endoscopist marks suspicious lesions on the screen and the graph is utilized to locate and display the lesions in the appropriate frames based on the calculated registration model. Compared to traditional gastroscopic lesion surveillance methods (e.g., tattooing or probe-based optical biopsy), this approach is noninvasive and does not require additional instruments. In order to assess and quantify the performance, this approach was applied to stomach phantom data and in vivo data. The clinical experimental results demonstrated that the accuracy at angularis, antral, and stomach body was 6.3 ± 2.4 mm, 7.6 ± 3.1 mm, and 7.9 ± 1.6 mm, respectively. The mean accuracy was 7.31 mm, average targeting time was 56 ms, and the P value was 0.032, which makes it an attractive candidate for clinical practice. Furthermore, this approach provides a significant reference for endoscopic target tracking of other soft tissue organs.
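Casting registration as a search over an image graph, as this abstract describes, reduces lesion relocation to shortest-path queries between frames. A generic sketch with Dijkstra's algorithm (the node names and dissimilarity weights are toy values for illustration, not data from the paper):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances over a weighted image graph.

    graph: {node: [(neighbor, weight), ...]}, where a weight could be
    an image-dissimilarity score between two frames.
    """
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# toy graph of four "frames" linked by dissimilarity weights
g = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0), ("D", 5.0)],
    "C": [("A", 4.0), ("B", 2.0), ("D", 1.0)],
    "D": [("B", 5.0), ("C", 1.0)],
}
d = dijkstra(g, "A")
```

The minimum-cost path then gives the chain of pairwise registrations to compose when propagating a marked lesion from one frame to another.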
Affiliation(s)
- Bin Wang, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang 310027, China; Key Laboratory for Biomedical Engineering, Ministry of Education, Hangzhou, Zhejiang 310027, China
- Weiling Hu, Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, Zhejiang 310016, China
- Jiquan Liu, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang 310027, China; Key Laboratory for Biomedical Engineering, Ministry of Education, Hangzhou, Zhejiang 310027, China
- Jianmin Si, Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, Zhejiang 310016, China
- Huilong Duan, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang 310027, China; Key Laboratory for Biomedical Engineering, Ministry of Education, Hangzhou, Zhejiang 310027, China
|
26
|
Kumar AN, Miga MI, Pheiffer TS, Chambless LB, Thompson RC, Dawant BM. Persistent and automatic intraoperative 3D digitization of surfaces under dynamic magnifications of an operating microscope. Med Image Anal 2014; 19:30-45. [PMID: 25189364] [DOI: 10.1016/j.media.2014.07.004]
Abstract
One of the major challenges impeding advancement in image-guided surgical (IGS) systems is soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient's preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for tissue deformations include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement, and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to the surgical workflow. Several modalities, such as textured laser scanners, conoscopic holography, and stereo-pair cameras, have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (∼1 h) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 clinical videos of the entire brain tumor surgery. When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root-mean-square errors (surface-to-surface distance) in the 0.28-0.81 mm range on the phantom object and in the 0.54-1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient's preoperative images and facilitate active surgical guidance.
Affiliation(s)
- Ankur N Kumar, Vanderbilt University, Department of Electrical Engineering, Nashville, TN 37235, USA
- Michael I Miga, Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37235, USA
- Thomas S Pheiffer, Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37235, USA
- Lola B Chambless, Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, TN 37232, USA
- Reid C Thompson, Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, TN 37232, USA
- Benoit M Dawant, Vanderbilt University, Department of Electrical Engineering, Nashville, TN 37235, USA
|
27
|
Selka F, Agnus V, Nicolau S, Bessaid A, Soler L, Marescaux J, Diana M. Fluorescence-Based Enhanced Reality for Colorectal Endoscopic Surgery. BIOMEDICAL IMAGE REGISTRATION 2014. [DOI: 10.1007/978-3-319-08554-8_12]
|
28
|
Plantefève R, Haouchine N, Radoux JP, Cotin S. Automatic Alignment of Pre and Intraoperative Data Using Anatomical Landmarks for Augmented Laparoscopic Liver Surgery. BIOMEDICAL SIMULATION 2014. [DOI: 10.1007/978-3-319-12057-7_7]
|
29
|
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med Image Anal 2013; 17:974-96. [DOI: 10.1016/j.media.2013.04.003]
|
30
|
Hughes-Hallett A, Mayer EK, Marcus HJ, Cundy TP, Pratt PJ, Darzi AW, Vale JA. Augmented reality partial nephrectomy: examining the current status and future perspectives. Urology 2013; 83:266-73. [PMID: 24149104] [DOI: 10.1016/j.urology.2013.08.049]
Abstract
A minimal access approach to partial nephrectomy has historically been under-utilized, but is now becoming more popular with the growth of robot-assisted laparoscopy. One of the criticisms of minimal access partial nephrectomy is the loss of haptic feedback. Augmented reality operating environments are forecast to play a major enabling role in the future of minimal access partial nephrectomy by integrating enhanced visual information to supplement this loss of haptic sensation. In this article, we systematically examine the current status of augmented reality in partial nephrectomy by identifying existing research challenges and exploring future agendas for this technology to achieve wider clinical translation.
Affiliation(s)
- Erik K Mayer, Department of Surgery and Cancer, Imperial College London, United Kingdom
- Hani J Marcus, The Hamlyn Centre, Institute of Global Health Innovation, Imperial College London, United Kingdom
- Thomas P Cundy, The Hamlyn Centre, Institute of Global Health Innovation, Imperial College London, United Kingdom
- Philip J Pratt, The Hamlyn Centre, Institute of Global Health Innovation, Imperial College London, United Kingdom
- Ara W Darzi, Department of Surgery and Cancer, Imperial College London, United Kingdom; The Hamlyn Centre, Institute of Global Health Innovation, Imperial College London, United Kingdom
- Justin A Vale, Department of Surgery and Cancer, Imperial College London, United Kingdom
|
31
|
O'Doherty J, Henricson J, Falk M, Anderson CD. Correcting for possible tissue distortion between provocation and assessment in skin testing: the divergent beam UVB photo-test. Skin Res Technol 2013; 19:368-74. [PMID: 23551145] [DOI: 10.1111/srt.12055]
Abstract
BACKGROUND: In tissue viability imaging (TiVi), an assessment method for skin erythema, correct registration of the skin position from provocation to assessment optimizes data interpretation. Image processing algorithms could compensate for the effects of skin translation, torsion, and rotation by realigning assessment images to the position of the skin at provocation.
METHODS: A reference image of a divergent-beam UVB phototest was acquired, as well as test images at varying levels of translation, rotation, and torsion. Using 12 skin markers, an algorithm was applied to restore the distorted test images to the reference image.
RESULTS: The algorithm corrected torsion and rotation up to approximately 35 degrees. The radius of the erythemal reaction and the average value of the input image closely matched the reference image's 'true value'.
CONCLUSION: The image 'de-warping' procedure improves the robustness of response-image evaluation in a clinical research setting and opens the possibility of correcting possibly flawed images captured away from the laboratory by the subject/patient themselves. This opportunity may increase the use of photo-testing and, by extension, other late-response skin testing where the necessity of a return assessment visit is a disincentive to performing the test.
Affiliation(s)
- Jim O'Doherty, Division of Imaging Sciences & Biomedical Engineering, King's College London, St Thomas' Hospital, London, SE1 7EH, United Kingdom
|