1
Ma L, Tomii N, Wang J, Kiyomatsu H, Tsukihara H, Kobayashi E, Sakuma I. Robust and fast laparoscopic vision-based ultrasound probe tracking using a binary dot array marker. Comput Biol Med 2022; 145:105406. [PMID: 35339847] [DOI: 10.1016/j.compbiomed.2022.105406]
Abstract
Laparoscopic vision-based ultrasound probe tracking systems have gained considerable attention in ultrasound-guided laparoscopic surgery as replacements for external tracking systems (e.g. optical and electromagnetic tracking systems), which increase cost and setup time, require additional operating space, and introduce new limitations. Most existing laparoscopic ultrasound (LUS) probe tracking systems rely on fiducial markers, which cannot easily realise fast and robust vision-based tracking in laparoscopic surgery owing to their design limitations. Therefore, we propose a novel binary dot array marker to realise a robust and fast LUS probe tracking system. The marker comprises two types of dots (green and blue), which form multiple unique identification dot subarrays within the binary dot array. The marker can be tracked whenever any one of these identification subarrays is detected and identified; this design makes the marker-based probe tracking system robust against occlusions during surgery. The evaluation results indicate that the proposed marker outperforms state-of-the-art fiducial markers used for vision-based probe tracking in terms of robustness, computational efficiency, and tracking accuracy.
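The occlusion robustness described above rests on every small subarray of the dot grid carrying a unique ID. The paper's actual encoding is not given in the abstract; the sketch below assumes, purely for illustration, that each 2x2 window of a binary grid encodes a 4-bit ID, so recognising any single window localises it within the full marker.

```python
import numpy as np

def subarray_ids(grid):
    """Decode every 2x2 window of a binary dot grid into an integer ID.

    Each window's four bits (row-major) form one ID; if every window ID is
    unique, detecting any one window is enough to localise the marker,
    which is what makes such a design robust to partial occlusion.
    (Hypothetical encoding; the paper's scheme may differ.)
    """
    grid = np.asarray(grid)
    h, w = grid.shape
    ids = {}
    for r in range(h - 1):
        for c in range(w - 1):
            window = grid[r:r + 2, c:c + 2].ravel()
            ids[(r, c)] = int("".join(map(str, window)), 2)
    return ids

# A 3x3 binary grid whose four 2x2 windows all carry distinct IDs.
grid = [[0, 1, 0],
        [1, 1, 0],
        [0, 0, 1]]
ids = subarray_ids(grid)
all_unique = len(set(ids.values())) == len(ids)
```

With `all_unique` holding, a detector that sees only one window can still report the probe pose relative to the whole marker.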
Affiliation(s)
- Lei Ma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Naoki Tomii
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang
- School of Mechanical Engineering, Beihang University, Beijing, China
- Etsuko Kobayashi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
2
Azargoshasb S, van Alphen S, Slof LJ, Rosiello G, Puliatti S, van Leeuwen SI, Houwing KM, Boonekamp M, Verhart J, Dell'Oglio P, van der Hage J, van Oosterom MN, van Leeuwen FWB. The Click-On gamma probe, a second-generation tethered robotic gamma probe that improves dexterity and surgical decision-making. Eur J Nucl Med Mol Imaging 2021; 48:4142-4151. [PMID: 34031721] [PMCID: PMC8566398] [DOI: 10.1007/s00259-021-05387-z]
Abstract
Purpose Decision-making and dexterity, features that become increasingly relevant in (robot-assisted) minimally invasive surgery, are considered key components in improving surgical accuracy. Recently, DROP-IN gamma probes were introduced to facilitate radioguided robotic surgery. We now studied whether robotic DROP-IN radioguidance can be further improved using tethered Click-On designs that integrate gamma detection onto the robotic instruments themselves. Methods Using computer-assisted drawing software, 3D printing, and precision machining, we created a Click-On probe containing two press-fit connections and an additional grasping moiety for a ProGrasp instrument, combined with fiducials that could be video-tracked using the Firefly laparoscope. Using a dexterity phantom, the duration of specific tasks and the path traveled could be compared between use of the Click-On and DROP-IN probes. To study the impact on surgical decision-making, we performed a blinded study in porcine models, wherein surgeons had to identify a hidden 57Co source using either palpation or Click-On radioguidance. Results When assembled onto a ProGrasp instrument, while preserving grasping function and rotational freedom, the fully functional prototype could be inserted through a 12-mm trocar. In dexterity assessments, the Click-On provided a 40% reduction in movements compared to the DROP-IN, which translated into reductions in time and path length and an increase in straightness index. Radioguidance also improved decision-making: the task-completion rate increased by 60%, procedural time was reduced, and movements became more focused. Conclusion The Click-On gamma probe provides a step toward full integration of radioguidance in minimally invasive surgery. The value of this concept was underlined by its impact on surgical dexterity and decision-making.
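The dexterity metrics reported above (path length, straightness index) can be sketched from a tracked tool trajectory. The abstract does not define the straightness index; the sketch assumes the common definition of end-to-end distance divided by total path length, which is 1.0 for a perfectly straight move.

```python
import numpy as np

def path_metrics(points):
    """Return (path_length, straightness_index) for a 3D instrument trajectory.

    Straightness index is taken here as the end-to-end distance divided by
    the total path length (an assumption; the paper may define it differently).
    """
    pts = np.asarray(points, dtype=float)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # per-segment lengths
    path_length = float(steps.sum())
    direct = float(np.linalg.norm(pts[-1] - pts[0]))      # straight-line distance
    return path_length, direct / path_length

# A right-angle detour: 3 units across, then 4 units up.
length, straightness = path_metrics([[0, 0, 0], [3, 0, 0], [3, 4, 0]])
```

A shorter path and a straightness index closer to 1.0 correspond to the more focused movements the study observed with the Click-On probe.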
Affiliation(s)
- Samaneh Azargoshasb
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands
- Simon van Alphen
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Leon J Slof
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Instrumentele zaken ontwikkeling, facilitair bedrijf, Leiden University Medical Center, Leiden, the Netherlands
- Giuseppe Rosiello
- Department of Urology and Division of Experimental Oncology, Urological Research Institute IRCCS San Raffaele Scientific Institute, Milan, Italy
- Stefano Puliatti
- Department of Urology, University of Modena and Reggio Emilia, Via del Pozzo, 71, 41124, Modena, Italy; ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Sven I van Leeuwen
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Krijn M Houwing
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Michael Boonekamp
- Instrumentele zaken ontwikkeling, facilitair bedrijf, Leiden University Medical Center, Leiden, the Netherlands
- Jeroen Verhart
- Instrumentele zaken ontwikkeling, facilitair bedrijf, Leiden University Medical Center, Leiden, the Netherlands
- Paolo Dell'Oglio
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands; ORSI Academy, Melle, Belgium; Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Jos van der Hage
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Matthias N van Oosterom
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands
- Fijs W B van Leeuwen
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands; ORSI Academy, Melle, Belgium
3
Novel Multimodal, Multiscale Imaging System with Augmented Reality. Diagnostics (Basel) 2021; 11:441. [PMID: 33806547] [PMCID: PMC7999725] [DOI: 10.3390/diagnostics11030441]
Abstract
A novel multimodal, multiscale imaging system with augmented reality capability was developed and characterized. The system offers 3D color reflectance imaging, 3D fluorescence imaging, and augmented reality in real time. Multiscale fluorescence imaging was enabled by developing and integrating an in vivo fiber-optic microscope. Real-time ultrasound-fluorescence multimodal imaging used optically tracked fiducial markers for registration; tomographic data were also incorporated using optically tracked fiducial markers for registration. Furthermore, we characterized system performance and registration accuracy in a benchtop setting. The multiscale fluorescence imaging facilitated assessing the functional status of tissues, extending the minimal resolution of fluorescence imaging to ~17.5 µm. The system achieved a mean target registration error of less than 2 mm for registering fluorescence images to ultrasound images and an MRI-based 3D model, which is within a clinically acceptable range. The low latency and high frame rate of the prototype system show the promise of applying the reported techniques in clinically relevant settings in the future.
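The accuracy figure quoted above, mean target registration error (TRE), is simply the average Euclidean distance between registered target points and their known true positions. A minimal sketch with made-up coordinates:

```python
import numpy as np

def mean_tre(registered_pts, ground_truth_pts):
    """Mean target registration error (TRE): average Euclidean distance
    between registered target points and their true positions, in the
    same units as the inputs (mm here)."""
    a = np.asarray(registered_pts, dtype=float)
    b = np.asarray(ground_truth_pts, dtype=float)
    return float(np.linalg.norm(a - b, axis=1).mean())

# Two toy targets (mm), each off by 1 mm along one axis.
tre = mean_tre([[0, 0, 1], [10, 0, 0]],
               [[0, 0, 0], [9, 0, 0]])
```

A value under 2 mm, as here, would sit inside the clinically acceptable range the abstract cites.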
4
Hasan MK, Calvet L, Rabbani N, Bartoli A. Detection, segmentation, and 3D pose estimation of surgical tools using convolutional neural networks and algebraic geometry. Med Image Anal 2021; 70:101994. [PMID: 33611053] [DOI: 10.1016/j.media.2021.101994]
Abstract
BACKGROUND AND OBJECTIVE Surgical tool detection, segmentation, and 3D pose estimation are crucial components in Computer-Assisted Laparoscopy (CAL). The existing frameworks have two main limitations. First, they do not integrate all three components. Integration is critical; for instance, one should not attempt computing pose if detection is negative. Second, they have highly specific requirements, such as the availability of a CAD model. We propose an integrated and generic framework whose sole requirement for the 3D pose is that the tool shaft is cylindrical. Our framework makes the most of deep learning and geometric 3D vision by combining a proposed Convolutional Neural Network (CNN) with algebraic geometry. We show two applications of our framework in CAL: tool-aware rendering in Augmented Reality (AR) and tool-based 3D measurement. METHODS We name our CNN ART-Net (Augmented Reality Tool Network). It has a Single Input Multiple Output (SIMO) architecture with one encoder and multiple decoders to achieve detection, segmentation, and geometric primitive extraction. These primitives are the tool edge-lines, mid-line, and tip; they allow the tool's 3D pose to be estimated by a fast algebraic procedure. The framework only proceeds if a tool is detected. The accuracy of segmentation and geometric primitive extraction is boosted by a new Full-resolution feature map Generator (FrG). We extensively evaluate the proposed framework on the EndoVis dataset and on newly proposed datasets. We compare the segmentation results against several variants of the Fully Convolutional Network (FCN) and U-Net. Several ablation studies are provided for detection, segmentation, and geometric primitive extraction. The proposed datasets are surgery videos of different patients. RESULTS In detection, ART-Net achieves 100.0% in both average precision and accuracy. In segmentation, it achieves 81.0% mean Intersection over Union (mIoU) on the robotic EndoVis dataset (articulated tool), where it outperforms both FCN and U-Net by 4.5 pp and 2.9 pp, respectively. It achieves 88.2% mIoU on the remaining datasets (non-articulated tool). In geometric primitive extraction, ART-Net achieves 2.45° and 2.23° mean Arc Length (mAL) error for the edge-lines and mid-line, respectively, and 9.3 pixels mean Euclidean distance error for the tool-tip. Finally, in terms of 3D pose evaluated on animal data, our framework achieves 1.87 mm, 0.70 mm, and 4.80 mm mean absolute error on the X, Y, and Z coordinates, respectively, and 5.94° angular error on the shaft orientation. It achieves 2.59 mm and 1.99 mm mean and median location error of the tool head evaluated on patient data. CONCLUSIONS The proposed framework outperforms existing ones in detection and segmentation. Compared to separate networks, integrating the tasks in a single network preserves accuracy in detection and segmentation but substantially improves accuracy in geometric primitive extraction. Overall, our framework has similar or better accuracy in 3D pose estimation while largely improving robustness against the very challenging imaging conditions of laparoscopy. The source code of our framework and our annotated dataset will be made publicly available at https://github.com/kamruleee51/ART-Net.
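The headline segmentation numbers above are mIoU scores. For a binary tool/background mask, mIoU averages the per-class intersection-over-union; a minimal sketch on a toy 2x2 mask:

```python
import numpy as np

def mean_iou(pred, gt, n_classes=2):
    """Mean Intersection over Union across classes, the standard
    segmentation metric reported for ART-Net (e.g. 81.0% on EndoVis).
    Classes absent from both prediction and ground truth are skipped."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy binary masks: one false positive at the lower-left pixel.
pred = [[0, 1],
        [1, 1]]
gt   = [[0, 1],
        [0, 1]]
miou = mean_iou(pred, gt)
```

Here class 0 scores 1/2 and class 1 scores 2/3, so the mean is 7/12.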
Affiliation(s)
- Md Kamrul Hasan
- EnCoV, Institut Pascal, UMR 6602 CNRS/Université Clermont-Auvergne, Clermont-Ferrand, France; Department of Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Lilian Calvet
- EnCoV, Institut Pascal, UMR 6602 CNRS/Université Clermont-Auvergne, Clermont-Ferrand, France
- Navid Rabbani
- EnCoV, Institut Pascal, UMR 6602 CNRS/Université Clermont-Auvergne, Clermont-Ferrand, France
- Adrien Bartoli
- EnCoV, Institut Pascal, UMR 6602 CNRS/Université Clermont-Auvergne, Clermont-Ferrand, France
5
Liu X, Plishker W, Shekhar R. Hybrid electromagnetic-ArUco tracking of laparoscopic ultrasound transducer in laparoscopic video. J Med Imaging (Bellingham) 2021; 8:015001. [PMID: 33585664] [PMCID: PMC7857492] [DOI: 10.1117/1.jmi.8.1.015001]
Abstract
Purpose: The purpose of this work was to develop a new method of tracking a laparoscopic ultrasound (LUS) transducer in laparoscopic video by combining hardware-based [e.g., electromagnetic (EM)] and computer vision-based (e.g., ArUco) tracking methods. Approach: We developed a special tracking mount for the imaging tip of the LUS transducer. The mount incorporated an EM sensor and an ArUco pattern registered to it. The hybrid method used ArUco tracking for ArUco-success frames (i.e., frames where ArUco succeeds in detecting the pattern) and corrected EM tracking for ArUco-failure frames. The corrected EM tracking result was obtained by applying correction matrices to the original EM tracking result; the correction matrices were calculated in previous ArUco-success frames by comparing the ArUco result and the original EM tracking result. Results: We performed phantom and animal studies to evaluate the performance of our hybrid tracking method. The corrected EM tracking results showed significant improvements over the original EM tracking results. In the animal study, 59.2% of frames were ArUco-success frames. For the ArUco-failure frames, mean reprojection errors for the original and corrected EM tracking methods were 30.8 pixels and 10.3 pixels, respectively. Conclusions: The new hybrid method is more reliable than using ArUco tracking alone and more accurate and practical than using EM tracking alone for tracking the LUS transducer in the laparoscope camera image. The proposed method has the potential to significantly improve tracking performance for LUS-based augmented reality applications.
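The correction-matrix idea above can be sketched with 4x4 homogeneous poses: in an ArUco-success frame, store the transform that maps the raw EM pose onto the ArUco pose, then reuse it on ArUco-failure frames. The left-multiplication convention below is an assumption; the paper does not state its composition order in the abstract.

```python
import numpy as np

def correction_matrix(T_aruco, T_em):
    """From an ArUco-success frame, compute T_corr such that
    T_corr @ T_em == T_aruco (assumed composition order)."""
    return T_aruco @ np.linalg.inv(T_em)

def corrected_em(T_corr, T_em):
    """Apply a stored correction to the raw EM pose of a failure frame."""
    return T_corr @ T_em

# Toy poses: the EM reading carries a 2 mm translation bias along x.
T_aruco = np.eye(4)
T_em = np.eye(4)
T_em[0, 3] = 2.0

T_corr = correction_matrix(T_aruco, T_em)   # learned on a success frame
T_fixed = corrected_em(T_corr, T_em)        # applied on a failure frame
```

In this toy case the correction exactly cancels the bias; in practice the bias drifts, which is why the paper keeps updating `T_corr` from recent success frames.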
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States; IGI Technologies, Inc., Silver Spring, Maryland, United States
6
Ma L, Wang J, Kiyomatsu H, Tsukihara H, Sakuma I, Kobayashi E. Surgical navigation system for laparoscopic lateral pelvic lymph node dissection in rectal cancer surgery using laparoscopic-vision-tracked ultrasonic imaging. Surg Endosc 2020; 35:6556-6567. [PMID: 33185764] [DOI: 10.1007/s00464-020-08153-8]
Abstract
BACKGROUND Laparoscopic lateral pelvic lymph node dissection (LPLND) in rectal cancer surgery requires considerable skill because the pelvic arteries, which need to be located to guide the dissection, are covered by other tissues and cannot be observed in laparoscopic views. Therefore, surgeons need to localize the pelvic arteries accurately before dissection to prevent injury to these arteries. METHODS This report proposes a surgical navigation system to facilitate artery localization in laparoscopic LPLND by combining ultrasonic imaging and laparoscopy. Specifically, free-hand laparoscopic ultrasound (LUS) is employed to capture the arteries intraoperatively, and a laparoscopic vision-based tracking system is utilized to track the LUS probe. To extract the artery contours from the two-dimensional ultrasound image sequences efficiently, an artery extraction framework based on local phase-based snakes was developed. After reconstructing the three-dimensional intraoperative artery model from ultrasound images, a high-resolution artery model segmented from preoperative computed tomography (CT) images was rigidly registered to the intraoperative artery model and overlaid onto the laparoscopic view to guide laparoscopic LPLND. RESULTS Experiments were conducted to evaluate the performance of the vision-based tracking system, and its average reconstruction error was found to be 2.4 mm. The proposed navigation system was then quantitatively evaluated on an artery phantom; the reconstruction time and average navigation error were 8 min and 2.3 mm, respectively. A navigation system was also successfully constructed to localize the pelvic arteries in laparoscopic and open surgery on a swine, demonstrating the feasibility of the proposed system in vivo. The construction times in the laparoscopic and open surgeries were 14 and 12 min, respectively. CONCLUSIONS The experimental results showed that the proposed navigation system can guide laparoscopic LPLND and requires a significantly shorter setup time than state-of-the-art navigation systems.
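The rigid registration step above, aligning the preoperative CT artery model to the intraoperative ultrasound reconstruction, can be sketched with the standard Kabsch/SVD solution for paired 3D points. This is a generic sketch under the assumption of known correspondences; the paper's actual registration may use an ICP-style scheme without them.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of paired
    3D point sets via the Kabsch/SVD method.
    Returns (R, t) such that dst ~= src @ R.T + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

# Toy model points translated by (1, 2, 3) mm with no rotation.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src + np.array([1., 2., 3.])
R, t = rigid_register(src, dst)
```

The recovered `(R, t)` is what lets the high-resolution CT model be overlaid onto the laparoscopic view in the intraoperative frame.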
Affiliation(s)
- Lei Ma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang
- School of Mechanical Engineering, Beihang University, Beijing, China
- Ichiro Sakuma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
7
Huang B, Tsai YY, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS. Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe. Int J Comput Assist Radiol Surg 2020; 15:1389-1397. [PMID: 32556919] [PMCID: PMC7351835] [DOI: 10.1007/s11548-020-02205-z]
Abstract
Purpose In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced, where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. Methods A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and a pixel intensity-based vertex detector, and are used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons. Results The method has been validated against ground-truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard vertices alone, the mean errors obtained are 0.05° and 0.06°, respectively. For the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll, and yaw are 360°, 360°, and 8°–82° ∪ 188°–352°, respectively. Conclusion The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than previously proposed hybrid markers. The augmented reality will be used to provide visual feedback to the surgeons on the location of the affected lymph nodes or tumor.
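The intersection step in the Methods, finding where the probe axis meets the structure-from-motion point cloud, can be approximated by taking the cloud point nearest to the axis ray. This is a deliberate simplification of whatever surface-intersection scheme the authors actually use; it is only meant to show the geometry.

```python
import numpy as np

def axis_surface_hit(origin, direction, cloud):
    """Approximate the probe-axis/tissue-surface intersection by the index
    of the point-cloud point closest to the axis ray (a stand-in for a
    proper surface fit + ray intersection)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    v = np.asarray(cloud, dtype=float) - np.asarray(origin, dtype=float)
    t = v @ d                        # projection of each point onto the axis
    t = np.clip(t, 0.0, None)        # only consider points in front of the tip
    dist = np.linalg.norm(v - np.outer(t, d), axis=1)
    return int(np.argmin(dist))

# Probe tip at the origin pointing along +z; three toy surface points.
cloud = [[0.0, 0.0, 5.0], [1.0, 1.0, 5.0], [0.0, 0.2, 7.0]]
hit = axis_surface_hit([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], cloud)
```

The returned point is what would be highlighted in the augmented view as the origin of the gamma activity.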
Affiliation(s)
- Baoru Huang
- The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- Ya-Yen Tsai
- The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- João Cartucho
- The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
- Daniel S Elson
- The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
8
Liu X, Plishker W, Kane TD, Geller DA, Lau LW, Tashiro J, Sharma K, Shekhar R. Preclinical evaluation of ultrasound-augmented needle navigation for laparoscopic liver ablation. Int J Comput Assist Radiol Surg 2020; 15:803-810. [PMID: 32323211] [DOI: 10.1007/s11548-020-02164-5]
Abstract
PURPOSE For laparoscopic ablation to be successful, accurate placement of the needle in the tumor is essential. Laparoscopic ultrasound is an essential tool to guide needle placement, but the ultrasound image is generally presented separately from the laparoscopic image. We aim to evaluate an augmented reality (AR) system which combines the laparoscopic ultrasound image, laparoscope video, and needle trajectory in a unified view. METHODS We created a tissue phantom made of gelatin. Artificial tumors, represented by plastic spheres, were secured in the gelatin at various depths. The top point of each sphere surface was our target, and its 3D coordinates were known. Participants were invited to perform needle placement with and without AR guidance. Once the participant reported that the needle tip had reached the target, the needle tip location was recorded and compared to the ground-truth location of the target; the difference was the target localization error (TLE). The needle placement time was also recorded. We further tested the technical feasibility of the AR system in vivo on a 40-kg swine. RESULTS The AR guidance system was evaluated by two experienced surgeons and two surgical fellows. The users performed needle placement on a total of 26 targets, 13 with AR and 13 without (i.e., the conventional approach). The average TLE for the conventional and AR approaches was 14.9 mm and 11.1 mm, respectively. The average needle placement time for the conventional and AR approaches was 59.4 s and 22.9 s, respectively. In the animal study, the ultrasound image and needle trajectory were successfully fused with the laparoscopic video in real time and presented on a single screen for the surgeons. CONCLUSION By providing the projected needle trajectory, we believe our AR system can assist the surgeon with more efficient and precise needle placement.
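Overlaying a tracked needle trajectory on laparoscopic video, as described above, amounts to projecting 3D points expressed in the laparoscope camera frame through the camera's intrinsics. The sketch below uses an ideal pinhole model with no lens distortion and made-up intrinsic values; the actual system would use calibrated intrinsics and a distortion model.

```python
import numpy as np

def project_points(pts_cam, fx, fy, cx, cy):
    """Project 3D points in the camera frame onto the image plane with an
    ideal pinhole model (no distortion). fx, fy are focal lengths in
    pixels; (cx, cy) is the principal point. Points must have z > 0."""
    pts = np.asarray(pts_cam, dtype=float)
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)

# Hypothetical intrinsics; two trajectory points 100 mm in front of the lens.
uv = project_points([[0.0, 0.0, 100.0], [10.0, 0.0, 100.0]],
                    fx=800, fy=800, cx=320, cy=240)
```

A point on the optical axis lands on the principal point; drawing a line through the projected trajectory points produces the needle-path overlay.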
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Timothy D Kane
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- David A Geller
- Department of Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Lung W Lau
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Jun Tashiro
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Karun Sharma
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; IGI Technologies, Inc., College Park, MD, USA
9
Computer-assisted surgery: virtual- and augmented-reality displays for navigation during urological interventions. Curr Opin Urol 2019; 28:205-213. [PMID: 29278582] [DOI: 10.1097/mou.0000000000000478]
Abstract
PURPOSE OF REVIEW To provide an overview of the developments made for virtual- and augmented-reality navigation procedures in urological interventions/surgery. RECENT FINDINGS Navigation efforts have demonstrated potential in the field of urology by supporting guidance for various disorders. The navigation approaches differ between the individual indications but seem interchangeable to a certain extent. An increasing number of pre- and intraoperative imaging modalities have been used to create detailed surgical roadmaps, namely (cone-beam) computed tomography, MRI, ultrasound, and single-photon emission computed tomography. Registration of these surgical roadmaps with the real-life surgical view has occurred in different forms (e.g. electromagnetic, mechanical, vision, or near-infrared optical-based), whereby the combination of approaches was suggested to provide superior outcomes. Soft-tissue deformations demand the use of confirmatory interventional (imaging) modalities. This has resulted in the introduction of new intraoperative modalities such as drop-in ultrasound, transurethral ultrasound, (drop-in) gamma probes, and fluorescence cameras. These noninvasive modalities provide an alternative to invasive technologies that expose patients to X-ray doses. Whereas some reports have indicated that navigation setups provide equal or better results than conventional approaches, most trials have been performed in relatively small patient groups and clear follow-up data are missing. SUMMARY The reported computer-assisted surgery research concepts provide a glimpse into the future application of navigation technologies in the field of urology.
10
Joeres F, Schindele D, Luz M, Blaschke S, Russwinkel N, Schostak M, Hansen C. How well do software assistants for minimally invasive partial nephrectomy meet surgeon information needs? A cognitive task analysis and literature review study. PLoS One 2019; 14:e0219920. [PMID: 31318919] [PMCID: PMC6638947] [DOI: 10.1371/journal.pone.0219920]
Abstract
INTRODUCTION Intraoperative software assistance is gaining increasing importance in laparoscopic and robot-assisted surgery. Within the user-centred development process of such systems, the first question to be asked is: What information does the surgeon need and when does he or she need it? In this article, we present an approach to investigate these surgeon information needs for minimally invasive partial nephrectomy and compare these needs to the relevant surgical computer assistance literature. MATERIALS AND METHODS First, we conducted a literature-based hierarchical task analysis of the surgical procedure. This task analysis was taken as a basis for a qualitative in-depth interview study with nine experienced surgical urologists. The study employed a cognitive task analysis method to elicit surgeons' information needs during minimally invasive partial nephrectomy. Finally, a systematic literature search was conducted to review proposed software assistance solutions for minimally invasive partial nephrectomy. The review focused on what information the solutions present to the surgeon and what phase of the surgery they aim to support. RESULTS The task analysis yielded a workflow description for minimally invasive partial nephrectomy. During the subsequent interview study, we identified three challenging phases of the procedure, which may particularly benefit from software assistance. These phases are I. Hilar and vascular management, II. Tumour excision, and III. Repair of the renal defects. Between these phases, 25 individual challenges were found which define the surgeon information needs. The literature review identified 34 relevant publications, all of which aim to support the surgeon in hilar and vascular management (phase I) or tumour excision (phase II). CONCLUSION The work presented in this article identified unmet surgeon information needs in minimally invasive partial nephrectomy. Namely, our results suggest that future solutions should address the repair of renal defects (phase III) or put more focus on the renal collecting system as a critical anatomical structure.
Affiliation(s)
- Fabian Joeres, Department of Simulation and Graphics, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Daniel Schindele, Clinic of Urology and Paediatric Urology, University Hospital of Magdeburg, Magdeburg, Germany
- Maria Luz, Department of Simulation and Graphics, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Simon Blaschke, Clinic of Urology and Paediatric Urology, University Hospital of Magdeburg, Magdeburg, Germany
- Nele Russwinkel, Department of Cognitive Modelling in Dynamic Human-Machine Systems, Technische Universität Berlin, Berlin, Germany
- Martin Schostak, Clinic of Urology and Paediatric Urology, University Hospital of Magdeburg, Magdeburg, Germany
- Christian Hansen, Department of Simulation and Graphics, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
11
Intraoperative Imaging Techniques to Support Complete Tumor Resection in Partial Nephrectomy. Eur Urol Focus 2018; 4:960-968. [DOI: 10.1016/j.euf.2017.04.008]
12
Camara M, Mayer E, Darzi A, Pratt P. Intraoperative ultrasound for improved 3D tumour reconstruction in robot-assisted surgery: An evaluation of feedback modalities. Int J Med Robot 2018; 15:e1973. [PMID: 30485641] [DOI: 10.1002/rcs.1973]
Abstract
BACKGROUND: Intraoperative ultrasound scanning deforms the tissue; in the absence of a feedback modality, this results in a 3D tumour reconstruction that does not directly represent the real anatomy.
METHODS: A biomechanical model with different feedback modalities (haptic, visual, or auditory) was implemented in a simulation environment. A user study with 20 clinicians was performed to assess which modality produced the 3D tumour volume reconstruction that most closely resembled the reference configuration from the respective computed tomography (CT) scans.
RESULTS: Integrating a feedback modality significantly improved scanning performance across all participants and data sets. The optimal feedback modality varied depending on the evaluation; nonetheless, guidance with any feedback was always preferred over none.
CONCLUSIONS: The results demonstrate the need to integrate a feedback modality framework into clinical practice to improve scanning performance. Furthermore, this framework enabled an evaluation that cannot be performed in vivo.
Affiliation(s)
- Mafalda Camara, Department of Surgery and Cancer, Imperial College London, United Kingdom
- Erik Mayer, Department of Surgery and Cancer, Imperial College London, United Kingdom
- Ara Darzi, Department of Surgery and Cancer, Imperial College London, United Kingdom
- Philip Pratt, Department of Surgery and Cancer, Imperial College London, United Kingdom
13
Edgcumbe P, Singla R, Pratt P, Schneider C, Nguan C, Rohling R. Follow the light: projector-based augmented reality intracorporeal system for laparoscopic surgery. J Med Imaging (Bellingham) 2018; 5:021216. [PMID: 29487888] [DOI: 10.1117/1.jmi.5.2.021216]
Abstract
A projector-based augmented reality intracorporeal system (PARIS) is presented, comprising a miniature tracked projector, a tracked marker, and a laparoscopic ultrasound (LUS) transducer. PARIS was developed to improve the efficacy and safety of laparoscopic partial nephrectomy (LPN). In particular, it has been demonstrated to assist effectively in the identification of tumor boundaries during surgery and to improve the surgeon's understanding of the underlying anatomy. PARIS achieves this by displaying the orthographic projection of the cancerous tumor on the kidney's surface. The performance of PARIS was evaluated in a user study with two surgeons who performed 32 simulated robot-assisted partial nephrectomies: 16 with PARIS for guidance and 16 with only an LUS transducer for guidance. With PARIS, there was a significant reduction [30% ([Formula: see text])] in the amount of healthy tissue excised and a trend toward a more accurate dissection around the tumor and more negative margins. The combined point tracking and reprojection root-mean-square error of PARIS was 0.8 mm. PARIS's proven ability to improve key metrics of LPN surgery, together with qualitative feedback from surgeons, supports the hypothesis that it is an effective surgical navigation tool.
Affiliation(s)
- Philip Edgcumbe, MD/PhD Program, University of British Columbia, Vancouver, Canada
- Rohit Singla, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Philip Pratt, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Caitlin Schneider, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Christopher Nguan, Department of Urological Sciences, University of British Columbia, Vancouver, Canada
- Robert Rohling, Department of Electrical and Computer Engineering and Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
14
Singla R, Edgcumbe P, Pratt P, Nguan C, Rohling R. Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery. Healthc Technol Lett 2017; 4:204-209. [PMID: 29184666] [PMCID: PMC5683195] [DOI: 10.1049/htl.2017.0063]
Abstract
In laparoscopic surgery, the surgeon must operate with a limited field of view and reduced depth perception. This makes spatial understanding of critical structures difficult, such as an endophytic tumour in a partial nephrectomy. Such tumours yield a high complication rate of 47%, and excising them increases the risk of cutting into the kidney's collecting system. To overcome these challenges, an augmented reality guidance system is proposed. Using intra-operative ultrasound, a single navigation aid, and surgical instrument tracking, four augmentations of guidance information are provided during tumour excision. Qualitative and quantitative system benefits were measured in simulated robot-assisted partial nephrectomies. Robot-to-camera calibration achieved a total registration error of 1.0 ± 0.4 mm, while the total system error is 2.5 ± 0.5 mm. The system significantly reduced the healthy tissue excised from an average (±standard deviation) of 30.6 ± 5.5 to 17.5 ± 2.4 cm3 (p < 0.05) and reduced the depth from the tumour underside to the cut from an average (±standard deviation) of 10.2 ± 4.1 to 3.3 ± 2.3 mm (p < 0.05). Further evaluation is required in vivo, but the system has promising potential to reduce the amount of healthy parenchymal tissue excised.
Affiliation(s)
- Rohit Singla, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada V6T 1Z4
- Philip Edgcumbe, MD/PhD Program, University of British Columbia, Vancouver, Canada V6T 1Z4
- Philip Pratt, Department of Surgery and Cancer, Imperial College London, UK SW7 2BX
- Christopher Nguan, Department of Urological Sciences, University of British Columbia, Vancouver, Canada V6T 1Z4
- Robert Rohling, Department of Electrical and Computer Engineering and Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada V6T 1Z4
15
Detmer FJ, Hettig J, Schindele D, Schostak M, Hansen C. Virtual and Augmented Reality Systems for Renal Interventions: A Systematic Review. IEEE Rev Biomed Eng 2017; 10:78-94. [PMID: 28885161] [DOI: 10.1109/rbme.2017.2749527]
Abstract
PURPOSE: Many virtual and augmented reality systems have been proposed to support renal interventions. This paper reviews such systems employed in the treatment of renal cell carcinoma and renal stones.
METHODS: A systematic literature search was performed. Inclusion criteria were virtual and augmented reality systems for radical or partial nephrectomy and renal stone treatment, excluding systems developed or evaluated solely for training purposes.
RESULTS: In total, 52 research papers were identified and analyzed. Most of the identified literature (87%) deals with systems for renal cell carcinoma treatment. About 44% of the systems have already been employed in clinical practice, but only 20% in studies with ten or more patients. The main challenges remaining for future research include the consideration of organ movement and deformation, human factors issues, and the conduct of large clinical studies.
CONCLUSION: Augmented and virtual reality systems have the potential to improve the safety and outcomes of renal interventions. In the last ten years, many technical advances have led to more sophisticated systems, which are already applied in clinical practice. Further research is required to address the current limitations of virtual and augmented reality assistance in clinical environments.
16
Real-time surgical tool tracking and pose estimation using a hybrid cylindrical marker. Int J Comput Assist Radiol Surg 2017; 12:921-930. [PMID: 28342105] [PMCID: PMC5447333] [DOI: 10.1007/s11548-017-1558-9]
Abstract
PURPOSE: To provide an integrated visualisation of intraoperative ultrasound and endoscopic images that facilitates intraoperative guidance, real-time tracking of the ultrasound probe is required. State-of-the-art methods are suitable for planar targets, whereas most laparoscopic ultrasound probes are cylindrical objects. A tracking framework for cylindrical objects with a large workspace would improve the usability of intraoperative ultrasound guidance.
METHODS: A hybrid marker design that combines circular dots and chessboard vertices is proposed to facilitate the tracking of cylindrical tools. The circular dots placed over the curved surface are used for pose estimation. The chessboard vertices provide additional information to resolve the pose ambiguity that arises from using planar model points under a monocular camera. Furthermore, temporal information between consecutive images is used to minimise tracking failures while maintaining real-time computational performance.
RESULTS: Detailed validation confirms that our hybrid marker provides a large workspace for different tool sizes (6-14 mm in diameter). The tracking framework allows translational movements between 40 and 185 mm along the depth direction and rotational motion around three local orthogonal axes up to [Formula: see text]. Comparative studies with the current state of the art confirm that our approach outperforms existing methods, providing nearly 100% detection rates and accurate pose estimation with mean errors of 2.8 mm and 0.72[Formula: see text]. The tracking algorithm runs at 20 frames per second for [Formula: see text] image resolution videos.
CONCLUSION: Experiments show that the proposed hybrid marker can be applied to a wide range of surgical tools with superior detection rates and pose estimation accuracies. Both the qualitative and quantitative results demonstrate that our framework can be used not only to assist intraoperative ultrasound guidance but also to track general surgical tools in MIS.
17
Liu X, Kang S, Plishker W, Zaki G, Kane TD, Shekhar R. Laparoscopic stereoscopic augmented reality: toward a clinically viable electromagnetic tracking solution. J Med Imaging (Bellingham) 2016; 3:045001. [PMID: 27752522] [DOI: 10.1117/1.jmi.3.4.045001]
Abstract
The purpose of this work was to develop a clinically viable laparoscopic augmented reality (AR) system employing stereoscopic (3-D) vision, laparoscopic ultrasound (LUS), and electromagnetic (EM) tracking to achieve image registration. We investigated clinically feasible solutions to mount the EM sensors on the 3-D laparoscope and the LUS probe. This led to a solution of integrating an externally attached EM sensor near the imaging tip of the LUS probe, only slightly increasing the overall diameter of the probe. Likewise, a solution for mounting an EM sensor on the handle of the 3-D laparoscope was proposed. The spatial image-to-video registration accuracy of the AR system was measured to be [Formula: see text] and [Formula: see text] for the left- and right-eye channels, respectively. The AR system contributed 58-ms latency to stereoscopic visualization. We further performed an animal experiment to demonstrate the use of the system as a visualization approach for laparoscopic procedures. In conclusion, we have developed an integrated, compact, and EM tracking-based stereoscopic AR visualization system, which has the potential for clinical use. The system has been demonstrated to achieve clinically acceptable accuracy and latency. This work is a critical step toward clinical translation of AR visualization for laparoscopic procedures.
Affiliation(s)
- Xinyang Liu, Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States
- Sukryool Kang, Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States
- William Plishker, IGI Technologies, Inc., 387 Technology Drive #3110D, College Park, Maryland 20742, United States
- George Zaki, IGI Technologies, Inc., 387 Technology Drive #3110D, College Park, Maryland 20742, United States
- Timothy D Kane, Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States
- Raj Shekhar, Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States; IGI Technologies, Inc., 387 Technology Drive #3110D, College Park, Maryland 20742, United States
18
Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-43775-0_13]