1. Richter A, Steinmann T, Rosenthal JC, Rupitsch SJ. Advances in Real-Time 3D Reconstruction for Medical Endoscopy. J Imaging 2024; 10:120. [PMID: 38786574] [PMCID: PMC11122342] [DOI: 10.3390/jimaging10050120]
Abstract
This contribution provides researchers with a comprehensive overview of the current state of the art in real-time 3D reconstruction methods suitable for medical endoscopy. Over the past decade, computational power has advanced considerably and research effort has grown in many computer vision fields such as autonomous driving, robotics, and unmanned aerial vehicles. Some of these advances can be adapted to medical endoscopy, provided they cope with challenges such as featureless surfaces, varying lighting conditions, and deformable structures. To structure the overview, methods are divided into monocular, binocular, trinocular, and multiocular approaches, and active methods are distinguished from passive ones. Within these categories, both flexible and non-flexible endoscopes are considered to cover the state of the art as fully as possible. The error metrics relevant for comparing the publications presented here are discussed, and the choice between a GPU and an FPGA for camera-based 3D reconstruction is debated. We elaborate on good practice in the use of datasets and provide a direct comparison of the presented work. Note that, in addition to medical publications, work evaluated on the KITTI and Middlebury datasets is also considered, in order to include related methods that may be suited to medical 3D reconstruction.
Affiliation(s)
- Alexander Richter
- Fraunhofer Institute for High-Speed Dynamics, Ernst–Mach–Institut (EMI), Ernst-Zermelo-Straße 4, 79104 Freiburg, Germany
- Electrical Instrumentation and Embedded Systems, Albert–Ludwigs–Universität Freiburg, Georges-Köhler-Allee 106, 79110 Freiburg, Germany
- Till Steinmann
- Electrical Instrumentation and Embedded Systems, Albert–Ludwigs–Universität Freiburg, Georges-Köhler-Allee 106, 79110 Freiburg, Germany
- Jean-Claude Rosenthal
- Fraunhofer Institute for Telecommunications, Heinrich–Hertz–Institut (HHI), Einsteinufer 37, 10587 Berlin, Germany
- Stefan J. Rupitsch
- Electrical Instrumentation and Embedded Systems, Albert–Ludwigs–Universität Freiburg, Georges-Köhler-Allee 106, 79110 Freiburg, Germany
2. Zhang X, Ji X, Wang J, Fan Y, Tao C. Renal surface reconstruction and segmentation for image-guided surgical navigation of laparoscopic partial nephrectomy. Biomed Eng Lett 2023; 13:165-174. [PMID: 37124114] [PMCID: PMC10130295] [DOI: 10.1007/s13534-023-00263-1]
Abstract
An unpredictable, dynamic surgical environment makes it necessary to measure morphological information of the target tissue in real time for laparoscopic image-guided navigation. Among intraoperative tissue 3D reconstruction approaches, stereo vision has the most potential for clinical development, benefiting from its high reconstruction accuracy and laparoscopy compatibility. However, existing stereo vision methods have difficulty achieving high reconstruction accuracy in real time. In addition, intraoperative reconstruction results often contain complex background and instrument information that hinders clinical development of image-guided systems. Taking laparoscopic partial nephrectomy (LPN) as the research object, this paper realizes real-time dense reconstruction and extraction of the kidney tissue surface. A central-symmetric Census-based semi-global block stereo matching algorithm is proposed to generate a dense disparity map, and a GPU-based pixel-by-pixel connectivity segmentation mechanism is designed to segment the renal tissue area. Experiments on an in-vitro porcine heart, an in-vivo porcine kidney, and offline clinical LPN data were performed to evaluate the accuracy and effectiveness of the approach. The algorithm achieved a reconstruction accuracy of ±2 mm at a real-time update rate of 21 fps for an HD image size of 960 × 540, and 91.0% target tissue segmentation accuracy even with surgical instrument occlusions. The experimental results demonstrate that the proposed method can accurately reconstruct and extract the renal surface in real time during LPN, and the measurements can be used directly by image-guided systems. The method thus provides a new way to measure geometric information of target tissue intraoperatively in laparoscopic surgery. Supplementary Information: The online version contains supplementary material available at 10.1007/s13534-023-00263-1.
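The matching cost at the heart of Census-based stereo algorithms can be illustrated with a short sketch. This is not the authors' implementation; it shows the classic Census transform and Hamming-distance cost (the paper uses a central-symmetric variant, which compares point-symmetric pixel pairs rather than each neighbour against the window centre):

```python
import numpy as np

def census_transform(img, win=3):
    """Classic Census transform: each pixel becomes a bit string encoding,
    for every neighbour in a win x win window, whether that neighbour is
    darker than the centre pixel. win must be small enough (<= 7) that the
    code fits in 64 bits."""
    r = win // 2
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode="edge")
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    """Matching cost between two Census codes: the Hamming distance of the
    bit strings (number of neighbour comparisons that disagree)."""
    x = np.bitwise_xor(c1, c2)
    count = np.zeros(x.shape, dtype=np.uint8)
    while np.any(x):                      # per-bit popcount over the array
        count += (x & np.uint64(1)).astype(np.uint8)
        x >>= np.uint64(1)
    return count
```

A semi-global matcher would aggregate this per-pixel cost along several scan-line directions before selecting the disparity; the Census/Hamming combination is popular because it is robust to the radiometric differences between the two laparoscope views.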
Affiliation(s)
- Xiaohui Zhang
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Xuquan Ji
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Yubo Fan
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Chunjing Tao
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
3. Dowrick T, Xiao G, Nikitichev D, Dursun E, van Berkel N, Allam M, Koo B, Ramalhinho J, Thompson S, Gurusamy K, Blandford A, Stoyanov D, Davidson BR, Clarkson MJ. Evaluation of a calibration rig for stereo laparoscopes. Med Phys 2023; 50:2695-2704. [PMID: 36779419] [PMCID: PMC10614700] [DOI: 10.1002/mp.16310]
Abstract
BACKGROUND Accurate camera and hand-eye calibration are essential to ensure high-quality results in image-guided surgery applications. The process must also be able to be undertaken by a nonexpert user in a surgical setting. PURPOSE This work seeks to identify a suitable method for tracked stereo laparoscope calibration within theater. METHODS A custom calibration rig, to enable rapid calibration in a surgical setting, was designed and compared against freehand calibration. Stereo reprojection, stereo reconstruction, tracked stereo reprojection, and tracked stereo reconstruction error metrics were used to evaluate calibration quality. RESULTS Use of the calibration rig reduced mean errors compared with freehand calibration: reprojection (1.47 px [SD 0.13] vs. 3.14 px [SD 2.11], p-value 1e-8), reconstruction (1.37 mm [SD 0.10] vs. 10.10 mm [SD 4.54], p-value 6e-7), and tracked reconstruction (1.38 mm [SD 0.10] vs. 12.64 mm [SD 4.34], p-value 1e-6). The use of a ChArUco pattern yielded slightly lower reprojection errors, while a dot grid produced lower reconstruction errors and was more robust under strong global illumination. CONCLUSION The calibration rig yields a statistically significant decrease in calibration error metrics versus freehand calibration and represents the preferred approach for use in the operating theater.
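For readers unfamiliar with the two error metrics, a minimal sketch of reprojection error may help. It assumes an ideal pinhole camera without lens distortion and is not tied to the paper's calibration pipeline; reconstruction error is the analogous distance measured in millimetres between triangulated and known 3D points:

```python
import numpy as np

def project(points3d, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates.
    K is the 3x3 intrinsic matrix; (R, t) maps world to camera frame."""
    cam = points3d @ R.T + t          # world -> camera frame
    uvw = cam @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

def rms_reprojection_error(points3d, observed2d, K, R, t):
    """RMS pixel distance between detected pattern corners and the
    reprojection of their known 3D positions under the calibration."""
    pred = project(points3d, K, R, t)
    return float(np.sqrt(np.mean(np.sum((pred - observed2d) ** 2, axis=1))))
```

In a real calibration the observed corners come from the detected ChArUco or dot-grid pattern, and the error is averaged over both cameras and all calibration frames.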
Affiliation(s)
- Thomas Dowrick
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Guofang Xiao
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Daniil Nikitichev
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Eren Dursun
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Niels van Berkel
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Moustafa Allam
- Royal Free Campus, UCL Medical School, Royal Free Hospital, London, UK
- Bongjin Koo
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Joao Ramalhinho
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Stephen Thompson
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Ann Blandford
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Danail Stoyanov
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
4. Using multiple images and contours for deformable 3D-2D registration of a preoperative CT in laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2022; 17:2211-2219. [PMID: 36253604] [DOI: 10.1007/s11548-022-02774-1]
Abstract
PURPOSE Laparoscopic liver resection is a challenging procedure because of the difficulty of localising inner structures such as tumours and vessels. Augmented reality overcomes this problem by overlaying preoperative 3D models on the laparoscopic views. This requires deformable registration of the preoperative 3D models to the laparoscopic views, which is challenging due to the liver's flexibility and partial visibility. METHODS We propose several multi-view registration methods that exploit information from multiple views simultaneously in order to improve registration accuracy. They are designed for two scenarios: rigidly related views and non-rigidly related views. These methods exploit the liver's anatomical landmarks and the texture information available in all views to constrain registration. RESULTS We evaluated registration accuracy quantitatively on synthetic and phantom data, and qualitatively on patient data, measuring 3D target registration errors in mm over the whole liver in the quantitative case and 2D reprojection errors in pixels in the qualitative case. CONCLUSION The proposed rigidly related multi-view methods improve registration accuracy compared to the baseline single-view method and, depending on the available registration constraints, comply with the 1 cm oncologic resection margin advised for hepatocellular carcinoma interventions. The non-rigidly related multi-view method does not provide a noticeable improvement; using multiple views under the rigidity assumption thus achieves the best overall registration error.
5. He B, Yin D, Chen X, Luo H, Xiao D, He M, Wang G, Fang C, Liu L, Jia F. A study of generalization and compatibility performance of 3D U-Net segmentation on multiple heterogeneous liver CT datasets. BMC Med Imaging 2021; 21:178. [PMID: 34819022] [PMCID: PMC8611902] [DOI: 10.1186/s12880-021-00708-y]
Abstract
BACKGROUND Most existing algorithms have focused on segmentation of several public liver CT datasets scanned under regular conditions (no pneumoperitoneum, horizontal supine position). This study instead segmented datasets with unconventional liver shapes and intensities arising from contrast phases, irregular scanning conditions, and different scanned subjects (pigs, and patients with large pathological tumors), which together form the multiple heterogeneity of the datasets used in this study. METHODS The multiple heterogeneous datasets used in this paper include: (1) one public contrast-enhanced CT dataset and one public non-contrast CT dataset; (2) a contrast-enhanced dataset with abnormal liver shapes, very long left liver lobes, and large liver tumors with abnormal presentation caused by microvascular invasion; (3) an artificial pneumoperitoneum dataset acquired under pneumoperitoneum in three scanning positions (horizontal/left/right recumbent); (4) two porcine datasets (Bama and domestic breeds) that contain pneumoperitoneum cases but show large anatomical discrepancies from humans. The study investigated the segmentation performance of 3D U-Net in terms of: (1) generalization between the heterogeneous datasets, via cross-testing experiments; (2) compatibility when training on all datasets jointly under different sampling and encoder-layer-sharing schemes. We further investigated encoder-level compatibility by assigning each dataset its own encoder levels (i.e., dataset-wise convolutions) while sharing the decoder. RESULTS Models trained on different datasets showed different segmentation performance. Prediction accuracy between the LiTS dataset and the Zhujiang dataset was about 0.955 and 0.958, indicating good mutual generalization, as both are contrast-enhanced clinical patient datasets scanned under regular conditions. For the datasets scanned under pneumoperitoneum, the corresponding datasets scanned without pneumoperitoneum showed good generalization ability. A dataset-wise convolution module at high encoder levels can mitigate the dataset imbalance problem. These results should help researchers design solutions when segmenting such special datasets. CONCLUSIONS (1) Regularly scanned datasets generalize well to irregularly scanned ones. (2) Hybrid training is beneficial, but the dataset imbalance problem persists due to the multi-domain heterogeneity; higher encoder levels encode more domain-specific information than lower levels and were thus less compatible across our datasets.
Affiliation(s)
- Baochun He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Dalong Yin
- Department of Hepatobiliary Surgery, The First Affiliated Hospital, Harbin Medical University, Harbin, China
- Department of Hepatobiliary Surgery, The First Affiliated Hospital, University of Science and Technology of China, Hefei, China
- Xiaoxia Chen
- Department of Radiology, The Third Medical Center, General Hospital of PLA, Beijing, China
- Huoling Luo
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Deqiang Xiao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Mu He
- First Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guisheng Wang
- Department of Radiology, The Third Medical Center, General Hospital of PLA, Beijing, China
- Chihua Fang
- First Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Lianxin Liu
- Department of Hepatobiliary Surgery, The First Affiliated Hospital, Harbin Medical University, Harbin, China
- Department of Hepatobiliary Surgery, The First Affiliated Hospital, University of Science and Technology of China, Hefei, China
- Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Pazhou Lab, Guangzhou, China
6. Wang J, Shen Y, Yang S. A practical marker-less image registration method for augmented reality oral and maxillofacial surgery. Int J Comput Assist Radiol Surg 2019; 14:763-773. [PMID: 30825070] [DOI: 10.1007/s11548-019-01921-5]
Abstract
BACKGROUND Image registration lies at the core of augmented reality (AR): it aligns the virtual scene with reality. In AR surgical navigation, registration performance is vital to the surgical outcome. METHODS This paper presents a practical marker-less image registration method for AR-guided oral and maxillofacial surgery, in which a virtual scene is generated and mixed with reality to guide the operation or visualize the surgical outcome as a video see-through overlay. An intraoral 3D scanner acquires the patient's teeth shape model intraoperatively. The shape model is then registered with a custom-made stereo camera system using a novel 3D stereo matching algorithm, and with the patient's CT-derived 3D model using an iterative closest point scheme. By leveraging the intraoral 3D scanner, the CT space and the stereo camera space are associated, so that surrounding anatomical models and virtual implants can be overlaid on the camera's view to achieve AR surgical navigation. RESULTS Jaw phantom experiments evaluated the target registration error of the overlay, yielding an average error below 0.50 mm with a time cost below 0.5 s. A volunteer trial was also conducted to show clinical feasibility. CONCLUSIONS The proposed registration method does not rely on external fiducial markers attached to the patient and runs automatically to maintain a correct AR scene, overcoming the misalignment caused by patient movement. It is therefore noninvasive and practical in oral and maxillofacial surgery.
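The iterative closest point scheme mentioned above is a standard technique. A minimal point-to-point sketch, using Kabsch/SVD alignment and brute-force nearest neighbours (not the paper's implementation, which adds initialisation and robustness layers), looks like this:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    both Nx3 with known row-wise correspondence (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # correction matrix guards against reflections
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate brute-force nearest-neighbour matching
    with a Kabsch update until the clouds align. Returns the moved src."""
    cur = src.copy()
    for _ in range(iters):
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = kabsch(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

ICP only converges from a reasonable initial alignment, which is why systems like the one above pair it with a coarser initial registration (here, the intraoral-scan association between CT and camera space).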
Affiliation(s)
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100086, China
- Yu Shen
- School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100086, China
- Shuo Yang
- Stomatological Hospital, Southern Medical University, Guangzhou, China
7. Chalasani P, Wang L, Yasin R, Simaan N, Taylor RH. Preliminary Evaluation of an Online Estimation Method for Organ Geometry and Tissue Stiffness. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2801481]
8. In vivo estimation of target registration errors during augmented reality laparoscopic surgery. Int J Comput Assist Radiol Surg 2018; 13:865-874. [PMID: 29663273] [PMCID: PMC5973973] [DOI: 10.1007/s11548-018-1761-3]
Abstract
PURPOSE Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. METHODS The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. RESULTS The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. CONCLUSION We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
9. Robu MR, Edwards P, Ramalhinho J, Thompson S, Davidson B, Hawkes D, Stoyanov D, Clarkson MJ. Intelligent viewpoint selection for efficient CT to video registration in laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2017; 12:1079-1088. [PMID: 28401399] [PMCID: PMC5509843] [DOI: 10.1007/s11548-017-1584-7]
Abstract
PURPOSE Minimally invasive surgery offers advantages over open surgery: shorter recovery time and less pain and trauma for the patient. However, inherent challenges such as the lack of tactile feedback and the difficulty of controlling bleeding lower the percentage of suitable cases. Augmented reality can better visualise sub-surface structures and tumour locations by fusing pre-operative CT data with real-time laparoscopic video. Such visualisation requires a fast and robust video-to-CT registration that minimises interruption to the surgical procedure. METHODS We propose view planning for efficient rigid registration. Given the trocar position, a set of camera positions is sampled and scored based on the corresponding liver surface properties. We implement a simulation framework to validate the proof of concept using a segmented CT model from a human patient, and further apply the proposed method to clinical data acquired during a human liver resection. RESULTS The first experiment motivates the viewpoint scoring strategy and identifies, in an intuitive visualisation, liver regions that yield reliable and accurate registrations. The second experiment shows wider basins of convergence for higher-scoring viewpoints. The third experiment shows that comparable registration performance can be achieved by at least two merged high-scoring views or four low-scoring views. Hence, the focus could shift from acquiring a large liver surface to a small number of distinctive patches, giving a more explicit protocol for surface reconstruction. We discuss the application of the proposed method to clinical data and show initial results. CONCLUSION The proposed simulation framework shows promising results that motivate further research into a comprehensive view planning method for efficient registration in laparoscopic liver surgery.
Affiliation(s)
- Maria R Robu
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Philip Edwards
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- João Ramalhinho
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Stephen Thompson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Brian Davidson
- Royal Free Campus, UCL Medical School, 9th Floor, Royal Free Hospital, Rowland Hill Street, London, UK
- David Hawkes
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Danail Stoyanov
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Matthew J Clarkson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
10. Song Y, Totz J, Thompson S, Johnsen S, Barratt D, Schneider C, Gurusamy K, Davidson B, Ourselin S, Hawkes D, Clarkson MJ. Locally rigid, vessel-based registration for laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2015; 10:1951-1961. [PMID: 26092658] [PMCID: PMC4642598] [DOI: 10.1007/s11548-015-1236-8]
Abstract
PURPOSE Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet is difficult for most lesions due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but is challenging in a soft deforming organ such as the liver. In this paper, we therefore propose a laparoscopic ultrasound (LUS) image guidance system and study the feasibility of a locally rigid registration for laparoscopic liver surgery. METHODS We developed a real-time segmentation method to extract vessel centre points from calibrated, freehand, electromagnetically tracked, 2D LUS images. Using landmark-based initial registration and an optional iterative closest point (ICP) point-to-line registration, a vessel centre-line model extracted from preoperative computed tomography (CT) is registered to the ultrasound data during surgery. RESULTS Using the locally rigid ICP method, the RMS residual error when registering to a phantom was 0.7 mm, and the mean target registration error (TRE) for two in vivo porcine studies was 3.58 and 2.99 mm, respectively. Using the locally rigid landmark-based registration method gave a mean TRE of 4.23 mm using vessel centre lines derived from CT scans taken with pneumoperitoneum and 6.57 mm without pneumoperitoneum. CONCLUSION In this paper we propose a practical image-guided surgery system based on locally rigid registration of a CT-derived model to vascular structures located with LUS. In a physical phantom and during porcine laparoscopic liver resection, we demonstrate accuracy of target location commensurate with surgical requirements. We conclude that locally rigid registration could be sufficient for practically useful image guidance in the near future.
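The residual minimised by a point-to-line registration such as the one described above can be written in a few lines. This is a generic sketch, not the paper's code; the segment list stands in for a CT-derived vessel centre-line model:

```python
import numpy as np

def point_to_line_distance(p, a, d):
    """Perpendicular distance from 3D point p to the infinite line through
    anchor a with direction d (normalised internally)."""
    d = d / np.linalg.norm(d)
    v = p - a
    return float(np.linalg.norm(v - np.dot(v, d) * d))

def centreline_rms(points, segments):
    """RMS point-to-line residual of measured vessel centre points against a
    centre-line model given as (anchor, direction) pairs; each point is
    scored against its closest line."""
    res = [min(point_to_line_distance(p, a, d) for a, d in segments)
           for p in points]
    return float(np.sqrt(np.mean(np.square(res))))
```

An ICP-style loop would alternate this closest-line assignment with a rigid update of the ultrasound-derived points, which is what makes point-to-line variants tolerant of sparse, sliced vessel observations.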
Affiliation(s)
- Yi Song
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Johannes Totz
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Steve Thompson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Stian Johnsen
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Dean Barratt
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Crispin Schneider
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Kurinchi Gurusamy
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Brian Davidson
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Sébastien Ourselin
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- David Hawkes
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Matthew J Clarkson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
11. Lin B, Sun Y, Qian X, Goldgof D, Gitlin R, You Y. Video-based 3D reconstruction, laparoscope localization and deformation recovery for abdominal minimally invasive surgery: a survey. Int J Med Robot 2015; 12:158-178. [DOI: 10.1002/rcs.1661]
Affiliation(s)
- Bingxiong Lin
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Yu Sun
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Xiaoning Qian
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA
- Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Richard Gitlin
- Department of Electrical Engineering, University of South Florida, Tampa, FL, USA
- Yuncheng You
- Department of Mathematics and Statistics, University of South Florida, Tampa, FL, USA
12. Johnsen SF, Thompson S, Clarkson MJ, Modat M, Song Y, Totz J, Gurusamy K, Davidson B, Taylor ZA, Hawkes DJ, Ourselin S. Database-Based Estimation of Liver Deformation under Pneumoperitoneum for Surgical Image-Guidance and Simulation. Lecture Notes in Computer Science 2015. [DOI: 10.1007/978-3-319-24571-3_54]
13. Clarkson MJ, Zombori G, Thompson S, Totz J, Song Y, Espak M, Johnsen S, Hawkes D, Ourselin S. The NifTK software platform for image-guided interventions: platform overview and NiftyLink messaging. Int J Comput Assist Radiol Surg 2014; 10:301-316. [PMID: 25408304] [PMCID: PMC4338364] [DOI: 10.1007/s11548-014-1124-7]
Abstract
PURPOSE To perform research in image-guided interventions, researchers need a wide variety of software components, and assembling these components into a flexible and reliable system can be a challenging task. In this paper, the NifTK software platform is presented. A key focus has been high-performance streaming of stereo laparoscopic video data, ultrasound data and tracking data simultaneously. METHODS A new messaging library called NiftyLink is introduced that uses the OpenIGTLink protocol and provides the user with easy-to-use asynchronous two-way messaging, high reliability and comprehensive error reporting. A small suite of applications called NiftyGuide has been developed, containing lightweight applications for grabbing data, currently from position trackers and ultrasound scanners. These applications use NiftyLink to stream data into NiftyIGI, which is a workstation-based application, built on top of MITK, for visualisation and user interaction. Design decisions, performance characteristics and initial applications are described in detail. NiftyLink was tested for latency when transmitting images, tracking data, and interleaved imaging and tracking data. RESULTS NiftyLink can transmit tracking data at 1,024 frames per second (fps) with latency of 0.31 milliseconds, and 512 KB images with latency of 6.06 milliseconds at 32 fps. NiftyIGI was tested, receiving stereo high-definition laparoscopic video at 30 fps, tracking data from 4 rigid bodies at 20-30 fps and ultrasound data at 20 fps with rendering refresh rates between 2 and 20 Hz with no loss of user interaction. CONCLUSION These packages form part of the NifTK platform and have proven to be successful in a variety of image-guided surgery projects. Code and documentation for the NifTK platform are available from http://www.niftk.org . NiftyLink is provided open-source under a BSD license and available from http://github.com/NifTK/NiftyLink . The code for this paper is tagged IJCARS-2014.
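Latency figures like those above are typically obtained from timestamped round trips between sender and receiver. As a hedged illustration only (plain TCP sockets on localhost, not the NiftyLink/OpenIGTLink API), mean round-trip latency for small messages can be measured like this:

```python
import socket
import threading
import time

def echo_server(srv):
    """Accept one connection and echo every received chunk back."""
    conn, _ = srv.accept()
    with conn:
        while (data := conn.recv(65536)):
            conn.sendall(data)

def measure_rtt(n=100, payload=b"x" * 64):
    """Mean round-trip time in seconds for n echoed messages over localhost."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle delay
    rtts = []
    with cli:
        for _ in range(n):
            t0 = time.perf_counter()
            cli.sendall(payload)
            got = b""
            while len(got) < len(payload):   # TCP may split the echo
                got += cli.recv(65536)
            rtts.append(time.perf_counter() - t0)
    srv.close()
    return sum(rtts) / n
```

Disabling Nagle's algorithm (`TCP_NODELAY`) matters for the sub-millisecond regime reported in the paper, since buffering small messages would otherwise dominate the measurement.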
Affiliation(s)
- Matthew J Clarkson
- Centre For Medical Image Computing, University College London, Engineering Front Building, Malet Place, London, UK