1. Workflow and simulation of image-to-physical registration of holes inside spongy bone. Int J Comput Assist Radiol Surg 2017; 12:1425-1437. DOI: 10.1007/s11548-017-1594-5.

2. Armin MA, Chetty G, De Visser H, Dumas C, Grimpen F, Salvado O. Automated visibility map of the internal colon surface from colonoscopy video. Int J Comput Assist Radiol Surg 2016; 11:1599-610. PMID: 27492067. DOI: 10.1007/s11548-016-1462-8.
Abstract
PURPOSE: Optical colonoscopy is a prominent procedure in which clinicians examine the surface of the colon for cancerous polyps using a flexible colonoscope. One of the main concerns regarding the quality of a colonoscopy is ensuring that the whole colonic surface has been inspected for abnormalities. In this paper, we aim to estimate areas that have not been covered thoroughly by providing a map of the internal colon surface.
METHODS: Camera parameters were estimated using optical flow between consecutive colonoscopy frames. A cylinder model was fitted to the colon structure using 3D pseudo-stereo vision and projected into each frame. A circumferential band from the cylinder was extracted to unroll the internal colon surface (band image). By registering these band images, drift in estimating camera motion could be reduced, and a visibility map of the colon surface could be generated, revealing areas left uncovered by the colonoscope. Hidden areas behind haustral folds were ignored in this study. The method was validated on simulated and actual colonoscopy videos. The realistic simulated videos were generated using a colonoscopy simulator with known ground truth, and the actual colonoscopy videos were manually assessed by a clinical expert.
RESULTS: The proposed method obtained a sensitivity and precision of 98% and 96% for detecting the number of uncovered areas on simulated data, whereas validation on real videos showed a sensitivity and precision of 96% and 78%, respectively. Error in camera motion drift could be reduced by almost 50% using results from band image registration.
CONCLUSION: Using a simple cylindrical model for the colon and reducing drift by registering band images allows for the generation of visibility maps. The current results also suggest that the feedback provided through the visibility map could enhance clinicians' awareness of uncovered areas, which in turn could reduce the probability of missing polyps.
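The band-image construction described above can be illustrated with a toy computation: once a cylinder has been fitted to the colon, points near its surface map to (arc length around the circumference, position along the axis) coordinates, which unrolls a circumferential band into a flat image. This is only a minimal sketch of the geometric idea, not the authors' implementation; it assumes the cylinder is z-aligned with a known radius, and `unroll_to_band` is a hypothetical helper name.

```python
import math

def unroll_to_band(points, radius):
    """Map 3D points lying near a z-aligned cylinder of the given radius
    to unrolled (u, v) band coordinates: u = arc length around the
    circumference, v = position along the cylinder axis."""
    band = []
    for x, y, z in points:
        theta = math.atan2(y, x)               # angular position on the cylinder
        u = radius * (theta % (2 * math.pi))   # arc length in [0, 2*pi*radius)
        v = z                                  # axial position
        band.append((u, v))
    return band

# Toy example: four points at 0, 90, 180 and 270 degrees on a unit cylinder
# unroll to evenly spaced u coordinates.
pts = [(1, 0, 0.0), (0, 1, 0.5), (-1, 0, 1.0), (0, -1, 1.5)]
print(unroll_to_band(pts, radius=1.0))
```

In the paper, successive band images extracted this way are registered to each other to reduce camera-motion drift; the sketch covers only the unrolling step.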
Affiliation(s)
- Mohammad Ali Armin: HCT, University of Canberra, Bruce, Canberra, ACT, Australia; CSIRO Biomedical Informatics, The Australian e-Health Research Centre, Level 5, UQ Health Sciences, Brisbane, QLD, 4029, Australia.
- Girija Chetty: HCT, University of Canberra, Bruce, Canberra, ACT, Australia.
- Hans De Visser: CSIRO Biomedical Informatics, The Australian e-Health Research Centre, Level 5, UQ Health Sciences, Brisbane, QLD, 4029, Australia.
- Cedric Dumas: CSIRO Biomedical Informatics, The Australian e-Health Research Centre, Level 5, UQ Health Sciences, Brisbane, QLD, 4029, Australia.
- Florian Grimpen: Department of Gastroenterology and Hepatology, Royal Brisbane and Women's Hospital, Herston, Brisbane, QLD, Australia.
- Olivier Salvado: CSIRO Biomedical Informatics, The Australian e-Health Research Centre, Level 5, UQ Health Sciences, Brisbane, QLD, 4029, Australia.

3. Lin B, Sun Y, Qian X, Goldgof D, Gitlin R, You Y. Video-based 3D reconstruction, laparoscope localization and deformation recovery for abdominal minimally invasive surgery: a survey. Int J Med Robot 2015; 12:158-78. DOI: 10.1002/rcs.1661.
Affiliation(s)
- Bingxiong Lin: Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA.
- Yu Sun: Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA.
- Xiaoning Qian: Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA.
- Dmitry Goldgof: Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA.
- Richard Gitlin: Department of Electrical Engineering, University of South Florida, Tampa, FL, USA.
- Yuncheng You: Department of Mathematics and Statistics, University of South Florida, Tampa, FL, USA.

4. Liu J, Wang B, Hu W, Sun P, Li J, Duan H, Si J. Global and Local Panoramic Views for Gastroscopy: An Assisted Method of Gastroscopic Lesion Surveillance. IEEE Trans Biomed Eng 2015; 62:2296-307. PMID: 25910000. DOI: 10.1109/tbme.2015.2424438.
Abstract
Gastroscopy plays an important role in the diagnosis of gastric disease. In this paper, we develop an image panoramic system to assist endoscopists in improving lesion surveillance and reducing many of the tedious operations associated with gastroscopy. The constructed panoramic view has two categories: 1) the local view broadens the endoscopist's field of view in real time; combined with the original gastroscopic video, this mosaicking view enables the endoscopist to diagnose the lesion comprehensively; 2) the global view constructs a large-area panoramic scene of the internal gastric surface, which can be used for intraoperative surgical navigation and postoperative scene review. Due to the irregular texture and inconsistent reflection of the internal gastric surface, common registration methods cannot accurately stitch this surface. Therefore, a six-degree-of-freedom position-tracking endoscope is devised to compensate for the accumulated mosaicking error and provide efficient mosaicking results. For the global view, a dual-cube constraint model and a bundle adjustment algorithm are incorporated to deal with the mosaicking error caused by the irregular inflation and nonrigid deformation of the stomach. Moreover, texture blending and frame selection schemes are developed to make the mosaicking results feasible in real clinical applications. The experimental results demonstrate that our system runs at 7.12 frames/s on a standard computer, with a mean mosaicking error of 0.43 mm for the local panoramic view and 3.71 mm for the global panoramic view.
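Mosaicking of the kind described above ultimately rests on estimating a projective mapping between overlapping frames. As an illustrative sketch only (not the paper's tracking-assisted pipeline), the basic building block — a homography recovered from four or more point correspondences with the direct linear transform — looks like this; `homography_dlt` is a hypothetical helper name.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate a homography H with dst ~ H @ src
    (homogeneous) from 4+ point correspondences, via the SVD null space."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)     # right singular vector of smallest value
    return H / H[2, 2]           # normalize so H[2, 2] = 1

# Toy example: a pure (5, 3) translation of the unit square.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(5.0, 3.0), (6.0, 3.0), (5.0, 4.0), (6.0, 4.0)]
print(homography_dlt(src, dst))
```

A full mosaicking system would wrap this in robust estimation and, as in the paper, correct accumulated error with external pose tracking and bundle adjustment.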

5. Bergen T, Wittenberg T. Stitching and Surface Reconstruction From Endoscopic Image Sequences: A Review of Applications and Methods. IEEE J Biomed Health Inform 2014; 20:304-21. PMID: 25532214. DOI: 10.1109/jbhi.2014.2384134.
Abstract
Endoscopic procedures form part of routine clinical practice for minimally invasive examinations and interventions. While they are beneficial for the patient, reducing surgical trauma and making convalescence times shorter, they make orientation and manipulation more challenging for the physician, due to the limited field of view through the endoscope. However, this drawback can be reduced by means of medical image processing and computer vision, using image stitching and surface reconstruction methods to expand the field of view. This paper provides a comprehensive overview of the current state of the art in endoscopic image stitching and surface reconstruction. The literature in the relevant fields of application and algorithmic approaches is surveyed. The technological maturity of the methods and current challenges and trends are analyzed.

6. Yuille AL. Robust point matching via vector field consensus. IEEE Trans Image Process 2014; 23:1706-1721. PMID: 24808341. PMCID: PMC5748387. DOI: 10.1109/tip.2014.2307478.
Abstract
In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
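The EM scheme sketched in the abstract — latent inlier/outlier labels, a Gaussian inlier model whose variance is re-estimated, and a uniform outlier model — can be illustrated in miniature. This is a heavily simplified sketch, not the paper's method: the smooth vector field is replaced by an affine map fitted by weighted least squares instead of an RKHS interpolant with Tikhonov regularization, and `em_consensus` and its parameters are hypothetical.

```python
import numpy as np

def em_consensus(x, y, iters=30, gamma_init=0.9, outlier_density=1 / 400):
    """EM-style inlier/outlier estimation in the spirit of vector field
    consensus, simplified: inliers follow an affine field plus Gaussian
    noise; outliers follow a uniform density (here 1/area of the match
    space). Returns a boolean inlier mask."""
    n = len(x)
    X = np.hstack([x, np.ones((n, 1))])        # homogeneous source points
    p = np.full(n, gamma_init)                 # P(inlier) per correspondence
    gamma = gamma_init                         # mixing proportion of inliers
    for _ in range(iters):
        # M-step: weighted least-squares fit of the affine field.
        W = p[:, None]
        A, *_ = np.linalg.lstsq(X * W, y * W, rcond=None)
        r2 = np.sum((y - X @ A) ** 2, axis=1)  # squared residuals
        sigma2 = max(np.sum(p * r2) / (2 * np.sum(p)), 1e-12)
        gamma = np.clip(np.mean(p), 0.05, 0.95)
        # E-step: posterior inlier probability (Gaussian vs uniform outliers).
        inlier = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        p = inlier / (inlier + (1 - gamma) * outlier_density)
    return p > 0.5
```

As in the paper, starting from a large prior variance lets early iterations keep almost everything, with the variance shrinking as the consensus of inliers emerges.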

7. Grasa ÓG, Bernal E, Casado S, Gil I, Montiel JMM. Visual SLAM for Handheld Monocular Endoscope. IEEE Trans Med Imaging 2014; 33:135-46. PMID: 24107925. DOI: 10.1109/tmi.2013.2282997.
Abstract
Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.

8. Yip MC, Lowe DG, Salcudean SE, Rohling RN, Nguan CY. Tissue tracking and registration for image-guided surgery. IEEE Trans Med Imaging 2012; 31:2169-2182. PMID: 22899573. DOI: 10.1109/tmi.2012.2212718.
Abstract
Vision-based tracking of tissue is a key component to enable augmented reality during a surgical operation. Conventional tracking techniques in computer vision rely on identifying strong edge features or distinctive textures in a well-lit environment; however, endoscopic tissue images do not have strong edge features, are poorly lit, and exhibit a high degree of specular reflection. Therefore, prior work on achieving densely populated 3D features for describing tissue surface profiles requires complex image processing techniques and has been limited in providing stable, long-term tracking or real-time processing. In this paper, we present an integrated framework for accurately tracking tissue in surgical stereo cameras at real-time speeds. We use a combination of the STAR feature detector and Binary Robust Independent Elementary Features (BRIEF) to acquire salient features that can be persistently tracked at high frame rates. The features are then used to acquire a densely populated map of the deformations of the tissue surface in 3D. We evaluate the method against popular feature algorithms on in vivo animal study video sequences, and we also apply the proposed method to human partial nephrectomy video sequences. We extend the salient feature framework to support region tracking in order to maintain the spatial correspondence of a tracked region of tissue or a medical image registration to the surrounding tissue. In vitro tissue studies show registration accuracies of 1.3-3.3 mm using a rigid-body transformation method.
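The rigid-body transformation step mentioned at the end of the abstract is commonly solved in closed form with the SVD-based Kabsch/Umeyama method: given matched 3D points, find the rotation and translation minimizing the squared alignment error. The sketch below is illustrative, not the paper's code; `rigid_transform` is a hypothetical helper name.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    i.e. Q_i ~ R @ P_i + t, via the SVD-based Kabsch solution."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction ensures a proper rotation (det(R) = +1, no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

In a tracking pipeline like the one above, P would be the tracked 3D feature positions in one frame and Q their positions in another (or in a registered medical image).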

9. Wang H, Chin TJ, Suter D. Simultaneously fitting and segmenting multiple-structure data with outliers. IEEE Trans Pattern Anal Mach Intell 2012; 34:1177-1192. PMID: 22064800. DOI: 10.1109/tpami.2011.216.
Abstract
We propose a robust fitting framework, called Adaptive Kernel-Scale Weighted Hypotheses (AKSWH), to segment multiple-structure data even in the presence of a large number of outliers. Our framework contains a novel scale estimator called the Iterative Kth Ordered Scale Estimator (IKOSE). IKOSE can accurately estimate the scale of inliers for heavily corrupted multiple-structure data and is of interest in itself, since it can be used in other robust estimators. In addition to IKOSE, our framework includes several original elements based on the weighting, clustering, and fusing of hypotheses. AKSWH can simultaneously provide accurate estimates of the number of model instances and the parameters and scale of each model instance. We demonstrate good performance in practical applications such as line fitting, circle fitting, range image segmentation, homography estimation, and two-view-based motion segmentation, using both synthetic data and real images.
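The core of a Kth-ordered scale estimator can be shown in a few lines. If inlier residuals are zero-mean Gaussian with unknown scale sigma, the kth smallest absolute residual out of n sits at the ((1 + k/n)/2) quantile of the normal distribution, so dividing by that quantile recovers sigma. This is a sketch of the non-iterative variant only (IKOSE additionally iterates on the inlier set); `kth_ordered_scale` is a hypothetical name.

```python
from statistics import NormalDist

def kth_ordered_scale(residuals, k):
    """Estimate the scale (std dev) of zero-mean Gaussian inlier residuals
    from the kth smallest absolute residual. Choosing k well below the
    number of inliers makes the estimate robust to gross outliers."""
    r = sorted(abs(v) for v in residuals)
    n = len(r)
    # P(|r| <= x) = 2*Phi(x/sigma) - 1, so the k/n quantile of |r| sits at
    # sigma * Phi^{-1}((1 + k/n) / 2).
    return r[k - 1] / NormalDist().inv_cdf((1 + k / n) / 2)
```

Using a small k means only the tightest residuals drive the estimate, which is why such estimators remain accurate under heavy corruption.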
Affiliation(s)
- Hanzi Wang: School of Information Science and Technology, Xiamen University, Fujian, 361005, China.

10. Mirota DJ, Wang H, Taylor RH, Ishii M, Gallia GL, Hager GD. A system for video-based navigation for endoscopic endonasal skull base surgery. IEEE Trans Med Imaging 2012; 31:963-976. PMID: 22113772. DOI: 10.1109/tmi.2011.2176500.
Abstract
Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly the case for endoscopic endonasal skull base surgery (ESBS), where the surgeons work within millimeters of neurovascular structures at the skull base. Today's navigation systems provide approximately 2 mm accuracy. Accuracy is limited by the indirect relationship between the navigation system, the image, and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks image feature points in the video and reconstructs the image feature points to produce 3D points, and then registers the reconstructed point cloud to a surface segmented from preoperative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the 2D-3D correspondence of image features and 3D locations. These data are then used to update the current camera pose. We present a method for validation of our system, which achieves submillimeter (0.70 mm mean) target registration error (TRE) results.
Affiliation(s)
- Daniel J Mirota: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA.

11. Hu M, Penney G, Figl M, Edwards P, Bello F, Casula R, Rueckert D, Hawkes D. Reconstruction of a 3D surface from video that is robust to missing data and outliers: Application to minimally invasive surgery using stereo and mono endoscopes. Med Image Anal 2012; 16:597-611. DOI: 10.1016/j.media.2010.11.002.

12. Allain B, Hu M, Lovat LB, Cook RJ, Vercauteren T, Ourselin S, Hawkes DJ. Re-localisation of a biopsy site in endoscopic images and characterisation of its uncertainty. Med Image Anal 2012; 16:482-96. DOI: 10.1016/j.media.2011.11.005.

13.

14.

15. Wang H, Mirota D, Hager GD. A generalized Kernel Consensus-based robust estimator. IEEE Trans Pattern Anal Mach Intell 2010; 32:178-84. PMID: 19926908. PMCID: PMC2857599. DOI: 10.1109/tpami.2009.148.
Abstract
In this paper, we present a new Adaptive-Scale Kernel Consensus (ASKC) robust estimator as a generalization of the popular and state-of-the-art robust estimators such as RANdom SAmple Consensus (RANSAC), Adaptive Scale Sample Consensus (ASSC), and Maximum Kernel Density Estimator (MKDE). The ASKC framework is grounded on and unifies these robust estimators using nonparametric kernel density estimation theory. In particular, we show that each of these methods is a special case of ASKC using a specific kernel. Like these methods, ASKC can tolerate more than 50 percent outliers, but it can also automatically estimate the scale of inliers. We apply ASKC to two important areas in computer vision, robust motion estimation and pose estimation, and show comparative results on both synthetic and real data.
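The kernel-consensus idea above — scoring each sampled hypothesis by a kernel density of its residuals at zero, with an adaptively estimated bandwidth, instead of counting inliers at a fixed threshold — can be sketched for simple line fitting. This is an illustrative simplification, not the ASKC algorithm itself; `askc_line_fit`, the fixed trial count, and the kth-residual bandwidth rule are all assumptions of the sketch.

```python
import math
import random

def askc_line_fit(points, trials=300, k=10, seed=0):
    """RANSAC-style line fitting (y = a*x + b) scored by a Gaussian-kernel
    density of residuals at zero rather than an inlier count. The kernel
    bandwidth is set adaptively from the kth smallest absolute residual,
    so no fixed inlier threshold is needed."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                            # skip vertical hypotheses
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        res = sorted(abs(y - (a * x + b)) for x, y in points)
        h = max(res[k - 1], 1e-9)               # adaptive bandwidth
        score = sum(math.exp(-(r / h) ** 2 / 2) for r in res) / (h * len(res))
        if score > best_score:
            best, best_score = (a, b), score
    return best
```

Because a tight hypothesis gets both many near-zero residuals and a small bandwidth, the density score peaks sharply for the true structure, which is what lets such estimators tolerate more than 50% outliers.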
Affiliation(s)
- Hanzi Wang: School of Computer Science, The University of Adelaide, Adelaide SA 5005, Australia.
- Daniel Mirota: Department of Computer Science, The Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218.
- Gregory D. Hager: Department of Computer Science, The Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218.

16. Fan Y, Meng MQH, Li B. 3D reconstruction of wireless capsule endoscopy images. Annu Int Conf IEEE Eng Med Biol Soc 2010; 2010:5149-5152. PMID: 21095814. DOI: 10.1109/iembs.2010.5626182.
Abstract
Wireless capsule endoscopy (WCE) has been gradually applied for inspecting the gastrointestinal (GI) tract. However, WCE can only provide a monocular view. Moreover, only a small part of the GI wall is visible in each frame due to the limited illumination and irregular motion of the capsule endoscope. Perceiving the entire GI structure can be hard even for experienced endoscopists. A realistic, user-friendly three-dimensional view is needed to help physicians get a better perception of the GI tract. In this paper, we present a method to reconstruct the three-dimensional surface of the intestinal wall by applying the SIFT feature detector and descriptor to a sequence of WCE images. Epipolar geometry is employed to further constrain the matching feature points in order to obtain a more accurate 3D view. Experiments on real data are presented to show the performance of our proposed method.
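The epipolar constraint used above to prune putative SIFT matches states that corresponding points x1, x2 must satisfy x2^T F x1 = 0 for the fundamental matrix F; a standard way to measure deviation is the Sampson distance. The sketch below is a generic illustration with a toy F (not the paper's implementation); `epipolar_residuals` is a hypothetical helper name.

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Sampson distance of putative matches w.r.t. a fundamental matrix F.
    Small values mean the pair is consistent with the epipolar geometry,
    so large values flag likely mismatches."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones]).T             # 3 x n homogeneous points
    x2 = np.hstack([pts2, ones]).T
    Fx1, Ftx2 = F @ x1, F.T @ x2
    num = np.sum(x2 * Fx1, axis=0) ** 2        # (x2^T F x1)^2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den

# Toy geometry: pure sideways translation t = (1, 0, 0) with identity
# intrinsics gives E = [t]_x, so matched points must share the same row (y).
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
good = epipolar_residuals(F, np.array([[0.3, 0.2]]), np.array([[0.1, 0.2]]))
bad = epipolar_residuals(F, np.array([[0.3, 0.2]]), np.array([[0.1, 0.9]]))
```

Thresholding these residuals removes matches that violate the two-view geometry before triangulating the 3D surface.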
Affiliation(s)
- Yichen Fan: Department of Electronic Engineering, The Chinese University of Hong Kong, China.

17.
Abstract
Endoscopic endonasal skull base surgery (ESBS) requires high accuracy to ensure safe navigation of the critical anatomy at the anterior skull base. Current navigation systems provide approximately 2 mm accuracy. This level of registration error is due in part to the indirect nature of the tracking used. We propose a method to directly track the position of the endoscope using video data. Our method first reconstructs image feature points from video in 3D, and then registers the reconstructed point cloud to pre-operative data (e.g., CT/MRI). After the initial registration, the system tracks image features and maintains the 2D-3D correspondence of image features and 3D locations. These data are then used to update the current camera pose. We present registration results within 1 mm, which matches the accuracy of our validation framework.

18. Hager GD, Okamura AM, Kazanzides P, Whitcomb LL, Fichtinger G, Taylor RH. Surgical and Interventional Robotics: Part III: Surgical Assistance Systems. IEEE Robot Autom Mag 2008; 15:84-93. PMID: 20305740. PMCID: PMC2841438. DOI: 10.1109/mra.2008.930401.