51. Coughlin G, Samavedi S, Palmer KJ, Patel VR. Role of image-guidance systems during NOTES. J Endourol 2009; 23:803-12. [PMID: 19438294] [DOI: 10.1089/end.2008.0121]
Abstract
Natural orifice translumenal endoscopic surgery (NOTES) is a developing field with the potential to revolutionize our approach to abdominal surgery. Performing operations via a flexible endoscope introduced through a natural orifice presents several challenges to physicians. Orientation and interpretation of the endoscopic video image can be difficult. The surgeon must also learn to operate with the camera and instruments "in line." Advances in technology are currently addressing the challenges of NOTES. Image-guided navigation could potentially provide invaluable assistance during NOTES. Real-time information on spatial positioning and orientation as well as assistance with the identification of anatomy and localization of pathology are some of the possibilities. Image-guided surgery has become commonplace in disciplines such as neurosurgery where the anatomy is relatively rigid. To become widespread in intra-abdominal procedures and NOTES, advances that will allow systems to adapt to moving and deforming anatomy are needed. This article reviews the basics of image-guided surgery, the various image-guided systems, and their potential application to NOTES.
Affiliation(s)
- Geoff Coughlin
- Global Robotics Institute, Florida Hospital Celebration Health, Orlando, 34747, USA.
52. Vikal S, U-Thainual P, Carrino JA, Iordachita I, Fischer GS, Fichtinger G. Perk Station: percutaneous surgery training and performance measurement platform. Comput Med Imaging Graph 2009; 34:19-32. [PMID: 19539446] [DOI: 10.1016/j.compmedimag.2009.05.001]
Abstract
MOTIVATION: Image-guided percutaneous (through the skin) needle-based surgery has become part of routine clinical practice in procedures such as biopsies, injections and therapeutic implants. A novice physician typically performs needle interventions under the supervision of a senior physician, a slow and inherently subjective training process that lacks objective, quantitative assessment of surgical skill and performance. Shortening the learning curve and increasing procedural consistency are important factors in assuring high-quality medical care. METHODS: This paper describes a laboratory validation system, called the Perk Station, for standardized training and performance measurement under different assistance techniques for needle-based surgical guidance systems. The initial goal of the Perk Station is to assess and compare four techniques: 2D image overlay, biplane laser guide, laser protractor and conventional freehand insertion. The main focus of this manuscript is the planning and guidance software developed on the 3D Slicer platform, a free, open-source package designed for visualization and analysis of medical image data. RESULTS: The prototype Perk Station has been successfully developed, the associated needle insertion phantoms were built, and the graphical user interface was fully implemented. The system was inaugurated in undergraduate teaching and a wide array of outreach activities. Initial results, experiences, ongoing activities and future plans are reported.
53. Widmann G, Stoffner R, Bale R. Errors and error management in image-guided craniomaxillofacial surgery. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2009; 107:701-15. [DOI: 10.1016/j.tripleo.2009.02.011]
54. Ammi M, Fremont V, Ferreira A. Automatic camera-based microscope calibration for a telemicromanipulation system using a virtual pattern. IEEE Trans Robot 2009. [DOI: 10.1109/tro.2008.2006866]
55. Sielhorst T, Feuerstein M, Navab N. Advanced medical displays: a literature review of augmented reality. J Disp Technol 2008. [DOI: 10.1109/jdt.2008.2001575]
56. Hager GD, Okamura AM, Kazanzides P, Whitcomb LL, Fichtinger G, Taylor RH. Surgical and interventional robotics: Part III: surgical assistance systems. IEEE Robot Autom Mag 2008; 15:84-93. [PMID: 20305740] [PMCID: PMC2841438] [DOI: 10.1109/mra.2008.930401]
57. Traub J, Sielhorst T, Heining SM, Navab N. Advanced display and visualization concepts for image guided surgery. J Disp Technol 2008. [DOI: 10.1109/jdt.2008.2006510]
58. Hummel J, Figl M, Bax M, Bergmann H, Birkfellner W. 2D/3D registration of endoscopic ultrasound to CT volume data. Phys Med Biol 2008; 53:4303-16. [PMID: 18653922] [DOI: 10.1088/0031-9155/53/16/006]
Abstract
This paper describes a computer-aided navigation system using image fusion to support endoscopic interventions such as the accurate collection of biopsy specimens. An endoscope provides the physician with real-time ultrasound (US) and video images. An image slice corresponding to the current image from the US scan head is derived from a preoperative computed tomography (CT) or magnetic resonance image volume data set using oblique reformatting and displayed side by side with the US image. The position of the image acquired by the US scan head is determined by a miniaturized electromagnetic tracking system (EMTS) after calibration of the endoscope's scan head. The transformation between the patient coordinate system and the preoperative data set is calculated using a 2D/3D registration. This is achieved by calibrating an intraoperative interventional CT slice with an optical tracking system (OTS) using the same algorithm as for the US calibration; the slice is then used for 2D/3D registration with the coordinate system of the preoperative volume. The fiducial registration error (FRE) for the US calibration was 2.0 +/- 0.4 mm, the interventional CT FRE was 0.36 +/- 0.12 mm, and the 2D/3D target registration error (TRE) was 1.8 +/- 0.3 mm. The point-to-point registration between the OTS and the EMTS had an FRE of 0.9 +/- 0.4 mm. Finally, the overall TRE for the complete system was 3.9 +/- 0.6 mm.
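The FRE and TRE values above are the standard outputs of point-based rigid registration. As a minimal illustrative sketch (not the authors' implementation; the fiducial data below are synthetic), the following Python fits a least-squares rigid transform between corresponding point sets and reports the resulting FRE:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Arun/Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    residuals = dst - (src @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

# Synthetic fiducials (mm): CT-space points mapped into tracker space by a
# known pose plus 0.3 mm noise; all values are illustrative.
rng = np.random.default_rng(0)
ct = rng.uniform(0.0, 100.0, size=(6, 3))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
tracker = ct @ R_true.T + np.array([5.0, -2.0, 1.0]) + rng.normal(0.0, 0.3, (6, 3))
R, t = rigid_register(ct, tracker)
print(f"FRE = {fre(ct, tracker, R, t):.2f} mm")          # ~0.3 mm at this noise level
```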
Affiliation(s)
- Johann Hummel
- Center of Biomedical Engineering and Physics, Medical University of Vienna, Vienna, Austria.
59. Olszewski R, Villamil MB, Trevisan DG, Nedel LP, Freitas CMDS, Reychler H, Macq B. Towards an integrated system for planning and assisting maxillofacial orthognathic surgery. Comput Methods Programs Biomed 2008; 91:13-21. [PMID: 18417245] [DOI: 10.1016/j.cmpb.2008.02.007]
Abstract
Computer-assisted maxillofacial orthognathic surgery is an emerging interdisciplinary field linking orthognathic surgery, remote signal engineering and three-dimensional (3D) medical imaging. Most existing computational solutions rely on several separate specialized systems, which complicates both the transfer of information from one stage to the next and the use of such systems by surgeons. To address this issue, we present a common computer-based system that integrates the proposed modules for planning and assisting maxillofacial surgery. With it we propose to replace the current standard orthognathic preoperative planning and to bring information from the virtual plan to the real operative field. The system prototype, including three-dimensional cephalometric analysis, static and dynamic virtual orthognathic planning, and mixed-reality transfer of information to the operating room, is described, and the first results obtained are presented.
Affiliation(s)
- Raphael Olszewski
- Université catholique de Louvain, Saint Luc University Clinics, Department of Oral and Maxillofacial Surgery, Brussels, Belgium.
60. García J, Thoranaghatte R, Marti G, Zheng G, Caversaccio M, González Ballester MA. Calibration of a surgical microscope with automated zoom lenses using an active optical tracker. Int J Med Robot 2008; 4:87-93. [PMID: 18275035] [DOI: 10.1002/rcs.180]
Abstract
BACKGROUND: In this paper, we present a new method for the calibration of a microscope and its registration using an active optical tracker. METHODS: Practically, both operations are done simultaneously by moving an active optical marker within the field of view of the two devices. The IR LEDs composing the marker are first segmented from the microscope images. Knowing their corresponding three-dimensional (3D) positions in the optical tracker reference system, it is possible to find the transformation matrix between the reference frames of the two devices. Registration and calibration parameters can be extracted directly from that transformation. In addition, since the zoom and focus can be modified by the surgeon during the operation, we propose a spline-based method to update the camera model to the new setup. RESULTS: The proposed technique is currently being used in an augmented reality system for image-guided surgery in the fields of ear, nose and throat (ENT) and craniomaxillofacial surgery. CONCLUSIONS: The results have proved to be accurate, and the technique is a fast, dynamic and reliable way to calibrate and register the two devices in an OR environment.
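As background to the method, recovering a camera model from known 3D marker positions and their 2D image projections is classically done with a direct linear transform, and a spline can then interpolate the recovered parameters across zoom settings. The sketch below illustrates both ideas under those assumptions (function names and the focal-length samples are mine, not from the paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def dlt_projection(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P (x ~ P X) from >= 6 pairs of
    tracker-space 3D marker positions and segmented 2D image positions."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)      # null-space solution, defined up to scale

# Hypothetical focal lengths (px) recovered at a few motorized zoom stops;
# a spline predicts the camera model at intermediate settings.
zoom_stops = np.array([1.0, 1.5, 2.0, 3.0])
focal_px = np.array([1200.0, 1750.0, 2300.0, 3400.0])
focal_at_zoom = CubicSpline(zoom_stops, focal_px)
print(float(focal_at_zoom(2.5)))     # interpolated focal length at zoom 2.5
```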
Affiliation(s)
- Jaime García
- MEM Research Centre, University of Bern, Switzerland.
61. Pfisterer WK, Papadopoulos S, Drumm DA, Smith K, Preul MC. Fiducial versus nonfiducial neuronavigation registration assessment and considerations of accuracy. Neurosurgery 2008; 62:201-7; discussion 207-8. [PMID: 18424987] [DOI: 10.1227/01.neu.0000317394.14303.99]
Abstract
OBJECTIVE: For frameless stereotaxy, users can choose between anatomic landmarks (ALs) or surface fiducial markers (FMs) for their match points during registration to define an alignment of the head in physical and radiographic image space. In this study, we sought to determine the concordance among a point-merged FM registration, a point-merged AL registration, and a combined point-merged anatomic/surface-merged (SM) registration, i.e., to determine the accuracy of registration techniques with and without FMs by examining the extent of agreement between the system-generated predicted values and physically measured values. METHODS: We examined 30 volunteers treated with gamma knife surgery. The StealthStation frameless stereotactic image-guidance system (Medtronic Surgical Navigation Technologies, Louisville, CO) was used. Nine FMs were placed on the patient's head and four on a Leksell frame rod-box, which acted as a rigid set to determine the difference in error. For each registration method, we recorded the generated measurement (GM) and the physical measurement (PM) to each of the four checkpoint FMs. Bland and Altman plot difference analyses were used to compare measurement techniques. Correlations and descriptive analyses were completed. RESULTS: The mean GM values were 1.14 mm for FM, 2.3 mm for AL, and 0.96 mm for SM registrations. The mean errors of the checkpoints were 3.49 mm for FM, 3.96 mm for AL, and 3.33 mm for SM registrations. The correlation between GMs and PMs indicated a linear relationship for all three methods. AL registration demonstrated the greatest mean difference, followed by FM registration; SM registration had the smallest difference between GMs and PMs. Differences between the anatomic registration methods, including SM registration, and FM registration were within the mean +/- 1.96 standard deviations according to the Bland and Altman analysis. CONCLUSION: For our sample of 30 patients, all three registration methods provided comparable distances to the target tissue for surgical procedures. Users may safely choose anatomic registration as a less costly and more time-efficient registration method for frameless stereotaxy.
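For readers unfamiliar with the Bland and Altman analysis used in this study, the computation reduces to the bias and 95% limits of agreement of the paired differences. A small sketch (the paired values are invented for illustration and are not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired generated (GM) and physical (PM) checkpoint distances, mm.
gm = [1.2, 0.9, 1.4, 1.1, 1.0]
pm = [1.5, 1.1, 1.2, 1.4, 0.8]
bias, (lo, hi) = bland_altman(gm, pm)
print(f"bias {bias:.2f} mm, limits of agreement [{lo:.2f}, {hi:.2f}] mm")
```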
Affiliation(s)
- Wolfgang K Pfisterer
- Neurosurgery Research Laboratory, Division of Neurological Surgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona, USA
62. Lapeer R, Chen MS, Gonzalez G, Linney A, Alusi G. Image-enhanced surgical navigation for endoscopic sinus surgery: evaluating calibration, registration and tracking. Int J Med Robot 2008; 4:32-45. [DOI: 10.1002/rcs.175]
63. Nakamoto M, Nakada K, Sato Y, Konishi K, Hashizume M, Tamura S. Intraoperative magnetic tracker calibration using a magneto-optic hybrid tracker for 3-D ultrasound-based navigation in laparoscopic surgery. IEEE Trans Med Imaging 2008; 27:255-270. [PMID: 18334447] [DOI: 10.1109/tmi.2007.911003]
Abstract
This paper describes a three-dimensional ultrasound (3-D US) system that aims to achieve augmented reality (AR) visualization during laparoscopic surgery, especially of the liver. To acquire 3-D US data of the liver, the tip of a laparoscopic ultrasound probe is tracked inside the abdominal cavity using a magnetic tracker. The accuracy of magnetic trackers, however, is greatly affected by magnetic field distortion resulting from the close proximity of metal objects and electronic equipment, which is usually unavoidable in the operating room. In this paper, we describe a calibration method for intraoperative magnetic distortion that can be applied to laparoscopic 3-D US data acquisition, and we evaluate its accuracy and feasibility in in vitro and in vivo experiments. Although calibration data can be acquired freehand using a magneto-optic hybrid tracker, two problems are associated with this method: error caused by the time delay between measurements of the optical and magnetic trackers, and instability of the calibration accuracy resulting from the uniformity and density of the calibration data. A temporal calibration procedure is developed to estimate the time delay, which is then integrated into the calibration, and a distortion model is formulated by zeroth- to fourth-degree polynomial fitting to the calibration data. In an in vivo experiment using a pig, the positional error caused by magnetic distortion was reduced from 44.1 to 2.9 mm. The standard deviation of corrected target positions was less than 1.0 mm. Freehand acquisition of calibration data was performed smoothly using a magneto-optic hybrid sampling tool through a trocar under guidance by real-time 3-D monitoring of the tool trajectory; data acquisition time was less than 2 min. The present study suggests that our proposed method can correct for magnetic field distortion inside the patient's abdomen during a laparoscopic procedure within a clinically permissible period of time, while enabling an accurate 3-D US reconstruction to be superimposed onto live endoscopic images.
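The correction step amounts to fitting a polynomial map from distorted magnetic readings to optically derived ground-truth positions. A minimal sketch of such a fit, assuming the temporal calibration has already paired the samples (function names are mine):

```python
import itertools
import numpy as np

def monomials(P, degree):
    """Design matrix of all monomials x^i y^j z^k with i + j + k <= degree."""
    cols = []
    for i, j, k in itertools.product(range(degree + 1), repeat=3):
        if i + j + k <= degree:
            cols.append(P[:, 0] ** i * P[:, 1] ** j * P[:, 2] ** k)
    return np.column_stack(cols)

def fit_distortion(magnetic, optical, degree=4):
    """Least-squares polynomial map (one coefficient set per coordinate) from
    distorted magnetic positions to optically tracked ground truth."""
    coeffs, *_ = np.linalg.lstsq(monomials(magnetic, degree), optical, rcond=None)
    return lambda P: monomials(P, degree) @ coeffs

# Usage: correct = fit_distortion(magnetic_samples, optical_samples)
#        undistorted = correct(new_magnetic_readings)
```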
Affiliation(s)
- Masahiko Nakamoto
- Division of Image Analysis, Osaka University Graduate School of Medicine, Osaka, Japan
64. Lerotic M, Chung AJ, Mylonas G, Yang GZ. Pq-space based non-photorealistic rendering for augmented reality. Med Image Comput Comput Assist Interv 2007; 10:102-9. [PMID: 18044558] [DOI: 10.1007/978-3-540-75759-7_13]
Abstract
The increasing use of robotic-assisted minimally invasive surgery (MIS) provides an ideal environment for using augmented reality (AR) for image-guided surgery. Seamless synthesis of AR depends on a number of factors relating to the way in which virtual objects appear and visually interact with a real environment. Traditional overlaid AR approaches generally suffer from a loss of depth perception. This paper presents a new AR method for robotic-assisted MIS, which uses a novel pq-space based non-photorealistic rendering technique to provide see-through vision of the embedded virtual object whilst maintaining salient details of the exposed anatomical surface. Experimental results with both phantom and in vivo lung lobectomy data demonstrate the visual realism achieved by the proposed method and its accuracy in providing high-fidelity AR depth perception.
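As background, "pq-space" refers to the surface-gradient components p = dz/dx and q = dz/dy of the exposed surface. One plausible reading of how such a map can drive see-through rendering, sketched below, is to modulate opacity by how directly each surface patch faces the camera; the authors' actual weighting is more involved:

```python
import numpy as np

def pq_map(depth):
    """Per-pixel surface gradients (p, q) = (dz/dx, dz/dy) of a depth map.
    np.gradient returns derivatives along rows (y) first, then columns (x)."""
    q, p = np.gradient(depth.astype(float))
    return p, q

def facing_weight(p, q):
    """Cosine between the surface normal (-p, -q, 1) and a +z view direction;
    flat, camera-facing regions score near 1 and can be rendered more
    transparently to reveal the embedded virtual object."""
    n = np.stack([-p, -q, np.ones_like(p)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n[..., 2]
```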
Affiliation(s)
- Mirna Lerotic
- Institute of Biomedical Engineering, Imperial College, London SW7 2AZ, UK.
65. Voruganti A, Mayoral R, Jacobs S, Grunert R, Moeckel H, Korb W. Surgical cartographic navigation system for endoscopic bypass grafting. Conf Proc IEEE Eng Med Biol Soc 2007; 2007:1467-70. [DOI: 10.1109/iembs.2007.4352577]
66. A real-time navigation system for laparoscopic surgery based on three-dimensional ultrasound using magneto-optic hybrid tracking configuration. Int J Comput Assist Radiol Surg 2007. [DOI: 10.1007/s11548-007-0078-4]
67. Giraldez JG, Caversaccio M, Pappas I, Kowal J, Rohrer U, Marti G, Baur C, Nolte LP, Ballester MAG. Design and clinical evaluation of an image-guided surgical microscope with an integrated tracking system. Int J Comput Assist Radiol Surg 2007. [DOI: 10.1007/s11548-006-0066-0]
68. Widmann G. Image-guided surgery and medical robotics in the cranial area. Biomed Imaging Interv J 2007; 3:e11. [PMID: 21614255] [PMCID: PMC3097655] [DOI: 10.2349/biij.3.1.e11]
Abstract
Surgery in the cranial area involves complex anatomic situations with high-risk structures and high demands for functional and aesthetic results. Conventional surgery requires the surgeon to transfer complex anatomic and surgical planning information to the operative field using spatial sense and experience; the procedure depends entirely on the manual skills of the operator. The development of image-guided surgery provides revolutionary new opportunities by integrating presurgical 3D imaging with intraoperative manipulation. Augmented reality, mechatronic surgical tools, and medical robotics may further advance surgical instrumentation and, ultimately, surgical care. The aim of this article is to review and discuss state-of-the-art surgical navigation and medical robotics, image-to-patient registration, aspects of accuracy, and clinical applications for surgery in the cranial area.
Affiliation(s)
- G Widmann
- Department of Radiology, Innsbruck Medical University, Anichstr, Austria
69. Vogt S, Khamene A, Sauer F. Reality augmentation for medical procedures: system architecture, single camera marker tracking, and system evaluation. Int J Comput Vis 2006. [DOI: 10.1007/s11263-006-7938-1]
70. Delingette H, Pennec X, Soler L, Marescaux J, Ayache N. Computational models for image-guided robot-assisted and simulated medical interventions. Proc IEEE 2006; 94:1678-1688. [DOI: 10.1109/jproc.2006.880718]
71. Peters TM. Image-guidance for surgical procedures. Phys Med Biol 2006; 51:R505-40.
Abstract
Contemporary imaging modalities can now provide the surgeon with high-quality three- and four-dimensional images depicting not only normal anatomy and pathology, but also vascularity and function. A key component of image-guided surgery (IGS) is the ability to register multi-modal pre-operative images to each other and to the patient. The other important component of IGS is the ability to track instruments in real time during the procedure and to display them as part of a realistic model of the operative volume. Stereoscopic, virtual- and augmented-reality techniques have been implemented to enhance the visualization and guidance process. For the most part, IGS relies on the assumption that the pre-operatively acquired images used to guide the surgery accurately represent the morphology of the tissue during the procedure. This assumption may not necessarily be valid, so intra-operative real-time imaging using interventional MRI, ultrasound, video and electrophysiological recordings is often employed to ameliorate this situation. Although IGS is now in extensive routine clinical use in neurosurgery and is gaining ground in other surgical disciplines, many drawbacks remain to be overcome before it can be employed in more general minimally-invasive procedures. This review traces the roots of IGS in neurosurgery, provides examples of its use outside the brain, discusses the infrastructure required for successful implementation of IGS approaches, and outlines the challenges that must be overcome for IGS to advance further.
Affiliation(s)
- Terry M Peters
- Robarts Research Institute, University of Western Ontario, PO Box 5015, 100 Perth Drive, London, ON N6A 5K8, Canada.
72. Hirai N, Kosaka A, Kawamata T, Hori T, Iseki H. Image-guided neurosurgery system integrating AR-based navigation and open-MRI monitoring. Comput Aided Surg 2005; 10:59-71. [PMID: 16298917] [DOI: 10.3109/10929080500229389]
Abstract
As endoscopic surgery has become a popular form of minimally invasive surgery, it increasingly requires imaging tools that help surgeons perform safe and secure operations. Our navigation system provides surgeons with visual information by overlaying 3D wireframe models of the tumor onto live images, as well as by displaying the relative positions of the surgical tools and the target tumor. The 3D wireframe models are generated from pre-operative CT/MR images with the help of 3D surgical simulation software. Another important function of the system is real-time volume rendering of intra-operative MR images of the target tumor. This function allows surgeons to carefully observe the vicinity of the tumor regions to be removed, by rendering sectional views with respect to the surgical tool position, so that surgical performance can be easily monitored during the operation. We tested this navigation system in more than 10 clinical operations and verified the effectiveness of the navigation and the surgical performance.
Affiliation(s)
- Nobuyuki Hirai
- Institute of Advanced Biomedical Engineering and Science, Graduate School of Medicine, Tokyo Women's Medical University, Japan.
73. Yamaguchi T, Nakamoto M, Sato Y, Konishi K, Hashizume M, Sugano N, Yoshikawa H, Tamura S. Development of a camera model and calibration procedure for oblique-viewing endoscopes. Comput Aided Surg 2004; 9:203-14. [PMID: 16192062] [DOI: 10.3109/10929080500163505]
Abstract
Oblique-viewing endoscopes (oblique scopes) are widely used in medical practice. They are essential for certain procedures such as laparoscopy, arthroscopy and sinus endoscopy. In an oblique scope the viewing direction can be changed by rotating the scope cylinder. Although camera calibration is necessary to apply augmented reality technologies to oblique endoscopic procedures, no method for oblique scope calibration had previously been developed. In the present paper, we formulate a camera model and a calibration procedure for oblique scopes. In the calibration procedure, Tsai's calibration is performed at zero rotation of the scope cylinder; the variation of the external camera parameters corresponding to the rotation of the scope cylinder is then modeled and estimated as a function of the rotation angle. Accurate estimation of the rotational axis is included in the procedure, and this estimation was demonstrated to have a significant effect on overall calibration accuracy in the experimental evaluation, especially at large rotation angles. The projection error in the image plane was approximately two pixels. The proposed method was shown to be clinically applicable.
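The central modeling idea, extrinsic parameters that vary with the rotation of the scope cylinder about an estimated axis, can be sketched as follows (a deliberate simplification of the published model, which distinguishes which parts of the scope rotate; function names are mine):

```python
import numpy as np

def axis_rotation(axis, theta):
    """Rodrigues' formula: rotation by theta (rad) about a unit axis."""
    a = np.asarray(axis, float)
    a /= np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def extrinsics_at(R0, t0, axis, theta):
    """Update zero-rotation extrinsics (x_cam = R0 @ x + t0) for a cylinder
    rotation of theta about an axis through the camera-frame origin."""
    Rt = axis_rotation(axis, theta)
    return Rt @ R0, Rt @ t0
```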
Affiliation(s)
- Tetsuzo Yamaguchi
- Division of Interdisciplinary Image Analysis, Osaka University Graduate School of Medicine, Osaka, Japan
74. Wörn H, Aschke M, Kahrs LA. New augmented reality and robotic based methods for head-surgery. Int J Med Robot 2006; 1:49-56. [PMID: 17518390] [DOI: 10.1002/rcs.27]
Abstract
Within the framework of the collaborative research centre "Information Technology in Medicine: Computer and Sensor-Aided Surgery" (SFB414), new methods for intraoperative computer assistance of surgical procedures are being developed. The developed tools are controlled by an intraoperative host that provides interfaces to the electronic health record (EHR) and to intraoperative computer-assisted instruments. The interaction is based on standardised communication protocols, and plug & work functions will allow easy integration and configuration of new components. Intraoperative systems currently under development include intraoperative augmented reality (AR), delivered via a projector and via a microscope, a planning system for the definition of complex trajectories, and a surgical robot system. The developed systems are under clinical evaluation and are showing promising results.
Affiliation(s)
- H Wörn
- Universität Karlsruhe (TH), Germany.
75. Traub J, Stefan P, Heining SM, Sielhorst T, Riquarts C, Euler E, Navab N. Hybrid navigation interface for orthopedic and trauma surgery. Med Image Comput Comput Assist Interv 2006; 9:373-80. [PMID: 17354912] [DOI: 10.1007/11866565_46]
Abstract
Several visualization methods for intraoperative navigation systems have been proposed in the past. In standard slice-based navigation, three-dimensional imaging data are visualized on a two-dimensional user interface in the operating room. Another technology is in-situ visualization, i.e. the superimposition of imaging data directly into the view of the surgeon, spatially registered with the patient; the three-dimensional information is thus represented on a three-dimensional interface. We created a hybrid navigation interface combining an augmented reality visualization system, based on a stereoscopic head-mounted display, with a standard two-dimensional navigation interface. In an experimental setup, trauma surgeons performed a drilling task using the standard slice-based navigation system, different visualization modes of the augmented reality system, and the combination of both. The integration of a standard slice-based navigation interface into an augmented reality visualization overcomes the shortcomings of both systems.
76. Paul P, Fleig O, Jannin P. Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging 2005; 24:1500-11. [PMID: 16279086] [DOI: 10.1109/tmi.2005.857029]
Abstract
Displaying anatomical and physiological information derived from preoperative medical images in the operating room is critical in image-guided neurosurgery. This paper presents a new approach, referred to as augmented virtuality (AV), for displaying intraoperative views of the operative field over three-dimensional (3-D) multimodal preoperative images on an external screen during surgery. A calibrated stereovision system was set up between the surgical microscope and the binocular tubes. Three-dimensional surface meshes of the operative field were then generated using stereopsis. These reconstructed 3-D surface meshes were displayed, without any additional geometrical transform, over preoperative images of the patient in the physical space. Performance evaluation was achieved using a physical skull phantom. Accuracy of the reconstruction method itself was shown to be within 1 mm (median: 0.76 +/- 0.27 mm), whereas accuracy of the overall approach was shown to be within 3 mm (median: 2.29 +/- 0.59 mm), including the image-to-physical space registration error. We report the results of six surgical cases in which AV was used in conjunction with augmented reality. AV not only enabled vision beyond the cortical surface but also gave an overview of the surgical area. This approach facilitated understanding of the spatial relationship between the operative field and the preoperative multimodal 3-D images of the patient.
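The stereopsis step rests on standard rectified-stereo geometry. A compact sketch of disparity-to-depth conversion and back-projection (the parameter values would come from the stereovision calibration; nothing here is taken from the paper itself):

```python
import numpy as np

def depth_from_disparity(disparity, f_px, baseline_mm):
    """Rectified-stereo depth Z = f * B / d; invalid matches (d <= 0) -> NaN."""
    d = np.where(disparity > 0, disparity.astype(float), np.nan)
    return f_px * baseline_mm / d

def backproject(u, v, Z, f_px, cx, cy):
    """Pixel (u, v) with depth Z -> 3D point in camera coordinates."""
    return np.array([(u - cx) * Z / f_px, (v - cy) * Z / f_px, Z])
```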
Affiliation(s)
- Perrine Paul
- Laboratoire IDM, Faculté de Médecine, 35043 Rennes Cedex, France.
77. Figl M, Ede C, Hummel J, Wanschitz F, Ewers R, Bergmann H, Birkfellner W. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus. IEEE Trans Med Imaging 2005; 24:1492-9. [PMID: 16279085] [DOI: 10.1109/tmi.2005.856746]
Abstract
Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) has been considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization have been implemented, including video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars, as in the case of the system presented in this paper, this procedure can become increasingly difficult since precise camera calibration for every focus and zoom position is required. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR for the full range of available zoom and focus settings. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and dedicated calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a moving target: spatial displacement due to latency was in the range of 1.1-2.8 mm at maximum, and the disparity between the true object and its computed overlay corresponded to a latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment.
Affiliation(s)
- Michael Figl
- Center for Biomedical Engineering and Physics, Medical University of Vienna, A-1090 Vienna, Austria.
78. Labadie RF, Chodhury P, Cetinkaya E, Balachandran R, Haynes DS, Fenlon MR, Jusczyzck AS, Fitzpatrick JM. Minimally invasive, image-guided, facial-recess approach to the middle ear: demonstration of the concept of percutaneous cochlear access in vitro. Otol Neurotol 2005; 26:557-62. [PMID: 16015146] [DOI: 10.1097/01.mao.0000178117.61537.5b]
Abstract
HYPOTHESIS: Image-guided surgery will permit accurate access to the middle ear via the facial recess using a single drill hole from the lateral aspect of the mastoid cortex. BACKGROUND: The widespread use of image-guided methods in otologic surgery has been limited by the need for a system that achieves the necessary level of accuracy with an easy-to-use, noninvasive fiducial marker system. We have developed and recently reported such a system (accuracy within the temporal bone = 0.76 +/- 0.23 mm; n = 234 measurements). With this system, image-guided otologic surgery is feasible. METHODS: Skulls (n = 2) were fitted with a fiducial frame affixed via a dental bite-block and scanned by computed tomography using standard temporal-bone algorithms. The frame was removed and replaced with an infrared emitter used to track the skull during dissection. Tracking was accomplished using an infrared tracker and commercially available software. Using this system in conjunction with a tracked otologic drill, the middle ear was approached via the facial recess using a single drill hole from the lateral aspect of the mastoid cortex. The path of the drill was verified by subsequently performing a traditional temporal bone dissection, preserving the tunnel of bone through which the drill pass had been made. RESULTS: An accurate approach to the middle ear via the facial recess was achieved without violating the canal of the facial nerve, the horizontal semicircular canal, or the external auditory canal. CONCLUSIONS: Image-guided otologic surgery provides access to the cochlea via the facial recess in a minimally invasive, percutaneous fashion. While the present study was confined to an in vitro demonstration, these results warrant in vivo testing, which may lead to clinically applicable access.
Affiliation(s)
- Robert F Labadie
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee 37232-8605, USA.
79. Nijmeh AD, Goodger NM, Hawkes D, Edwards PJ, McGurk M. Image-guided navigation in oral and maxillofacial surgery. Br J Oral Maxillofac Surg 2005; 43:294-302. [PMID: 15993282] [DOI: 10.1016/j.bjoms.2004.11.018]
Abstract
Image-guided surgery is the logical extension of imaging as it integrates previously acquired radiological or nuclear medicine images with the operative field. In conventional image-guided surgery, a surgeon uses a surgical instrument or a pointer to establish correspondence between features in the preoperative images and the surgical scene. This is not ideal because the surgeon has to look away from the operative field to view the data. Augmented reality guidance systems offer a solution to this problem but are limited by deformation of soft tissues. Real-time intraoperative imaging offers a potential solution but is currently only experimental. The additional precision and confidence that this technology provides make it a useful tool, and recent advances in image-guided surgery offer new opportunities in the field of oral and maxillofacial surgery. Here, we review the development, current technologies, and applications of image-guided surgery and illustrate them with two case reports.
Affiliation(s)
- A D Nijmeh
- Department of Radiological Sciences, Guy's Hospital, Floor 23 Guy's Tower, St. Thomas Street, London SE1 9RT, UK.
80. Mayberg MR, LaPresto E, Cunningham EJ. Image-guided endoscopy: description of technique and potential applications. Neurosurg Focus 2005; 19:E10. [PMID: 16078813] [DOI: 10.3171/foc.2005.19.1.11]
Abstract
Object
Neuroendoscopic approaches to lesions of the central nervous system and spine are limited by the loss of stereoscopic vision and high-fidelity image quality inherent in the operating microscope. Image-guided endoscopy (IGE) and image-guided surgery (IGS) have the potential to overcome these limitations. The goal of this study was to evaluate IGE for its potential applications in neurosurgery.
Methods
To determine the feasibility of IGE, a rigid endoscope was tracked using an IGS system that provided navigational data for the endoscope tip and trajectory, as well as a computer-generated, three-dimensional, virtual representation of the image provided by the endoscope.
The IGE procedure was successfully completed in 14 patients (nine with pituitary adenomas, one with a temporal cavernous malformation, and four with unruptured aneurysms). No complications could be attributed to the procedure. Compared with direct microscopy performed using anatomical landmarks, registration of the endoscope and virtual image was highly accurate.
Conclusions
This procedure offers many potential advantages for central nervous system and spinal endoscopy. Advances in IGE may enable its application to regions outside the central nervous system as well.
Affiliation(s)
- Marc R Mayberg
- Seattle Neuroscience Institute, Seattle, Washington 98104, USA.
81. Hawkes DJ, Barratt D, Blackall JM, Chan C, Edwards PJ, Rhode K, Penney GP, McClelland J, Hill DLG. Tissue deformation and shape models in image-guided interventions: a discussion paper. Med Image Anal 2005; 9:163-75. [PMID: 15721231] [DOI: 10.1016/j.media.2004.11.007]
Abstract
This paper promotes the concept of active models in image-guided interventions. We outline the limitations of the rigid-body assumption in image-guided interventions and describe how intraoperative imaging provides a rich source of information on the spatial location of anatomical structures and therapy devices, allowing a preoperative plan to be updated during an intervention. Soft-tissue deformation and variation from an atlas to a particular individual can both be determined using non-rigid registration. Established methods using free-form deformations have a very large number of degrees of freedom. Three examples of deformable models (motion models, biomechanical models and statistical shape models) are used to illustrate how prior information can restrict the number of degrees of freedom of the registration algorithm and thus provide active models for image-guided interventions. We provide preliminary results from applications for each type of model.
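Of the three model types, the statistical shape model is the simplest to sketch: PCA over rigidly aligned training shapes yields a mean shape and a few modes, and a non-rigid registration can then search only over the mode weights instead of a free-form deformation's thousands of parameters. An illustrative sketch (an assumed interface, not the authors' code):

```python
import numpy as np

def build_ssm(shapes):
    """PCA shape model from an (n_samples, n_points*3) array of aligned shapes."""
    mean = shapes.mean(axis=0)
    _, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    variance = S ** 2 / (len(shapes) - 1)   # variance captured by each mode
    return mean, Vt, variance

def synthesize(mean, modes, b):
    """New shape from k mode weights b: the low-DOF space that constrains
    the registration."""
    return mean + b @ modes[: len(b)]
```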
Affiliation(s)
- D J Hawkes
- Division of Imaging Sciences, GKT School of Medicine, King's College London, UK.
82. Liao H, Hata N, Nakajima S, Iwahara M, Sakuma I, Dohi T. Surgical navigation by autostereoscopic image overlay of integral videography. IEEE Trans Inf Technol Biomed 2004; 8:114-21. [PMID: 15217256] [DOI: 10.1109/titb.2004.826734]
Abstract
This paper describes an autostereoscopic image overlay technique that is integrated into a surgical navigation system to superimpose a real three-dimensional (3-D) image onto the patient via a half-silvered mirror. The images are created by employing a modified version of integral videography (IV), which is an animated extension of integral photography. IV records and reproduces 3-D images using a microconvex lens array and flat display; it can display geometrically accurate 3-D autostereoscopic images and reproduce motion parallax without the need for special devices. The use of semitransparent display devices makes it appear that the 3-D image is inside the patient's body. This is the first report of applying an autostereoscopic display with an image overlay system in surgical navigation. Experiments demonstrated that the fast IV rendering technique and patient-image registration method produce an average registration accuracy of 1.13 mm. Experiments using a target in phantom agar showed that the system can guide a needle toward a target with an average error of 2.6 mm. Improvement in the quality of the IV display will make this system practical and its use will increase surgical accuracy and reduce invasiveness.
Affiliation(s)
- Hongen Liao
- Graduate School of Information Technology Science, The University of Tokyo, Tokyo 113-8656, Japan.
83. Edwards PJ, Johnson LG, Hawkes DJ, Fenlon MR, Strong AJ, Gleeson MJ. Clinical experience and perception in stereo augmented reality surgical navigation. Lect Notes Comput Sci 2004. [DOI: 10.1007/978-3-540-28626-4_45]
84. Yamaguchi T, Nakamoto M, Sato Y, Konishi K, Hashizume M, Sugano N, Yoshikawa H, Tamura S. Development of a camera model and calibration procedure for oblique-viewing endoscopes. Comput Aided Surg 2004. [DOI: 10.1080/10929080500163505]
85. Rhode KS, Hill DLG, Edwards PJ, Hipwell J, Rueckert D, Sanchez-Ortiz G, Hegde S, Rahunathan V, Razavi R. Registration and tracking to integrate X-ray and MR images in an XMR facility. IEEE Trans Med Imaging 2003; 22:1369-1378. [PMID: 14606671] [DOI: 10.1109/tmi.2003.819275]
Abstract
We describe a registration and tracking technique to integrate cardiac X-ray images and cardiac magnetic resonance (MR) images acquired in a combined X-ray and MR interventional suite (XMR). Optical tracking is used to determine the transformation matrices relating MR image coordinates and X-ray image coordinates. Calibration of the X-ray projection geometry and tracking of the X-ray C-arm and table enable three-dimensional (3-D) reconstruction of vessel centerlines and catheters from bi-plane X-ray views. We can, therefore, combine single X-ray projection images with registered projection MR images from a volume acquisition, and we can also display 3-D reconstructions of catheters within a 3-D or multi-slice MR volume. Registration errors were assessed using phantom experiments. Errors in the combined projection images (two-dimensional target registration error, TRE) were found to be 2.4 to 4.2 mm, and errors in the integrated volume representation (3-D TRE) were found to be 4.6 to 5.1 mm. These errors are clinically acceptable for alignment of images of the great vessels and the chambers of the heart. Results are shown for two patients. The first involves overlay of a catheter used for invasive pressure measurements on an MR volume that provides anatomical context. The second involves overlay of invasive electrode catheters (including a basket catheter) on a tagged MR volume in order to relate electrophysiology to myocardial motion in a patient with an arrhythmia. Visual assessment of these results suggests the errors were of a similar magnitude to those obtained in the phantom measurements.
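The 3-D reconstruction from bi-plane views reduces to triangulating each point seen in two calibrated projections. A minimal linear-triangulation sketch (the standard method, assumed here; the paper additionally models C-arm and table motion via the optical tracking):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one 3D point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]      # dehomogenize; repeat per centerline point
```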
Affiliation(s)
- Kawal S Rhode
- Division of Imaging Sciences, Guy's, King's & St Thomas' School of Medicine, King's College London, Guy's Hospital, London SE1 9RT, UK
86. Sugano N. Computer-assisted orthopedic surgery. J Orthop Sci 2003; 8:442-8.
Abstract
Computer-assisted surgery (CAS) utilizing robotic or image-guided technologies has been introduced into various orthopedic fields. Navigation and robotic systems are the most advanced parts of CAS, and their range of functions and applications is increasing. Surgical navigation is a visualization system that gives positional information about surgical tools or implants relative to a target organ (bone) on a computer display. There are three types of surgical planning that involve navigation systems. One makes use of volumetric images, such as computed tomography, magnetic resonance imaging, or ultrasound echograms. Another makes use of intraoperative fluoroscopic images. The last type makes use of kinetic information about joints or morphometric information about the target bones obtained intraoperatively. Systems that involve these planning methods are called volumetric image-based navigation, fluoroscopic navigation, and imageless navigation, respectively. To overcome the inaccuracy of hand-controlled positioning of surgical tools, three robotic systems have been developed. One type directs a cutting guide block or a drilling guide sleeve, with surgeons sliding a bone saw or a drill bit through the guide instrument to execute a surgical action. Another type constrains the range of movement of a surgical tool held by a robot arm such as ACROBOT. The last type is an active system, such as ROBODOC or CASPAR, which directs a milling device automatically according to preoperative planning. These CAS systems, their potential, and their limitations are reviewed here. Future technologies and future directions of CAS that will help provide improved patient outcomes in a cost-effective manner are also discussed.
Affiliation(s)
- Nobuhiko Sugano
- Department of Orthopaedic Surgery, Osaka Graduate School of Medicine, 2-2 Yamadaoka, Suita 565-0871, Japan
87. Hawkes DJ, Hill DLG. Medical imaging at Guy's Hospital, King's College London. IEEE Trans Med Imaging 2003; 22:1033-1041. [PMID: 12956259] [DOI: 10.1109/tmi.2003.815866]
89. Labadie R, Fenlon M, Cevikalp H, Harris S, Galloway R, Fitzpatrick J. Image-guided otologic surgery. Int Congr Ser 2003. [DOI: 10.1016/s0531-5131(03)00273-5]
90. Aschke M, Wirtz C, Raczkowsky J, Wörn H, Kunze S. Stereoscopic augmented reality for operating microscopes. Int Congr Ser 2003. [DOI: 10.1016/s0531-5131(03)00271-1]
92. Birkfellner W, Figl M, Matula C, Hummel J, Hanel R, Imhof H, Wanschitz F, Wagner A, Watzinger F, Bergmann H. Computer-enhanced stereoscopic vision in a head-mounted operating binocular. Phys Med Biol 2003; 48:N49-57. [PMID: 12608617] [DOI: 10.1088/0031-9155/48/3/402]
Abstract
Based on the Varioscope, a commercially available head-mounted operating binocular, we have developed the Varioscope AR, a see-through head-mounted display (HMD) for augmented reality visualization that seamlessly fits into the infrastructure of a surgical navigation system. We have assessed the extent to which stereoscopic visualization improves target localization in computer-aided surgery in a phantom study. In order to quantify the depth perception of a user aiming at a given target, we designed a phantom simulating typical clinical situations in skull base surgery. Sixteen steel spheres were fixed at the base of a bony skull, and several typical craniotomies were applied. After CT scans had been taken, the skull was filled with opaque jelly to simulate brain tissue. The positions of the spheres were registered using VISIT, a system for computer-aided surgical navigation. Attempts were then made to locate the steel spheres with a bayonet probe through the craniotomies using VISIT and the Varioscope AR as a stereoscopic display device. Localization of targets 4 mm in diameter using stereoscopic vision and additional visual cues indicating target proximity had a success rate (defined as a first-trial hit rate) of 87.5%. Using monoscopic vision and target proximity indication, the success rate was 66.6%. Omission of visual hints on reaching a target yielded a success rate of 79.2% in the stereo case and 56.25% with monoscopic vision. Time requirements for localizing all 16 targets ranged from 7.5 min (stereo, with proximity cues) to 10 min (mono, without proximity cues). Navigation error is primarily governed by the accuracy of registration in the navigation system, whereas the HMD does not appear to influence localization significantly. We conclude that stereo vision is a valuable tool in augmented reality guided interventions.
93. Yamaguchi T, Nakamoto M, Sato Y, Nakajima Y, Konishi K, Hashizume M, Nishii T, Sugano N, Yoshikawa H, Yonenobu K, Tamura S. Camera model and calibration procedure for oblique-viewing endoscope. Lect Notes Comput Sci 2003. [DOI: 10.1007/978-3-540-39903-2_46]
95. Wanschitz F, Birkfellner W, Figl M, Patruta S, Wagner A, Watzinger F, Yerit K, Schicho K, Hanel R, Kainberger F, Imhof H, Bergmann H, Ewers R. Computer-enhanced stereoscopic vision in a head-mounted display for oral implant surgery. Clin Oral Implants Res 2002; 13:610-6. [PMID: 12519335] [DOI: 10.1034/j.1600-0501.2002.130606.x]
Abstract
We developed a head-mounted display (HMD) with integrated computer-generated stereoscopic projection of target structures and integrated it into VISIT, a dedicated oral implant planning and navigation software package. The HMD is equipped with two miniature computer monitors that project computer-generated graphics stereoscopically into the optical path. Its position is tracked by the navigation system's optical tracker, and target structures are displayed in their true position over the operation site. In order to test the system's accuracy and the spatial perception of the viewer, five interforaminal implants in three dry human mandibles were planned with VISIT and executed using the stereoscopic projection through the HMD. The deviation between planned and achieved implant positions was measured on corresponding computed tomography (CT) images recorded post-operatively. The deviation between planned and achieved implant position at the jaw crest was 0.57 +/- 0.49 mm measured from the lingual and 0.58 +/- 0.4 mm measured from the buccal cortex. At the tip of the implants the deviation was 0.77 +/- 0.63 mm at the lingual and 0.79 +/- 0.71 mm at the buccal cortex. The mean angular deviation between planned and executed implant position was 3.55 +/- 2.07 degrees. The present in vitro experiment indicates that the concept of preoperative planning and transfer to the operative field by an HMD allows an average precision within 1 mm (range up to 3 mm) for the implant position and within 3 degrees (range up to 10 degrees) for the implant inclination. Control during the drilling procedure is significantly improved by stereoscopic vision through the HMD, resulting in a more accurate inclination of the implants.
Affiliation(s)
- Felix Wanschitz
- Department of Oral and Maxillofacial Surgery, University of Vienna, Medical School, General Hospital, Vienna, Austria.
97. Shahidi R, Bax MR, Maurer CR, Johnson JA, Wilkinson EP, Wang B, West JB, Citardi MJ, Manwaring KH, Khadem R. Implementation, calibration and accuracy testing of an image-enhanced endoscopy system. IEEE Trans Med Imaging 2002; 21:1524-1535. [PMID: 12588036] [DOI: 10.1109/tmi.2002.806597]
Abstract
This paper presents a new method for image-guided surgery called image-enhanced endoscopy. Registered real and virtual endoscopic images (perspective volume renderings generated from the same view as the endoscope camera using a preoperative image) are displayed simultaneously; when combined with the ability to vary tissue transparency in the virtual images, this provides surgeons with the ability to see beyond visible surfaces and, thus, provides additional exposure during surgery. A mount with four photoreflective spheres is rigidly attached to the endoscope, and its position and orientation are tracked using an optical position sensor. Generation of virtual images that are accurately registered to the real endoscopic images requires calibration of the tracked endoscope. The calibration process determines intrinsic parameters (representing the projection of three-dimensional points onto the two-dimensional endoscope camera imaging plane) and extrinsic parameters (representing the transformation from the coordinate system of the tracker mount attached to the endoscope to the coordinate system of the endoscope camera), and determines radial lens distortion. The calibration routine is fast, automatic, accurate and reliable, and is insensitive to the rotational orientation of the endoscope. The routine automatically detects, localizes, and identifies dots in a video image snapshot of the calibration target grid and determines the calibration parameters from the sets of known physical coordinates and localized image coordinates of the target grid dots. Using nonlinear lens-distortion correction, which can be performed at real-time rates (30 frames per second), the mean projection error is less than 0.5 mm at distances up to 25 mm from the endoscope tip, and less than 1.0 mm up to 45 mm. Experimental measurements and point-based registration error theory show that the tracking error is about 0.5-0.7 mm at the tip of the endoscope and less than 0.9 mm for all points in the field of view of the endoscope camera at a distance of up to 65 mm from the tip. It is probable that much of the projection error is due to endoscope tracking error rather than calibration error. Two examples of clinical applications are presented to illustrate the usefulness of image-enhanced endoscopy. This method is a useful addition to conventional image-guidance systems, which generally show only the position of the tip (and sometimes the orientation) of a surgical instrument or probe on reformatted image slices.
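A sketch of the kind of radial lens-distortion correction described, using a generic two-coefficient model inverted by fixed-point iteration (the paper's exact model and coefficient values are not given here, so the parameters are illustrative):

```python
import numpy as np

def undistort_radial(pts, k1, k2, f, cx, cy):
    """Invert x_d = x_u * (1 + k1*r^2 + k2*r^4) on normalized coordinates.
    pts: (N, 2) distorted pixel coordinates; returns undistorted pixels."""
    x = (pts[:, 0] - cx) / f
    y = (pts[:, 1] - cy) / f
    xu, yu = x.copy(), y.copy()
    for _ in range(10):                      # converges fast for mild distortion
        r2 = xu ** 2 + yu ** 2
        scale = 1.0 + k1 * r2 + k2 * r2 ** 2
        xu, yu = x / scale, y / scale
    return np.column_stack([xu * f + cx, yu * f + cy])
```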
Affiliation(s)
- Ramin Shahidi
- Image Guidance Laboratories, Department of Neurosurgery, Stanford University, 300 Pasteur Drive, Room S-012, MC 5327, Stanford, CA 94305-5327, USA.
98. Jannin P, Fitzpatrick JM, Hawkes DJ, Pennec X, Shahidi R, Vannier MW. Validation of medical image processing in image-guided therapy. IEEE Trans Med Imaging 2002; 21:1445-1449. [PMID: 12588028] [DOI: 10.1109/tmi.2002.806568]
99. Nakamoto M, Sato Y, Miyamoto M, Nakajima Y, Konishi K, Shimada M, Hashizume M, Tamura S. 3D ultrasound system using a magneto-optic hybrid tracker for augmented reality visualization in laparoscopic liver surgery. Lect Notes Comput Sci 2002. [DOI: 10.1007/3-540-45787-9_19]
100. Tamura S, Hirano M, Chen X, Sato Y, Narumi Y, Hori M, Takahashi S, Nakamura H. Intrabody three-dimensional position sensor for an ultrasound endoscope. IEEE Trans Biomed Eng 2002; 49:1187-94. [PMID: 12374344] [DOI: 10.1109/tbme.2002.803517]
Abstract
To avoid or reduce X-ray exposure in endoscopic examinations and therapy, we are developing, as an alternative to conventional two-dimensional X-ray fluoroscopy, an intrabody navigation system that can directly measure and visualize the three-dimensional (3-D) position of the tip and the trace of an ultrasound endoscope. The proposed system can identify the 3-D location and direction of the endoscope probe inserted into the body to furnish endoscopic images. A marker transducer(s) placed on the surface of the body transmits ultrasound pulses, which are visualized as a marker synchronized to the scanning of the endoscope. The position of the marker (the direction and distance of the marker transducer(s) outside the body relative to the scanning probe inside the body) is detected and measured in the scanned image of the ultrasound endoscope. Further, an optical localizer locates the marker transducer(s) with six degrees of freedom. Thus, the proposed method performs inside-body 3-D localization by utilizing the inherent image reconstruction function of the ultrasound endoscope, and it can be used with currently available commercial ultrasound image scanners. The system may be envisaged as a kind of global positioning system for intrabody navigation.
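The physics behind drawing the marker at a plausible depth is the scanner's time-to-distance conversion; a back-of-the-envelope sketch (the 1.54 mm/us soft-tissue speed of sound is the usual assumption, not a figure from the paper, and how the one-way marker pulse maps to displayed depth depends on the synchronization scheme):

```python
C_TISSUE = 1.54   # assumed soft-tissue speed of sound, mm per microsecond

def pulse_distance_mm(t_us, one_way=True):
    """Distance for a pulse time of flight of t_us microseconds; a
    conventional pulse-echo scanner assumes a round trip and uses c*t/2."""
    return C_TISSUE * t_us if one_way else C_TISSUE * t_us / 2.0

print(pulse_distance_mm(65.0))   # ~100 mm travelled in 65 us
```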
Affiliation(s)
- Shinichi Tamura
- Division of Interdisciplinary Image Analysis, Osaka University Medical School, Suita City, Japan.