1
He Z, Xu G, Zhang G, Wang Z, Sun J, Li W, Liu D, Tian Y, Huang W, Cai D. Computed tomography and structured light imaging guided orthopedic navigation puncture system: effective reduction of intraoperative image drift and mismatch. Front Surg 2024; 11:1476245. [PMID: 39450295 PMCID: PMC11499228 DOI: 10.3389/fsurg.2024.1476245]
Abstract
Background Image-guided surgical navigation systems are widely regarded as the benchmark for computer-assisted surgical robotic platforms, yet a persistent challenge remains in addressing intraoperative image drift and mismatch, which can significantly impact the accuracy and precision of surgical procedures. Therefore, further research and development are necessary to mitigate this issue and enhance the overall performance of these advanced surgical platforms. Objective The primary objective is to improve the precision of image-guided puncture navigation systems by developing a computed tomography (CT) and structured light imaging (SLI) based navigation system. Furthermore, we also aim to quantify and visualize intraoperative image drift and mismatch in real time and provide feedback to surgeons, ensuring that surgical procedures are executed with accuracy and reliability. Methods A CT-SLI guided orthopedic navigation puncture system was developed. Polymer bandages are employed to pressurize, plasticize, immobilize and toughen the surface of a specimen for surgical operations. Preoperative CT images of the specimen are acquired, a 3D navigation map is reconstructed and a puncture path planned accordingly. During surgery, an SLI module captures and reconstructs the 3D surfaces of both the specimen and a guiding tube for the puncture needle. The SLI-reconstructed 3D surface of the specimen is matched to the CT navigation map via two-step point cloud registration, while the SLI-reconstructed 3D surface of the guiding tube is fitted by a cylindrical model, which is in turn aligned with the planned puncture path. The proposed system was tested and evaluated using 20 formalin-soaked lower limb cadaver specimens preserved at a local hospital. Results The proposed method achieved image registration RMS errors of 0.576 ± 0.146 mm and 0.407 ± 0.234 mm between preoperative CT and intraoperative SLI surface models and between preoperative and postoperative CT surface models, respectively. In addition, preoperative and postoperative specimen surface and skeletal drifts were 0.033 ± 0.272 mm and 0.235 ± 0.197 mm, respectively. Conclusion The results indicate that the proposed method is effective in reducing intraoperative image drift and mismatch. The system also visualizes intraoperative image drift and mismatch and provides real-time visual feedback to surgeons.
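The coarse-to-fine surface matching described in this abstract can be prototyped with generic point-cloud tooling. The sketch below is an illustrative approximation only, not the authors' CT-SLI pipeline: it seeds an ICP refinement (Open3D >= 0.13 API assumed) with a crude centroid/principal-axis alignment, and all thresholds are hypothetical.

```python
# Illustrative two-step surface registration: PCA-based coarse alignment followed by
# ICP refinement. A generic sketch of the idea, not the published CT-SLI method.
import numpy as np
import open3d as o3d  # assumes Open3D >= 0.13

def pca_coarse_alignment(src_pts, tgt_pts):
    """Rough initial pose aligning centroids and principal axes of two (N, 3) clouds.
    Principal-axis sign ambiguity means this seed can be flipped; it is only meant
    to bring ICP into the right neighborhood."""
    src_c, tgt_c = src_pts.mean(axis=0), tgt_pts.mean(axis=0)
    _, _, v_src = np.linalg.svd(src_pts - src_c)   # rows of v_* are principal axes
    _, _, v_tgt = np.linalg.svd(tgt_pts - tgt_c)
    rot = v_tgt.T @ v_src
    if np.linalg.det(rot) < 0:                     # keep a proper rotation (no reflection)
        v_src[-1] *= -1
        rot = v_tgt.T @ v_src
    init = np.eye(4)
    init[:3, :3] = rot
    init[:3, 3] = tgt_c - rot @ src_c
    return init

def register_surface_to_ct(sli_pts, ct_pts, max_corr_dist=2.0):
    """Register an intraoperative surface cloud (source) onto a CT-derived surface (target)."""
    src = o3d.geometry.PointCloud(); src.points = o3d.utility.Vector3dVector(sli_pts)
    tgt = o3d.geometry.PointCloud(); tgt.points = o3d.utility.Vector3dVector(ct_pts)
    init = pca_coarse_alignment(sli_pts, ct_pts)               # step 1: coarse
    result = o3d.pipelines.registration.registration_icp(     # step 2: fine
        src, tgt, max_corr_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation, result.inlier_rmse
```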
Affiliation(s)
- Zaopeng He
- The Third Affiliated Hospital and Third School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Lecong Hospital of Shunde, Foshan, China
- Guanghua Xu
- Lecong Hospital of Shunde, Foshan, China
- Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Guangdong Provincial Key Laboratory of Medical Biomechanics, National Key Discipline of Human Anatomy and School of Basic Medical Sciences, Southern Medical University, Guangzhou, China
- Guodong Zhang
- Department of Orthopedics, Affiliated Hospital of Putian University, Putian, China
- Zeyu Wang
- School of Basic Medical Sciences, Yanbian University, Yanbian, China
- Wei Li
- Lecong Hospital of Shunde, Foshan, China
- Dongbo Liu
- Lecong Hospital of Shunde, Foshan, China
- Yibin Tian
- College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, China
- Wenhua Huang
- The Third Affiliated Hospital and Third School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Guangdong Provincial Key Laboratory of Medical Biomechanics, National Key Discipline of Human Anatomy and School of Basic Medical Sciences, Southern Medical University, Guangzhou, China
- Daozhang Cai
- The Third Affiliated Hospital and Third School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Orthopedic Hospital of Guangdong Province, Academy of Orthopedics Guangdong Province, Guangzhou, China
2
Liebmann F, von Atzigen M, Stütz D, Wolf J, Zingg L, Suter D, Cavalcanti NA, Leoty L, Esfandiari H, Snedeker JG, Oswald MR, Pollefeys M, Farshad M, Fürnstahl P. Automatic registration with continuous pose updates for marker-less surgical navigation in spine surgery. Med Image Anal 2024; 91:103027. [PMID: 37992494 DOI: 10.1016/j.media.2023.103027]
Abstract
Established surgical navigation systems for pedicle screw placement have been proven to be accurate, but still reveal limitations in registration or surgical guidance. Registration of preoperative data to the intraoperative anatomy remains a time-consuming, error-prone task that includes exposure to harmful radiation. Surgical guidance through conventional displays has well-known drawbacks, as information cannot be presented in-situ and from the surgeon's perspective. Consequently, radiation-free and more automatic registration methods with subsequent surgeon-centric navigation feedback are desirable. In this work, we present a marker-less approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner. A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for preoperative models, which is then refined for each vertebra individually and updated in real time with GPU acceleration while handling surgeon occlusions. Intuitive surgical guidance is provided through integration into an augmented reality based navigation system. The registration method was verified on a public dataset with a median of 100% successful registrations, a median target registration error of 2.7 mm, a median screw trajectory error of 1.6° and a median screw entry point error of 2.3 mm. Additionally, the whole pipeline was validated in an ex-vivo surgery, yielding 100% screw accuracy and a median target registration error of 1.0 mm. Our results meet clinical demands and emphasize the potential of RGB-D data for fully automatic registration approaches in combination with augmented reality guidance.
Affiliation(s)
- Florentin Liebmann
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Laboratory for Orthopaedic Biomechanics, ETH Zurich, Zurich, Switzerland.
- Marco von Atzigen
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Laboratory for Orthopaedic Biomechanics, ETH Zurich, Zurich, Switzerland
- Dominik Stütz
- Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland
- Julian Wolf
- Product Development Group, ETH Zurich, Zurich, Switzerland
- Lukas Zingg
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Daniel Suter
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Nicola A Cavalcanti
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland; Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Laura Leoty
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Hooman Esfandiari
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Jess G Snedeker
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, Zurich, Switzerland
- Martin R Oswald
- Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland; Computer Vision Lab, University of Amsterdam, Amsterdam, Netherlands
- Marc Pollefeys
- Computer Vision and Geometry Group, ETH Zurich, Zurich, Switzerland; Microsoft Mixed Reality and AI Zurich Lab, Zurich, Switzerland
- Mazda Farshad
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Zurich, Switzerland
3
Kwon KH, Kim MY. Robust H-K Curvature Map Matching for Patient-to-CT Registration in Neurosurgical Navigation Systems. Sensors (Basel) 2023; 23:4903. [PMID: 37430817 DOI: 10.3390/s23104903]
Abstract
Image-to-patient registration is a coordinate system matching process between real patients and medical images, required to actively utilize medical images such as computed tomography (CT) during surgery. This paper mainly deals with a markerless method utilizing scan data of patients and 3D data from CT images. The 3D surface data of the patient are registered to CT data using computer-based optimization methods such as the iterative closest point (ICP) algorithm. However, if a proper initial alignment is not provided, the conventional ICP algorithm converges slowly and is prone to the local minimum problem. We propose an automatic and robust 3D data registration method that can accurately find a proper initial location for the ICP algorithm using curvature matching. The proposed method finds and extracts the matching area for 3D registration by converting 3D CT data and 3D scan data to 2D curvature images and by performing curvature matching between them. Curvature features are robust to translation, rotation, and even some deformation. The proposed image-to-patient registration is completed with a precise 3D registration of the extracted partial 3D CT data and the patient's scan data using the ICP algorithm.
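The curvature images at the heart of this initialization strategy are straightforward to compute once each surface is expressed as a depth map. The snippet below is a minimal numpy sketch (the smoothing scale and the Monge-patch depth parameterization are assumptions, not the authors' implementation); the resulting H and K maps can then be matched with ordinary 2D template matching to seed ICP.

```python
# Mean (H) and Gaussian (K) curvature images from a depth map z(x, y), the kind of
# 2D representation that curvature matching can operate on. Generic sketch only.
import numpy as np
from scipy.ndimage import gaussian_filter

def hk_curvature_images(depth, sigma=2.0):
    z = gaussian_filter(depth.astype(float), sigma)    # suppress sensor noise first
    zy, zx = np.gradient(z)                            # first derivatives (rows ~ y, cols ~ x)
    zxy, zxx = np.gradient(zx)                         # second derivatives
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    # Standard Monge-patch curvature formulas for the surface (x, y, z(x, y)).
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return H, K
```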
Affiliation(s)
- Ki Hoon Kwon
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Min Young Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Research Center for Neurosurgical Robotic System, Kyungpook National University, Daegu 41566, Republic of Korea
4
Fan X, Zhu Q, Tu P, Joskowicz L, Chen X. A review of advances in image-guided orthopedic surgery. Phys Med Biol 2023; 68. [PMID: 36595258 DOI: 10.1088/1361-6560/acaae9]
Abstract
Orthopedic surgery remains technically demanding due to the complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased the surgical risk and improved the operation results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics in image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI and DL based medical image segmentation, 3D visualization and surgical planning procedures are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation are reviewed. Furthermore, the combination of the surgical navigation system with AR and robotic technology is also discussed. Finally, the current issues and prospects of the IGOS system are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers, and researchers involved in the research and development of this area.
Affiliation(s)
- Xingqi Fan
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Qiyang Zhu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
5
Koike T, Kin T, Tanaka S, Sato K, Uchida T, Takeda Y, Uchikawa H, Kiyofuji S, Saito T, Takami H, Takayanagi S, Mukasa A, Oyama H, Saito N. Development of a New Image-Guided Neuronavigation System: Mixed-Reality Projection Mapping Is Accurate and Feasible. Oper Neurosurg (Hagerstown) 2021; 21:549-557. [PMID: 34634817 DOI: 10.1093/ons/opab353]
Abstract
BACKGROUND Image-guided systems improve the safety, functional outcome, and overall survival of neurosurgery but require extensive equipment. OBJECTIVE To develop an image-guided surgery system that combines the brain surface photographic texture (BSP-T) captured during surgery with 3-dimensional computer graphics (3DCG) using projection mapping. METHODS Patients who underwent initial surgery with brain tumors were prospectively enrolled. The texture of the 3DCG (3DCG-T) was obtained from 3DCG under similar conditions as those when capturing the brain surface photographs. The position and orientation at the time of 3DCG-T acquisition were used as the reference. The correct position and orientation of the BSP-T were obtained by aligning the BSP-T with the 3DCG-T using normalized mutual information. The BSP-T was combined with and displayed on the 3DCG using projection mapping. This mixed-reality projection mapping (MRPM) was used prospectively in 15 patients (mean age 46.6 yr, 6 males). The difference between the centerlines of surface blood vessels on the BSP-T and 3DCG constituted the target registration error (TRE) and was measured in 16 fields of the craniotomy area. We also measured the time required for image processing. RESULTS The TRE was measured at 158 locations in the 15 patients, with an average of 1.19 ± 0.14 mm (mean ± standard error). The average image processing time was 16.58 min. CONCLUSION Our MRPM method does not require extensive equipment while presenting information of patients' anatomy together with medical images in the same coordinate system. It has the potential to improve patient safety.
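The texture-to-texture alignment step relies on normalized mutual information, which is compact to express. Below is a minimal histogram-based sketch of the NMI score (Studholme's form; the 64-bin choice is arbitrary): a search over 2D pose parameters that maximizes this score would align the BSP-T with the 3DCG-T.

```python
# Normalized mutual information (NMI) between two grayscale images.
# Minimal sketch; in registration, the pose maximizing this score is selected.
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # Studholme's NMI: (H(A) + H(B)) / H(A, B), maximal when the images are aligned.
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```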
Affiliation(s)
- Tsukasa Koike
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Taichi Kin
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Shota Tanaka
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Katsuya Sato
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Tatsuya Uchida
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Yasuhiro Takeda
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Hiroki Uchikawa
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Satoshi Kiyofuji
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Toki Saito
- Department of Clinical Information Engineering, The University of Tokyo, Tokyo, Japan
- Hirokazu Takami
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
- Akitake Mukasa
- Department of Neurosurgery, Kumamoto University, Kumamoto, Japan
- Hiroshi Oyama
- Department of Clinical Information Engineering, The University of Tokyo, Tokyo, Japan
- Nobuhito Saito
- Department of Neurosurgery, The University of Tokyo, Tokyo, Japan
6
Liebmann F, Stütz D, Suter D, Jecklin S, Snedeker JG, Farshad M, Fürnstahl P, Esfandiari H. SpineDepth: A Multi-Modal Data Collection Approach for Automatic Labelling and Intraoperative Spinal Shape Reconstruction Based on RGB-D Data. J Imaging 2021; 7:164. [PMID: 34460800 PMCID: PMC8471818 DOI: 10.3390/jimaging7090164]
Abstract
Computer aided orthopedic surgery suffers from low clinical adoption, despite increased accuracy and patient safety. This can partly be attributed to cumbersome and often radiation intensive registration methods. Emerging RGB-D sensors combined with artificial intelligence data-driven methods have the potential to streamline these procedures. However, developing such methods requires vast amounts of data. To this end, a multi-modal approach that enables acquisition of large clinical data, tailored to pedicle screw placement, using RGB-D sensors and a co-calibrated high-end optical tracking system was developed. The resulting dataset comprises RGB-D recordings of pedicle screw placement along with individually tracked ground truth poses and shapes of spine levels L1-L5 from ten cadaveric specimens. Besides a detailed description of our setup, quantitative and qualitative outcome measures are provided. We found a mean target registration error of 1.5 mm. The median deviation between measured and ground truth bone surface was 2.4 mm. In addition, a surgeon rated the overall alignment based on 10% random samples as 5.8 on a scale from 1 to 6. Generation of labeled RGB-D data for orthopedic interventions with satisfactory accuracy is feasible, and its publication shall promote future development of data-driven artificial intelligence methods for fast and reliable intraoperative registration.
Affiliation(s)
- Florentin Liebmann
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland
- Dominik Stütz
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Computer Vision and Geometry Group, ETH Zurich, 8093 Zurich, Switzerland
- Daniel Suter
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Sascha Jecklin
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Jess G. Snedeker
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Mazda Farshad
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Hooman Esfandiari
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
7
Cai Y, Wu S, Fan X, Olson J, Evans L, Lollis S, Mirza SK, Paulsen KD, Ji S. A level-wise spine registration framework to account for large pose changes. Int J Comput Assist Radiol Surg 2021; 16:943-953. [PMID: 33973113 PMCID: PMC8358825 DOI: 10.1007/s11548-021-02395-0]
Abstract
PURPOSE Accurate and efficient spine registration is crucial to the success of spine image guidance. However, changes in spine pose cause intervertebral motion that can lead to significant registration errors. In this study, we develop a geometrical rectification technique via nonlinear principal component analysis (NLPCA) to achieve level-wise vertebral registration that is robust to large changes in spine pose. METHODS We used explanted porcine spines and live pigs to develop and test our technique. Each sample was scanned with preoperative CT (pCT) in an initial pose and rescanned with intraoperative stereovision (iSV) in a different surgical posture. Patient registration rectified arbitrary spinal postures in pCT and iSV into a common, neutral pose through a parameterized moving-frame approach. Topologically encoded depth projection 2D images were then generated to establish invertible point-to-pixel correspondences. Level-wise point correspondences between pCT and iSV vertebral surfaces were generated via 2D image registration. Finally, closed-form vertebral level-wise rigid registration was obtained by directly mapping 3D surface point pairs. Implanted mini-screws were used as fiducial markers to measure registration accuracy. RESULTS In seven explanted porcine spines and two live animal surgeries (maximum in-spine pose change of 87.5 mm and 32.7 degrees averaged from all spines), average target registration errors (TRE) of 1.70 ± 0.15 mm and 1.85 ± 0.16 mm were achieved, respectively. The automated spine rectification took 3-5 min, followed by an additional 30 s for depth image projection and level-wise registration. CONCLUSIONS Accuracy and efficiency of the proposed level-wise spine registration support its application in human open spine surgeries. The registration framework itself may also be applicable to other intraoperative imaging modalities such as ultrasound and MRI, which may expand the utility of the approach in spine registration in general.
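The final step, closed-form rigid registration from matched 3D surface point pairs, is the classical Kabsch solution. A self-contained numpy sketch (generic, not the authors' code) is shown below.

```python
# Closed-form rigid registration (Kabsch algorithm) from matched 3D point pairs,
# i.e., the least-squares R, t minimizing sum ||R @ src_i + t - tgt_i||^2.
import numpy as np

def rigid_from_correspondences(src, tgt):
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```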
Affiliation(s)
- Yunliang Cai
- Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA, 01609, USA
- Shaoju Wu
- Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA, 01609, USA
- Xiaoyao Fan
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Jonathan Olson
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Linton Evans
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Scott Lollis
- University of Vermont Medical Center, Burlington, VT, 05401, USA
- Sohail K Mirza
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Keith D Paulsen
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Songbai Ji
- Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA, 01609, USA
8
Li W, Fan J, Li S, Tian Z, Zheng Z, Ai D, Song H, Yang J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front Neurorobot 2021; 15:636772. [PMID: 34054454 PMCID: PMC8160243 DOI: 10.3389/fnbot.2021.636772]
Abstract
Three-dimensional scanners have been widely applied in image-guided surgery (IGS) given their potential to solve the image-to-patient registration problem. How to perform a reliable calibration between a 3D scanner and an external tracker is especially important for these applications. This study proposes a novel method for calibrating the extrinsic parameters of a 3D scanner in the coordinate system of an optical tracker. We bound an optical marker to a 3D scanner and designed a dedicated 3D benchmark for calibration. We then proposed a two-step calibration method based on the point-set registration technique and a nonlinear optimization algorithm to obtain the extrinsic matrix of the 3D scanner. We applied the repeat scan registration error (RSRE) as the cost function in the optimization process. Subsequently, we evaluated the performance of the proposed method on a recaptured verification dataset through RSRE and Chamfer distance (CD). In comparison with the calibration method based on a 2D checkerboard, the proposed method achieved a lower RSRE (1.73 mm vs. 2.10, 1.94, and 1.83 mm) and CD (2.83 mm vs. 3.98, 3.46, and 3.17 mm). We also constructed a surgical navigation system to further explore the application of the tracked 3D scanner in image-to-patient registration. We conducted a phantom study to verify the accuracy of the proposed method and analyze the relationship between the calibration accuracy and the target registration error (TRE). The proposed scanner-based image-to-patient registration method was also compared with the fiducial-based method, and TRE and operation time (OT) were used to evaluate the registration results. The proposed registration method achieved improved registration efficiency (50.72 ± 6.04 vs. 212.97 ± 15.91 s in the head phantom study). Although the TRE of the proposed registration method met the clinical requirements, its accuracy was lower than that of the fiducial-based registration method (1.79 ± 0.17 mm vs. 0.92 ± 0.16 mm in the head phantom study). We summarized and analyzed the limitations of the scanner-based image-to-patient registration method and discussed its possible development.
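The nonlinear second step of such a calibration can be prototyped as a 6-DoF least-squares problem over the scanner-to-marker extrinsic, with RSRE-style residuals: repeated scans of a static object should coincide once mapped into tracker space. The sketch below, using scipy, is only an illustrative analogue of that idea; the parameterization, cost details, and the choice of the first scan as reference are assumptions, and the initial guess would normally come from the preceding point-set registration step.

```python
# Refining the scanner->marker extrinsic X by minimizing a repeat-scan registration
# error: points from repeated scans of a static object, mapped through
# T_tracker<-marker_i @ X, should coincide in tracker coordinates. Illustrative sketch.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def to_matrix(params):
    """6-DoF vector (rotation vector, translation) -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(params[:3]).as_matrix()
    T[:3, 3] = params[3:]
    return T

def rsre_residuals(params, scans, marker_poses):
    X = to_matrix(params)
    # Map every scan into tracker coordinates through its recorded marker pose.
    in_tracker = [(T @ X @ np.c_[pts, np.ones(len(pts))].T).T[:, :3]
                  for pts, T in zip(scans, marker_poses)]
    tree = cKDTree(in_tracker[0])                  # first scan used as the reference
    residuals = [tree.query(pts)[0] for pts in in_tracker[1:]]
    return np.concatenate(residuals)

def calibrate_extrinsics(scans, marker_poses, x0):
    """scans: list of (N_i, 3) scanner-frame point clouds; marker_poses: list of 4x4
    tracker<-marker transforms recorded at each scan; x0: initial 6-DoF guess."""
    fit = least_squares(rsre_residuals, x0, args=(scans, marker_poses))
    return to_matrix(fit.x)
```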
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) CO., LTD., Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
9
Gueziri HE, Yan CXB, Collins DL. Open-source software for ultrasound-based guidance in spinal fusion surgery. Ultrasound Med Biol 2020; 46:3353-3368. [PMID: 32907772 DOI: 10.1016/j.ultrasmedbio.2020.08.005]
Abstract
Spinal instrumentation and surgical manipulations may cause loss of navigation accuracy requiring an efficient re-alignment of the patient anatomy with pre-operative images during surgery. While intra-operative ultrasound (iUS) guidance has shown clear potential to reduce surgery time, compared with clinical computed tomography (CT) guidance, rapid registration aiming to correct for patient misalignment has not been addressed. In this article, we present an open-source platform for pedicle screw navigation using iUS imaging. The alignment method is based on rigid registration of CT to iUS vertebral images and has been designed for fast and fully automatic patient re-alignment in the operating room. Two steps are involved: first, we use the iUS probe's trajectory to achieve an initial coarse registration; then, the registration transform is refined by simultaneously optimizing gradient orientation alignment and mean of iUS intensities passing through the CT-defined posterior surface of the vertebra. We evaluated our approach on a lumbosacral section of a porcine cadaver with seven vertebral levels. We achieved a median target registration error of 1.47 mm (100% success rate, defined by a target registration error <2 mm) when applying the probe's trajectory initial alignment. The approach exhibited high robustness to partial visibility of the vertebra with success rates of 89.86% and 88.57% when missing either the left or right part of the vertebra and robustness to initial misalignments with a success rate of 83.14% for random starts within ±20° rotation and ±20 mm translation. Our graphics processing unit implementation achieves an efficient registration time under 8 s, which makes the approach suitable for clinical application.
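One of the two similarity terms named above, gradient orientation alignment, is easy to state in code once the CT and the reconstructed iUS volume are resampled onto a common grid. The sketch below is a simplified, generic version of such a term (the masking threshold is arbitrary, and it omits the CT-surface intensity term and the GPU implementation described in the abstract).

```python
# Simplified gradient-orientation-alignment score between two co-sampled 3D volumes:
# image gradients should be parallel (or anti-parallel) where the bone surfaces agree.
import numpy as np

def gradient_orientation_alignment(vol_a, vol_b, eps=1e-6):
    ga = np.stack(np.gradient(vol_a.astype(float)), axis=-1)
    gb = np.stack(np.gradient(vol_b.astype(float)), axis=-1)
    na = np.linalg.norm(ga, axis=-1)
    nb = np.linalg.norm(gb, axis=-1)
    mask = (na > eps) & (nb > eps)                       # ignore flat regions
    cos = np.sum(ga[mask] * gb[mask], axis=-1) / (na[mask] * nb[mask])
    return float(np.mean(cos ** 2))                      # squared cosine: sign-invariant
```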
Affiliation(s)
- Houssem-Eddine Gueziri
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada.
- Charles X B Yan
- Joint Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- D Louis Collins
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
10
Gueziri HE, Santaguida C, Collins DL. The state-of-the-art in ultrasound-guided spine interventions. Med Image Anal 2020; 65:101769. [PMID: 32668375 DOI: 10.1016/j.media.2020.101769]
Abstract
During the last two decades, intra-operative ultrasound (iUS) imaging has been employed for various surgical procedures of the spine, including spinal fusion and needle injections. Accurate and efficient registration of pre-operative computed tomography or magnetic resonance images with iUS images are key elements in the success of iUS-based spine navigation. While widely investigated in research, iUS-based spine navigation has not yet been established in the clinic. This is due to several factors including the lack of a standard methodology for the assessment of accuracy, robustness, reliability, and usability of the registration method. To address these issues, we present a systematic review of the state-of-the-art techniques for iUS-guided registration in spinal image-guided surgery (IGS). The review follows a new taxonomy based on the four steps involved in the surgical workflow that include pre-processing, registration initialization, estimation of the required patient to image transformation, and a visualization process. We provide a detailed analysis of the measurements in terms of accuracy, robustness, reliability, and usability that need to be met during the evaluation of a spinal IGS framework. Although this review is focused on spinal navigation, we expect similar evaluation criteria to be relevant for other IGS applications.
Affiliation(s)
- Houssem-Eddine Gueziri
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal (QC), Canada; McGill University, Montreal (QC), Canada.
- Carlo Santaguida
- Department of Neurology and Neurosurgery, McGill University Health Center, Montreal (QC), Canada
- D Louis Collins
- McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal (QC), Canada; McGill University, Montreal (QC), Canada
11
Lee S, Shim S, Ha HG, Lee H, Hong J. Simultaneous Optimization of Patient-Image Registration and Hand-Eye Calibration for Accurate Augmented Reality in Surgery. IEEE Trans Biomed Eng 2020; 67:2669-2682. [PMID: 31976878 DOI: 10.1109/tbme.2020.2967802]
Abstract
OBJECTIVE Augmented reality (AR) navigation using a position sensor in endoscopic surgeries relies on the quality of patient-image registration and hand-eye calibration. Conventional methods collect the necessary data to compute the two output transformation matrices separately. However, the AR display setting during surgery generally differs from that during preoperative processes. Although conventional methods can identify optimal solutions under initial conditions, AR display errors are unavoidable during surgery owing to the inherent computational complexity of AR processes, such as error accumulation over successive matrix multiplications, and tracking errors of the position sensor. METHODS We propose the simultaneous optimization of patient-image registration and hand-eye calibration in an AR environment before surgery. The relationship between the endoscope and a virtual object to be overlaid is first calculated using an endoscopic image, which also functions as a reference during optimization. After including the tracking information from the position sensor, patient-image registration and hand-eye calibration are jointly optimized in a least-squares sense. RESULTS Experiments with synthetic data verify that the proposed method is less sensitive to computation and tracking errors. A phantom experiment with a position sensor is also conducted. The accuracy of the proposed method is significantly higher than that of the conventional method. CONCLUSION The AR accuracy of the proposed method is compared with those of the conventional ones, and the superiority of the proposed method is verified. SIGNIFICANCE This study demonstrates that the proposed method exhibits substantial potential for improving AR navigation accuracy.
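The joint least-squares formulation can be sketched as a single optimization over twelve pose parameters (two 6-DoF transforms) that minimizes the reprojection error of known CT landmarks in the endoscopic images. Everything below (variable names, parameterization, the assumption of pinhole intrinsics K and per-frame tracked poses) is an illustrative reconstruction of the idea, not the authors' implementation.

```python
# Jointly refining patient-image registration (patient marker <- CT) and hand-eye
# calibration (camera <- endoscope marker) by minimizing 2D reprojection error.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def pose(p):
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(p[:3]).as_matrix()
    T[:3, 3] = p[3:]
    return T

def project(K, pts_cam):
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, K, ct_pts, pixels, T_trk_endo, T_trk_pat):
    T_pat_ct = pose(params[:6])       # registration: CT -> patient reference marker
    T_cam_endo = pose(params[6:])     # hand-eye: endoscope marker -> camera
    homog = np.c_[ct_pts, np.ones(len(ct_pts))]
    res = []
    for uv, T_te in zip(pixels, T_trk_endo):
        # CT -> patient marker -> tracker -> endoscope marker -> camera
        T_cam_ct = T_cam_endo @ np.linalg.inv(T_te) @ T_trk_pat @ T_pat_ct
        cam_pts = (T_cam_ct @ homog.T).T[:, :3]
        res.append((project(K, cam_pts) - uv).ravel())
    return np.concatenate(res)

def refine_jointly(x0, K, ct_pts, pixels, T_trk_endo, T_trk_pat):
    """x0: 12-vector stacking the two initial 6-DoF estimates from a conventional,
    separate calibration; pixels: per-frame (N, 2) detections of the N CT landmarks."""
    fit = least_squares(residuals, x0, args=(K, ct_pts, pixels, T_trk_endo, T_trk_pat))
    return pose(fit.x[:6]), pose(fit.x[6:])
```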
12
Evans L, Olson JD, Cai Y, Fan X, Paulsen KD, Roberts DW, Ji S, Lollis SS. Stereovision Co-Registration in Image-Guided Spinal Surgery: Accuracy Assessment Using Explanted Porcine Spines. Oper Neurosurg (Hagerstown) 2019. [PMID: 29518246 DOI: 10.1093/ons/opy023]
Abstract
BACKGROUND Current methods of spine registration for image guidance have a variety of limitations related to accuracy, efficiency, and cost. OBJECTIVE To define the accuracy of stereovision-mediated co-registration of a spinal surgical field. METHODS A total of 10 explanted porcine spines were used. Dorsal soft tissue was removed to a variable degree. Bone screw fiducials were placed in each spine and high-resolution computed tomography (CT) scanning performed. Stereoscopic images were then obtained using a tracked, calibrated stereoscopic camera system; images were processed, reconstructed, and segmented in a semi-automated manner. A multistart registration of the reconstructed spinal surface with preoperative CT was performed. Target registration error (TRE) in the region of the laminae and facets was then determined, using bone screw fiducials not included in the original registration process. Each spine also underwent multilevel laminectomy, and TRE was then recalculated for varying amounts of bone removal. RESULTS The mean TRE of stereovision registration was 2.19 ± 0.69 mm when all soft tissue was removed and 2.49 ± 0.74 mm when limited soft tissue removal was performed. Accuracy of the registration process was not adversely affected by laminectomy. CONCLUSION Stereovision offers a promising means of registering an open, dorsal spinal surgical field. In this study, overall mean accuracy of the registration was 2.21 mm, even when bony anatomy was partially obscured by soft tissue or when partial midline laminectomy had been performed.
Affiliation(s)
- Linton Evans
- Section of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Jonathan D Olson
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Yunliang Cai
- Worcester Polytechnic Institute, Worcester, Massachusetts
- Xiaoyao Fan
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Keith D Paulsen
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- David W Roberts
- Section of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Songbai Ji
- Worcester Polytechnic Institute, Worcester, Massachusetts
- S Scott Lollis
- Division of Neurosurgery, University of Vermont Medical Center, Burlington, Vermont
13
Lollis SS, Fan X, Evans L, Olson JD, Paulsen KD, Roberts DW, Mirza SK, Ji S. Use of Stereovision for Intraoperative Coregistration of a Spinal Surgical Field: A Human Feasibility Study. Oper Neurosurg (Hagerstown) 2019; 14:29-35. [PMID: 28658939 DOI: 10.1093/ons/opx132]
Abstract
BACKGROUND The use of image guidance during spinal surgery has been limited by several anatomic factors such as intervertebral segment motion and ineffective spine immobilization. In its current form, the surgical field is coregistered with a preoperative computed tomography (CT), often obtained in a different spinal conformation, or with intraoperative cross-sectional imaging. Stereovision offers an alternative method of registration. OBJECTIVE To demonstrate the feasibility of stereovision-mediated coregistration of a human spinal surgical field using a proof-of-principle study, and to provide preliminary assessments of the technique's accuracy. METHODS A total of 9 subjects undergoing image-guided pedicle screw placement also underwent stereovision-mediated coregistration with preoperative CT imaging. Stereoscopic images were acquired using a tracked, calibrated stereoscopic camera system mounted on an operating microscope. Images were processed, reconstructed, and segmented in a semi-automated manner. A multistart registration of the reconstructed spinal surface with preoperative CT was performed. Registration accuracy, measured as surface-to-surface distance error, was compared between stereovision registration and a standard registration. RESULTS The mean surface reconstruction error of the stereovision-acquired surface was 2.20 ± 0.89 mm. Intraoperative coregistration with stereovision was performed with a mean error of 1.48 ± 0.35 mm compared to 2.03 ± 0.28 mm using a standard point-based registration method. The average computational time for registration with stereovision was 95 ± 46 s (range 33-184 s) vs 10 to 20 min for standard point-based registration. CONCLUSION Semi-automated registration of a spinal surgical field using stereovision is possible with accuracy that is at least comparable to current landmark-based techniques.
Affiliation(s)
- S Scott Lollis
- Division of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Xiaoyao Fan
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
- Linton Evans
- Division of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Jonathan D Olson
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
- Keith D Paulsen
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
- David W Roberts
- Division of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
- Sohail K Mirza
- Department of Orthopedic Surgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Songbai Ji
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
14
Guha D, Yang VXD. Perspective review on applications of optics in spinal surgery. J Biomed Opt 2018; 23:1-8. [PMID: 29893070 DOI: 10.1117/1.jbo.23.6.060601]
Abstract
Optical technologies may be applied to multiple facets of spinal surgery from diagnostics to intraoperative image guidance to therapeutics. In diagnostics, the current standard remains cross-sectional static imaging. Optical surface scanning tools may have an important role; however, significant work is required to clearly correlate surface metrics to radiographic and clinically relevant spinal anatomy and alignment. In the realm of intraoperative image guidance, optical tracking is widely developed as the current standard of instrument tracking, however remains compromised by line-of-sight issues and more globally cumbersome registration workflows. Surface scanning registration tools are being refined to address concerns over workflow and learning curves, and allow real-time update of tissue deformation; however, the line-of-sight issues plaguing instrument tracking remain to be addressed. In therapeutics, optical applications exist in both visualization, in the form of endoscopes, and ablation, in the form of lasers. Further work is required to extend the feasibility of laser ablation to multiple tissues, including disc, bone, and tumor, in a safe and time-efficient manner. Finally, we postulate some of the short- and long-term opportunities for future growth of optical techniques in the context of spinal surgery. Particular emphasis is placed on intraoperative image guidance, the area of the authors' primary expertise.
Affiliation(s)
- Daipayan Guha
- University of Toronto, Division of Neurosurgery, Toronto, Ontario, Canada
- Victor X D Yang
- University of Toronto, Division of Neurosurgery, Toronto, Ontario, Canada
- Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Ryerson University, Bioengineering and Biophotonics Laboratory, Toronto, Ontario, Canada
15
Afzali M, Ghaffari A, Fatemizadeh E, Soltanian-Zadeh H. Medical image registration using sparse coding of image patches. Comput Biol Med 2016; 73:56-70. [PMID: 27085311 DOI: 10.1016/j.compbiomed.2016.03.022]
Abstract
Image registration is a basic task in medical image processing applications like group analysis and atlas construction. The similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse-coding-based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free-form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing.
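The core idea, that patches of a well-aligned moving image are reconstructed well by a dictionary learned from the reference image, can be mimicked with off-the-shelf tools. The sketch below uses a synthesis dictionary from scikit-learn as an accessible stand-in for the paper's Analysis K-SVD model; the patch size, sparsity level, and per-patch DC removal are all assumptions.

```python
# Patch-wise sparse-coding residual as a (negated) dissimilarity between a reference
# and a moving image. Synthesis-dictionary stand-in for the analysis model in the paper.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def sparse_similarity(reference, moving, patch_size=(8, 8), n_atoms=64, max_patches=2000):
    dim = patch_size[0] * patch_size[1]
    ref = extract_patches_2d(reference, patch_size, max_patches=max_patches,
                             random_state=0).reshape(-1, dim).astype(float)
    mov = extract_patches_2d(moving, patch_size, max_patches=max_patches,
                             random_state=0).reshape(-1, dim).astype(float)
    ref -= ref.mean(axis=1, keepdims=True)   # remove per-patch DC (crude bias-field robustness)
    mov -= mov.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5, random_state=0)
    dico.fit(ref)                            # dictionary trained on reference patches only
    recon = dico.transform(mov) @ dico.components_
    return -float(np.mean((recon - mov) ** 2))   # higher (closer to 0) = better aligned
```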
Affiliation(s)
- Maryam Afzali
- Department of Electrical Engineering, Biomedical Signal and Image Processing Laboratory (BiSIPL), Sharif University of Technology, Tehran, Iran.
- Aboozar Ghaffari
- Department of Electrical Engineering, Biomedical Signal and Image Processing Laboratory (BiSIPL), Sharif University of Technology, Tehran, Iran
- Emad Fatemizadeh
- Department of Electrical Engineering, Biomedical Signal and Image Processing Laboratory (BiSIPL), Sharif University of Technology, Tehran, Iran
- Hamid Soltanian-Zadeh
- Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran; Image Analysis Laboratory, Departments of Radiology and Research Administration, Henry Ford Health System, Detroit, MI, USA
16
Kassab GS, An G, Sander EA, Miga MI, Guccione JM, Ji S, Vodovotz Y. Augmenting Surgery via Multi-scale Modeling and Translational Systems Biology in the Era of Precision Medicine: A Multidisciplinary Perspective. Ann Biomed Eng 2016; 44:2611-25. [PMID: 27015816 DOI: 10.1007/s10439-016-1596-4]
Abstract
In this era of tremendous technological capabilities and increased focus on improving clinical outcomes, decreasing costs, and increasing precision, there is a need for a more quantitative approach to the field of surgery. Multiscale computational modeling has the potential to bridge the gap to the emerging paradigms of Precision Medicine and Translational Systems Biology, in which quantitative metrics and data guide patient care through improved stratification, diagnosis, and therapy. Achievements by multiple groups have demonstrated the potential for (1) multiscale computational modeling, at a biological level, of diseases treated with surgery and the surgical procedure process at the level of the individual and the population; along with (2) patient-specific, computationally-enabled surgical planning, delivery, and guidance and robotically-augmented manipulation. In this perspective article, we discuss these concepts, and cite emerging examples from the fields of trauma, wound healing, and cardiac surgery.
Affiliation(s)
- Ghassan S Kassab
- California Medical Innovations Institute, San Diego, CA, 92121, USA
- Gary An
- Department of Surgery, University of Chicago, Chicago, IL, 60637, USA
- Edward A Sander
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, 52242, USA
- Michael I Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- Julius M Guccione
- Department of Surgery, University of California, San Francisco, CA, 94143, USA
- Songbai Ji
- Thayer School of Engineering, Dartmouth College, Hanover, NH, 03755, USA
- Department of Surgery and of Orthopaedic Surgery, Geisel School of Medicine, Dartmouth College, Hanover, NH, 03755, USA
- Yoram Vodovotz
- Department of Surgery, University of Pittsburgh, W944 Starzl Biomedical Sciences Tower, 200 Lothrop St., Pittsburgh, PA, 15213, USA
- Center for Inflammation and Regenerative Modeling, McGowan Institute for Regenerative Medicine, University of Pittsburgh, Pittsburgh, PA, 15219, USA
17
Ji S, Fan X, Paulsen KD, Roberts DW, Mirza SK, Lollis SS. Intraoperative CT as a registration benchmark for intervertebral motion compensation in image-guided open spinal surgery. Int J Comput Assist Radiol Surg 2015; 10:2009-20. [PMID: 26194485 PMCID: PMC4734629 DOI: 10.1007/s11548-015-1255-5]
Abstract
PURPOSE An accurate and reliable benchmark of registration accuracy and intervertebral motion compensation is important for spinal image guidance. In this study, we evaluated the utility of intraoperative CT (iCT) in place of bone-implanted screws as the ground-truth registration and illustrated its use to benchmark the performance of intraoperative stereovision (iSV). METHODS A template-based, multi-body registration scheme was developed to individually segment and pair corresponding vertebrae between preoperative CT and iCT of the spine. Intervertebral motion was determined from the resulting vertebral pair-wise registrations. The accuracy of the image-driven registration was evaluated using surface-to-surface distance error (SDE) based on segmented bony features and was independently verified using point-to-point target registration error (TRE) computed from bone-implanted mini-screws. Both SDE and TRE were used to assess the compensation accuracy using iSV. RESULTS The iCT-based technique was evaluated on four explanted porcine spines (20 vertebral pairs) with artificially induced motion. We report a registration accuracy of 0.57 ± 0.32 mm (range 0.34-1.14 mm) and 0.29 ± 0.15 mm (range 0.14-0.78 mm) in SDE and TRE, respectively, for all vertebrae pooled, with an average intervertebral rotation of [Formula: see text] (range 1.5°-7.9°). The iSV-based compensation accuracy for one sample (four vertebrae) was 1.32 ± 0.19 mm and 1.72 ± 0.55 mm in SDE and TRE, respectively, exceeding the recommended accuracy of 2 mm. CONCLUSION This study demonstrates the effectiveness of iCT in place of invasive fiducials as a registration ground truth. These findings are important for future development of on-demand spinal image guidance using radiation-free images such as stereovision and ultrasound on human subjects.
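The two evaluation metrics used throughout this study, point-based TRE from implanted mini-screws and surface-to-surface distance error, reduce to a few lines once a candidate rigid transform is available. The sketch below is a generic illustration (it reports the mean nearest-neighbor surface distance; the study's exact SDE definition may differ).

```python
# Target registration error (TRE) from fiducial pairs and a surface-to-surface
# distance error (SDE) via nearest-neighbor distances. Generic evaluation sketch.
import numpy as np
from scipy.spatial import cKDTree

def apply_rigid(T, pts):
    return (T[:3, :3] @ pts.T).T + T[:3, 3]

def target_registration_error(T, fid_src, fid_tgt):
    """Mean distance between mapped source fiducials and their true target positions."""
    return float(np.linalg.norm(apply_rigid(T, fid_src) - fid_tgt, axis=1).mean())

def surface_distance_error(T, surf_src, surf_tgt):
    """Mean nearest-neighbor distance from the mapped source surface to the target surface."""
    dists, _ = cKDTree(surf_tgt).query(apply_rigid(T, surf_src))
    return float(dists.mean())
```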
Affiliation(s)
- Songbai Ji
- Thayer School of Engineering, Dartmouth College, 14 Engineering Drive, Hanover, NH, 03755, USA.
- Geisel School of Medicine, Dartmouth College, Hanover, NH, 03755, USA.
- Xiaoyao Fan
- Thayer School of Engineering, Dartmouth College, 14 Engineering Drive, Hanover, NH, 03755, USA
- Keith D Paulsen
- Thayer School of Engineering, Dartmouth College, 14 Engineering Drive, Hanover, NH, 03755, USA
- Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766, USA
- David W Roberts
- Geisel School of Medicine, Dartmouth College, Hanover, NH, 03755, USA
- Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766, USA
- Sohail K Mirza
- Geisel School of Medicine, Dartmouth College, Hanover, NH, 03755, USA
- Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766, USA
- S Scott Lollis
- Geisel School of Medicine, Dartmouth College, Hanover, NH, 03755, USA
- Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766, USA