1. Riis TS, Lunt S, Kubanek J. MRI free targeting of deep brain structures based on facial landmarks. Brain Stimul 2025; 18:131-137. [PMID: 39755367; PMCID: PMC11910796; DOI: 10.1016/j.brs.2024.12.1478]
Abstract
Emerging neurostimulation methods aim to selectively modulate deep brain structures. Guiding these therapies has presented a substantial challenge, since reliance on imaging modalities such as MRI limits the spectrum of beneficiaries. In this study, we assess the guidance accuracy of a neuronavigation method that does not require taking MRI scans. The method is based on clearly identifiable anatomical landmarks of each subject's face. We compared this technique to the ideal case, MRI-based nonlinear brain registration, and evaluated the accuracy of both methods across ten targets located in deep brain structures: 7 targets in the anterior cingulate cortex, as well as the anterior commissure and posterior commissure. Compared with the ideal case, the average localization error of the MRI-free method was 5.75 ± 2.98 mm (mean ± sd). These findings suggest that this method may provide a sufficient compromise between practicality and accuracy when targeting deep brain structures.
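Landmark-based guidance of this kind ultimately rests on paired-point rigid registration: given the same facial landmarks in image space and on the subject, solve for the rigid transform between them. As a minimal illustrative sketch (the generic closed-form Kabsch/Umeyama solution, not the authors' implementation; the landmark coordinates are made up):

```python
import numpy as np

def paired_point_registration(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/Umeyama):
    finds R, t minimizing sum_i ||R @ src[i] + t - dst[i]||^2."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# five hypothetical facial landmarks in "image" space (coordinates in mm)
rng = np.random.default_rng(0)
landmarks = rng.uniform(-50.0, 50.0, size=(5, 3))
theta = np.deg2rad(20.0)
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([3.0, -7.0, 12.0])
observed = landmarks @ R_true.T + t_true  # the same landmarks on the subject
R_est, t_est = paired_point_registration(landmarks, observed)
```

With noiseless correspondences the true pose is recovered exactly; in practice, landmark localization error propagates into the reported targeting error.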
Affiliation(s)
- Thomas S Riis
- Department of Biomedical Engineering, 36 S Wasatch Dr, 84112, Salt Lake City, UT, United States
- Seth Lunt
- Department of Biomedical Engineering, 36 S Wasatch Dr, 84112, Salt Lake City, UT, United States
- Jan Kubanek
- Department of Biomedical Engineering, 36 S Wasatch Dr, 84112, Salt Lake City, UT, United States

2. Lee D, Choi A, Mun JH. Deep Learning-Based Fine-Tuning Approach of Coarse Registration for Ear-Nose-Throat (ENT) Surgical Navigation Systems. Bioengineering (Basel) 2024; 11:941. [PMID: 39329683; PMCID: PMC11428421; DOI: 10.3390/bioengineering11090941]
Abstract
Accurate registration between medical images and patient anatomy is crucial for surgical navigation systems in minimally invasive surgeries. This study introduces a novel deep learning-based refinement step to enhance the accuracy of surface registration without disrupting established workflows. The proposed method integrates a machine learning model between conventional coarse registration and ICP fine registration. A deep-learning model was trained using simulated anatomical landmarks with introduced localization errors. The model architecture features global feature-based learning, an iterative prediction structure, and independent processing of rotational and translational components. Validation with silicon-masked head phantoms and CT imaging compared the proposed method to both conventional registration and a recent deep-learning approach. The results demonstrated significant improvements in target registration error (TRE) across different facial regions and depths. The average TRE for the proposed method (1.58 ± 0.52 mm) was significantly lower than that of the conventional (2.37 ± 1.14 mm) and previous deep-learning (2.29 ± 0.95 mm) approaches (p < 0.01). The method showed consistent performance across various facial regions and enhanced registration accuracy for deeper areas. This advancement could significantly enhance precision and safety in minimally invasive surgical procedures.
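The final ICP stage this pipeline refines into can be sketched in a few lines: alternate nearest-neighbour matching with a closed-form rigid fit, starting from the coarse pose. This is a generic point-to-point ICP illustration on synthetic data, not the paper's pipeline (the cloud and misalignment are invented):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    # closed-form least-squares rigid fit for already-paired points (Kabsch)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching with a
    closed-form rigid fit. Assumes a reasonable initial (coarse) pose."""
    tree = cKDTree(dst)
    R_acc, t_acc = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # current correspondence guess
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc

# a synthetic surface cloud and a slightly misaligned copy of it, standing
# in for the small residual error left after coarse registration
rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 100.0, size=(300, 3))
theta = np.deg2rad(3.0)
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
mid = cloud.mean(axis=0)
target = (cloud - mid) @ R_true.T + mid + np.array([2.0, -1.0, 1.5])
R_est, t_est = icp(cloud, target)
residual = np.linalg.norm(cloud @ R_est.T + t_est - target, axis=1).mean()
```

ICP only converges to the correct pose from a nearby start, which is exactly why the paper inserts a learned fine-tuning step between coarse registration and ICP.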
Affiliation(s)
- Dongjun Lee
- Department of Biomechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Ahnryul Choi
- Department of Biomedical Engineering, College of Medicine, Chungbuk National University, Cheongju 28644, Republic of Korea
- Joung Hwan Mun
- Department of Biomechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea

3. van der Woude R, Fitski M, van der Zee JM, van de Ven CP, Bökkerink GMJ, Wijnen MHWA, Meulstee JW, van Doormaal TPC, Siepel FJ, van der Steeg AFW. Clinical Application and Further Development of Augmented Reality Guidance for the Surgical Localization of Pediatric Chest Wall Tumors. J Pediatr Surg 2024; 59:1549-1555. [PMID: 38472040; DOI: 10.1016/j.jpedsurg.2024.02.023]
Abstract
BACKGROUND Surgical treatment of pediatric chest wall tumors requires accurate surgical planning and tumor localization to achieve radical resections while sparing as much healthy tissue as possible. Augmented Reality (AR) could facilitate surgical decision making by improving anatomical understanding and intraoperative tumor localization. We present our clinical experience with the use of an AR system for intraoperative tumor localization during chest wall resections. Furthermore, we present the pre-clinical results of a new registration method to improve our conventional AR system. METHODS From January 2021, we used the HoloLens 2 for pre-incisional tumor localization during all chest wall resections inside our center. A patient-specific 3D model was projected onto the patient using a five-point registration method based on anatomical landmarks. Furthermore, we developed and pre-clinically tested a surface matching method to allow post-incisional AR guidance by performing registration on the exposed surface of the ribs. RESULTS Successful registration and holographic overlay were achieved in eight patients. The projection seemed most accurate when landmarks were positioned in a non-symmetric configuration in proximity to the tumor. Disagreements between the overlay and expected tumor location were mainly due to user-dependent registration errors. The pre-clinical tests of the surface matching method proved the feasibility of registration on the exposed ribs. CONCLUSIONS Our results prove the applicability of AR guidance for the pre- and post-incisional localization of pediatric chest wall tumors during surgery. The system has the potential to enable intraoperative 3D visualization, thereby facilitating surgical planning and management of chest wall resections. LEVEL OF EVIDENCE: IV. TYPE OF STUDY: Treatment Study.
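The observation that the overlay is most accurate when landmarks lie near the tumor has a simple geometric reading: a small rotational registration error pivots about the landmark region, so the overlay error at a target grows linearly with its distance from the landmarks. A small sketch of that relationship (the angles and distances are illustrative, not from the study):

```python
import numpy as np

def rotation_about(axis, angle_rad):
    """Rodrigues' formula: rotation matrix about a unit axis."""
    x, y, z = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

# a 2-degree rotational registration error pivoting at the landmark centroid
R_err = rotation_about(np.array([0.0, 0.0, 1.0]), np.deg2rad(2.0))
centroid = np.zeros(3)

def overlay_error(target):
    # distance between where the hologram places the target and its true spot
    return np.linalg.norm(R_err @ (target - centroid) + centroid - target)

near = np.array([20.0, 0.0, 0.0])   # target 20 mm from the landmarks
far = np.array([80.0, 0.0, 0.0])    # same direction, 80 mm away
# error = 2*sin(theta/2)*distance, so the far target errs 4x as much
```

This is why placing landmarks close to (and asymmetrically around) the tumor limits the damage done by small, user-dependent registration errors.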
Affiliation(s)
- Rémi van der Woude
- Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, the Netherlands; Technical Medicine, TechMed Centre, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, the Netherlands
- Matthijs Fitski
- Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, the Netherlands
- Jasper M van der Zee
- Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, the Netherlands; Technical Medicine, TechMed Centre, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, the Netherlands
- Cornelis P van de Ven
- Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, the Netherlands
- Guus M J Bökkerink
- Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, the Netherlands
- Marc H W A Wijnen
- Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, the Netherlands
- Tristan P C van Doormaal
- Augmedit B.V., Naarden, the Netherlands; Department of Neurosurgery, Brain Division, University Medical Center, Utrecht, the Netherlands
- Françoise J Siepel
- Robotics and Mechatronics, TechMed Centre, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, the Netherlands
- Alida F W van der Steeg
- Princess Máxima Center for Pediatric Oncology, Heidelberglaan 25, 3584 CS, Utrecht, the Netherlands

4. Li Z, Wang M. Rigid point cloud registration based on correspondence cloud for image-to-patient registration in image-guided surgery. Med Phys 2024; 51:4554-4566. [PMID: 38856158; DOI: 10.1002/mp.17243]
Abstract
BACKGROUND Image-to-patient registration aligns preoperative images to intra-operative anatomical structures and is a critical step in image-guided surgery (IGS). The accuracy and speed of this step significantly influence the performance of IGS systems. Rigid registration based on paired points has been widely used in IGS, but studies have shown its limitations in terms of cost, accuracy, and registration time. Therefore, rigid registration of point clouds representing the human anatomical surfaces has become an alternative for image-to-patient registration in IGS systems. PURPOSE We propose a novel correspondence-based rigid point cloud registration method that achieves global registration without the need for pose initialization. The proposed method is less sensitive to outliers than the widely used RANSAC-based registration methods and achieves high accuracy at high speed, which makes it particularly suitable for image-to-patient registration in IGS. METHODS We use the rotation axis and angle to represent the rigid spatial transformation between two coordinate systems. Given a set of correspondences between two point clouds, we first construct a 3D correspondence cloud (CC) from the inlier correspondences and prove that the CC lies on a plane whose normal is the rotation axis between the two point clouds. Thus, the rotation axis can be estimated by fitting a plane to the CC. We further show that when the normals of a pair of corresponding points are projected onto this plane, the angle between the projected normals equals the rotation angle, so the rotation angle can be estimated from an angle histogram. This two-stage estimation also produces a high-quality correspondence subset with a high inlier rate. With the estimated rotation axis, rotation angle, and correspondence subset, the spatial transformation can be computed directly, or estimated with RANSAC in a fast and robust way within only 100 iterations. RESULTS To validate the performance of the proposed registration method, we conducted experiments on the CT-Skull dataset. We first conducted a simulation experiment controlling the initial inlier rate of the correspondence set, and the results showed that the proposed method can effectively obtain a correspondence subset with a much higher inlier rate. We then compared our method with traditional approaches such as ICP, Go-ICP, and RANSAC, as well as recently proposed methods such as TEASER, SC2-PCR, and MAC. Our method outperformed all traditional methods in terms of registration accuracy and speed. While achieving registration accuracy comparable to the recently proposed methods, our method demonstrated superior speed, being almost three times faster than TEASER. CONCLUSIONS Experiments on the CT-Skull dataset demonstrate that the proposed method can effectively obtain a high-quality correspondence subset with a high inlier rate, and that a tiny RANSAC with 100 iterations is sufficient to estimate the optimal transformation for point cloud registration. Our method achieves higher registration accuracy and faster speed than existing widely used methods, demonstrating great potential for image-to-patient registration, where a rigid spatial transformation is needed to align preoperative images to intra-operative patient anatomy.
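The plane property the abstract states can be verified directly: for q = R p + t, a rotation preserves the component of any vector along its own axis n, so n · (q − p) = n · t is the same constant for every inlier correspondence, and the difference vectors lie on a plane with normal n. A sketch of axis recovery by plane fitting (our illustration of that property, not the authors' code):

```python
import numpy as np

def rotation_axis_from_correspondences(P, Q):
    """For q_i = R p_i + t, the difference vectors c_i = q_i - p_i satisfy
    n . c_i = n . t for the rotation axis n (rotation preserves the
    component along its own axis), so they lie on a plane with normal n.
    Fit that plane by SVD and return its normal."""
    C = Q - P
    C = C - C.mean(axis=0)
    _, _, Vt = np.linalg.svd(C)
    return Vt[-1]                  # direction of least variance = plane normal

rng = np.random.default_rng(2)
axis = np.array([1.0, 2.0, 2.0]) / 3.0           # unit rotation axis
theta = np.deg2rad(35.0)
x, y, z = axis
K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
P = rng.normal(size=(100, 3))
Q = P @ R.T + np.array([0.3, -0.8, 0.5])
n_est = rotation_axis_from_correspondences(P, Q)
```

The recovered normal matches the true axis up to sign; with outliers present, a robust plane fit would replace the plain SVD.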
Affiliation(s)
- Zhihao Li
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Manning Wang
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China

5. Li W, Fan J, Li S, Zheng Z, Tian Z, Ai D, Song H, Chen X, Yang J. An incremental registration method for endoscopic sinus and skull base surgery navigation: From phantom study to clinical trials. Med Phys 2023; 50:226-239. [PMID: 35997999; DOI: 10.1002/mp.15941]
Abstract
PURPOSE Surface-based image-to-patient registration in current surgical navigation is mainly achieved with a 3D scanner, which has several limitations in clinical practice, such as an uncontrollable scanning range, complicated operation, and even a high failure rate. An accurate, robust, and easy-to-perform image-to-patient registration method is therefore urgently required. METHODS An incremental point cloud registration method was proposed for surface-based image-to-patient registration. The point cloud in image space was extracted from the computed tomography (CT) image, and a template matching method was applied to remove redundant points. The corresponding point cloud in patient space was incrementally collected by an optically tracked pointer, while a nearest point distance (NPD) constraint was applied to ensure the uniformity of the collected points. A coarse-to-fine registration method under the constraints of coverage ratio (CR) and outliers ratio (OR) was then proposed to obtain the optimal rigid transformation from image to patient space. The proposed method was integrated into the recently developed endoscopic navigation system, and a phantom study and clinical trials were conducted to evaluate its performance. RESULTS The results of the phantom study revealed that the proposed constraints greatly improved the accuracy and robustness of registration. The comparative experimental results revealed that the proposed registration method significantly outperforms the scanner-based method and achieves accuracy comparable to the fiducial-based method. In the clinical trials, the average registration duration was 1.24 ± 0.43 min, the target registration error (TRE) of 294 marker points (59 patients) was 1.25 ± 0.40 mm, and the lower 97.5% confidence limit of the success rate of positioning marker points exceeded the expected value (97.56% vs. 95.00%), revealing that the accuracy of the proposed method met the clinical requirements (TRE ⩽ 2 mm, p < 0.05). CONCLUSIONS The proposed method offers both high accuracy and convenience, which are absent in the scanner-based and fiducial-based methods. Our findings will help improve the quality of endoscopic sinus and skull base surgery.
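The NPD constraint described above amounts to an online uniformity filter on the pointer stream: a new sample is kept only if it is no closer than a threshold to every sample already collected. A minimal sketch of that idea, assuming a simple brute-force distance check (the paper's actual implementation and threshold are not specified here):

```python
import numpy as np

def collect_with_npd(stream, npd=3.0):
    """Keep an incoming pointer sample only if it is at least `npd`
    units from every sample kept so far, spreading the collected
    points uniformly over the swept surface."""
    kept = []
    for p in stream:
        p = np.asarray(p, dtype=float)
        if not kept or min(np.linalg.norm(p - k) for k in kept) >= npd:
            kept.append(p)
    return np.array(kept)

# pointer sliding along a line, sampled every 1 mm
samples = [(float(x), 0.0, 0.0) for x in range(10)]
kept = collect_with_npd(samples, npd=3.0)   # keeps x = 0, 3, 6, 9
```

For large collections a k-d tree would replace the linear scan, but the acceptance rule is the same.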
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Xiaohong Chen
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China

6. "Image to patient" equal-resolution surface registration supported by a surface scanner: analysis of algorithm efficiency for computer-aided surgery. Int J Comput Assist Radiol Surg 2023; 18:319-328. [PMID: 35831549; PMCID: PMC9889449; DOI: 10.1007/s11548-022-02704-1]
Abstract
PURPOSE The "image to patient" registration procedure is crucial for the accuracy of surgical instrument tracking relative to the medical image during computer-aided surgery. The main aim of this work was to create an equal-resolution surface registration (ERSR) algorithm and analyze its efficiency. METHODS The ERSR algorithm provides two datasets with equal, high resolution and approximately corresponding points. The registered sets are obtained by projecting user-designed rectangle-shaped uniform clouds of points onto the DICOM and surface scanner datasets. The algorithm was tested on a phantom with titanium microscrews. We analyzed the influence of DICOM resolution on the ERSR algorithm and compared ERSR to standard paired-points landmark transform registration. The methods of analysis were target registration error (TRE), distance maps, and their histogram evaluation. RESULTS The mean TRE for ERSR equaled 0.8 ± 0.3 mm (resolution A), 0.8 ± 0.5 mm (resolution B), and 1.0 ± 0.7 mm (resolution C); the mean values were at least 0.4 mm lower than those of landmark transform registration. The distance maps between the scanner-derived model and the CT-based model were analyzed by histogram. The frequency of the first bin in the distance-map histogram for ERSR was about 0.6 for all three DICOM resolutions, three times higher than for landmark transform registration. The results were statistically analyzed using the Wilcoxon signed-rank test (alpha = 0.05). CONCLUSION The tests proved a statistically significantly higher efficiency of equal-resolution surface registration compared to the landmark transform algorithm. The lower resolution of the CT DICOM dataset did not degrade the efficiency of the ERSR algorithm; we observed a significantly lower sensitivity to decreased resolution than with paired-points landmark transform registration.
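The distance-map-with-histogram evaluation used here is easy to reproduce in outline: measure each vertex's nearest distance to the other surface, bin the distances, and report the first-bin frequency as a surface-agreement score. A generic sketch with invented surfaces and bin width, not the study's data:

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_map_histogram(model_a, model_b, bin_width=0.5, n_bins=10):
    """Nearest-surface distance from each vertex of model_a to model_b,
    histogrammed; the first-bin frequency summarizes how much of the
    surface agrees to within `bin_width` mm."""
    d, _ = cKDTree(model_b).query(model_a)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, n_bins * bin_width))
    return hist / len(model_a)

rng = np.random.default_rng(3)
surface = rng.uniform(0.0, 50.0, size=(500, 3))        # CT-based model vertices
# a slightly perturbed copy, standing in for the scanner-derived model
noisy = surface + rng.normal(scale=0.1, size=surface.shape)
freq = distance_map_histogram(noisy, surface)
```

A well-registered pair concentrates nearly all mass in the first bin, which is exactly the statistic the study compares across registration methods.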

7. Fu K, Chen X, Wang M. Global optimization point-set registration based on translation/rotation decoupling for image-guided surgery applications. Med Phys 2022; 49:7303-7315. [PMID: 35771730; DOI: 10.1002/mp.15839]
Abstract
PURPOSE In image-guided surgery systems, image-to-patient spatial registration obtains the spatial transformation between the image space and the actual operating space. Although image-to-patient spatial registration methods using paired-point or surface matching are used in some image-guided neurosurgery systems, a key problem is that a globally optimal registration result cannot be guaranteed. Therefore, this paper proposes a new rotation-invariant feature for decoupling the rotation and translation spaces, based on which a globally optimal point-set registration method is proposed. METHODS The new rotation-invariant features, constructed from edges and angles, are rotation invariant and have high feature resolution; some of them are invariant to both rotation and translation. To obtain the globally optimal solution, a branch-and-bound search strategy is used to search the parameter space of the translation while simultaneously reducing the computational cost. The registration accuracy of the spatial registration method is analyzed by comparing the estimated transform with the standard transform to calculate the registration error. RESULTS To validate the performance of the proposed spatial registration method, the registration performance was analyzed by comparing the experimental results with those of two mainstream registration methods (the iterative closest point [ICP] registration method and the coherent point drift method). In the experiments, the comparison was based on registration accuracy and execution time. Our registration method obtains higher accuracy in a shorter time in most cases. Moreover, when ICP is used to further refine our results, it converges in a very short time, which shows that our method provides a good initial pose for ICP and can help it converge to the globally optimal solution faster. Our method achieves an average rotation error of 0.124 degrees and an average translation error of 0.38 mm on 10 clinical datasets. CONCLUSIONS The results reveal that the surface registration method based on translation-rotation decoupling can achieve superior performance regarding both registration accuracy and time efficiency in image-to-patient spatial registration.
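The core idea of rotation-invariant (and partly translation-invariant) features built from edges and angles can be shown on a point triplet: edge lengths and the angle they enclose do not change under any rigid motion, so they can be matched before the pose is known. An illustrative construction, not the paper's exact feature:

```python
import numpy as np

def triplet_feature(a, b, c):
    """Edge lengths and the cosine of the enclosed angle are invariant
    under any rotation AND translation, so such features can be matched
    without knowing the pose -- the idea behind decoupled registration."""
    e1, e2 = b - a, c - a
    l1, l2 = np.linalg.norm(e1), np.linalg.norm(e2)
    return np.array([l1, l2, np.linalg.norm(c - b), e1 @ e2 / (l1 * l2)])

rng = np.random.default_rng(4)
a, b, c = rng.normal(size=(3, 3))
# an arbitrary rigid motion: a random proper rotation plus a translation
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q if np.linalg.det(q) > 0 else -q
t = np.array([5.0, -2.0, 1.0])
f_before = triplet_feature(a, b, c)
f_after = triplet_feature(R @ a + t, R @ b + t, R @ c + t)
```

Because the features agree regardless of pose, correspondences found through them constrain the rotation independently of the translation, which is what makes the branch-and-bound search over translation alone tractable.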
Affiliation(s)
- Kexue Fu
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Xinrong Chen
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China; Academy for Engineering and Technology, Fudan University, Shanghai, China
- Manning Wang
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China

8. Liu Y, Yao D, Zhai Z, Wang H, Chen J, Wu C, Qiao H, Li H, Shi Y. Fusion of multimodality image and point cloud for spatial surface registration for knee arthroplasty. Int J Med Robot 2022; 18:e2426. [DOI: 10.1002/rcs.2426]
Affiliation(s)
- Yanjing Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Demin Yao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zanjing Zhai
- Shanghai Key Laboratory of Orthopaedic Implants, Shanghai, China
- Department of Orthopaedic Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hui Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Jiayi Chen
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Chuanfu Wu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Hua Qiao
- Shanghai Key Laboratory of Orthopaedic Implants, Shanghai, China
- Department of Orthopaedic Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Huiwu Li
- Shanghai Key Laboratory of Orthopaedic Implants, Shanghai, China
- Department of Orthopaedic Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China

9. Hopfgartner A, Burns D, Suppiah S, Martin AR, Hardisty M, Whyne CM. Bullseye EVD: preclinical evaluation of an intra-procedural system to confirm external ventricular drainage catheter positioning. Int J Comput Assist Radiol Surg 2022; 17:1191-1199. [PMID: 35633491; DOI: 10.1007/s11548-022-02679-z]
Abstract
PURPOSE External ventricular drainage (EVD) is a life-saving procedure indicated for elevated intracranial pressure. A catheter is inserted into the ventricles to drain cerebrospinal fluid and release the pressure on the brain. However, the standard freehand EVD technique results in catheter malpositioning in up to 60.1% of procedures. This proof-of-concept study aimed to evaluate the registration accuracy of a novel image-based verification system "Bullseye EVD" in a preclinical cadaveric model of catheter placement. METHODS Experimentation was performed on both sides of 3 cadaveric heads (n = 6). After a pre-interventional CT scan, a guidewire simulating the EVD catheter was inserted as in a clinical EVD procedure. 3D structured light images (Einscan, Shining 3D, China) were acquired of an optical tracker placed over the guidewire on the surface of the scalp, along with three distinct cranial regions (scalp, face, and ear). A computer vision algorithm was employed to determine the guidewire position based on the pre-interventional CT scan and the intra-procedural optical imaging. A post-interventional CT scan was used to validate the performance of the Bullseye optical imaging system in terms of trajectory and offset errors. RESULTS Optical images which combined facial features and exposed scalp within the surgical field resulted in the lowest trajectory and offset errors of 1.28° ± 0.38° and 0.33 ± 0.19 mm, respectively. Mean duration of the optical imaging procedure was 128 ± 35 s. CONCLUSIONS The Bullseye EVD system presents an accurate patient-specific method to verify freehand EVD positioning. Use of facial features was critical to registration accuracy. Workflow automation and development of a user interface must be considered for future clinical evaluation.
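The two accuracy measures reported here — trajectory error (angle between estimated and true catheter directions) and offset error (lateral miss distance) — are straightforward to compute from a pose estimate. A sketch of both metrics; the example values merely echo the magnitudes reported in the abstract and are otherwise invented:

```python
import numpy as np

def trajectory_error_deg(dir_est, dir_true):
    """Angle (degrees) between estimated and true catheter directions."""
    c = dir_est @ dir_true / (np.linalg.norm(dir_est) * np.linalg.norm(dir_true))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def offset_error(tip_est, tip_true, dir_true):
    """Perpendicular distance of the estimated tip from the true trajectory."""
    d = dir_true / np.linalg.norm(dir_true)
    v = tip_est - tip_true
    return np.linalg.norm(v - (v @ d) * d)

dir_true = np.array([0.0, 0.0, 1.0])
ang = np.deg2rad(1.28)
dir_est = np.array([np.sin(ang), 0.0, np.cos(ang)])   # 1.28 deg off-axis
tip_true = np.array([10.0, 20.0, 30.0])
tip_est = tip_true + np.array([0.33, 0.0, 0.7])       # 0.33 mm lateral offset
```

Clamping the cosine before `arccos` guards against floating-point values fractionally outside [-1, 1].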
Affiliation(s)
- Adam Hopfgartner
- Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- David Burns
- Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Suganth Suppiah
- Division of Neurosurgery, University of Toronto, Toronto, ON, Canada
- Allan R Martin
- Department of Neurological Surgery, University of California, Davis, Sacramento, CA, USA
- Michael Hardisty
- Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, Canada
- Cari M Whyne
- Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada

10. Yoo H, Sim T. Automated Machine Learning (AutoML)-based Surface Registration Methodology for Image-guided Surgical Navigation System. Med Phys 2022; 49:4845-4860. [PMID: 35543150; DOI: 10.1002/mp.15696]
Abstract
BACKGROUND While the surface registration technique is relatively safe and requires little operating time, it generally suffers from low accuracy. PURPOSE This research proposes automated machine learning (AutoML)-based surface registration to improve the accuracy of image-guided surgical navigation systems. METHODS The proposed surface registration concept is as follows: first, using a neural network model, a new point cloud that matches the facial information acquired by a passive probe of an optical tracking system (OTS) is extracted from the facial information obtained by computerized tomography (CT). The target registration error (TRE), representing the accuracy of surface registration, is then calculated by applying the iterative closest point (ICP) algorithm to the newly extracted point cloud and the OTS information. In this process, the hyperparameters used in the neural network model and the ICP algorithm are automatically optimized using Bayesian optimization with expected improvement to yield improved registration accuracy. RESULTS Using the proposed surface registration methodology, the average TRE for targets located in the sinus space and nasal cavity of the soft phantoms is (0.939 ± 0.375) mm, a 57.8% improvement compared to the average TRE of (2.227 ± 0.193) mm calculated by the conventional surface registration method (p < 0.01). The performance of the proposed methodology is further evaluated, and the average TREs computed by the proposed methodology and the conventional method are (0.767 ± 0.132) mm and (2.615 ± 0.378) mm, respectively. Additionally, for one healthy adult, the clinical applicability of AutoML-based surface registration is also presented. CONCLUSION Our findings showed that registration accuracy can be improved while maintaining the advantages of the surface registration technique.
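The expected-improvement (EI) acquisition driving the hyperparameter search has a simple closed form. A generic sketch for a minimization objective such as TRE, assuming a Gaussian surrogate with mean `mu` and standard deviation `sigma` at a candidate point (this is the textbook formula, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """EI acquisition for minimization: the expected amount by which a
    candidate hyperparameter setting improves on the best objective value
    f_best seen so far, given the surrogate's mean and std there."""
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    imp = f_best - mu - xi
    z = np.where(sigma > 0, imp / np.maximum(sigma, 1e-12), 0.0)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    # with zero uncertainty, EI collapses to the plain (clipped) improvement
    return np.where(sigma > 0, np.maximum(ei, 0.0), np.maximum(imp, 0.0))
```

Bayesian optimization repeatedly evaluates EI over the hyperparameter space and queries the setting that maximizes it, trading off exploiting low predicted TRE against exploring uncertain regions.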
Affiliation(s)
- Hakje Yoo
- Korea University Research Institute for Medical Bigdata Science, College of Medicine, Korea University, 73 Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Taeyong Sim
- Department of Artificial Intelligence, Sejong University, 209, Neungdong-ro, Gwangjin-gu, Seoul, 05006, Republic of Korea

11. Registration-free workflow for electromagnetic and optical navigation in orbital and craniofacial surgery. Sci Rep 2021; 11:18080. [PMID: 34508161; PMCID: PMC8433137; DOI: 10.1038/s41598-021-97706-5]
Abstract
The accuracy of intra-operative navigation is largely dependent on the intra-operative registration procedure. Next to accuracy, important factors to consider for the registration procedure are invasiveness, time consumption, logistical demands, user-dependency, compatibility and radiation exposure. In this study, a workflow is presented that eliminates the need for a registration procedure altogether: registration-free navigation. In the workflow, the maxillary dental model is fused to the pre-operative imaging data using commercially available virtual planning software. A virtual Dynamic Reference Frame on a splint is designed on the patient’s fused maxillary dentition; during surgery, the splint containing the reference frame is positioned on the patient’s dentition. This alleviates the need for any registration procedure, since the position of the reference frame is known from the design. The accuracy of the workflow was evaluated in a cadaver set-up and compared to bone-anchored fiducial, virtual splint and surface-based registration. The results showed that the accuracy of the workflow was greatly dependent on the tracking technique used: the workflow was the most accurate with electromagnetic tracking, but the least accurate with optical tracking. Although this method offers a time-efficient, non-invasive, radiation-free automatic alternative for registration, clinical implementation is hampered by the unexplained differences in accuracy between tracking techniques.
Collapse
|
12
|
Li J, Deng Z, Shen N, He Z, Feng L, Li Y, Yao J. A fully automatic surgical registration method for percutaneous abdominal puncture surgical navigation. Comput Biol Med 2021; 136:104663. [PMID: 34375903 DOI: 10.1016/j.compbiomed.2021.104663] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 07/12/2021] [Accepted: 07/17/2021] [Indexed: 01/16/2023]
Abstract
Surgical registration, which maps the surgical space onto the image space, plays an important role in surgical navigation: accurate registration helps surgeons efficiently locate surgical instruments. Complicated marker-based registration is highly accurate but time-consuming. Therefore, a high-precision, high-efficiency markerless surgical registration method requiring no human intervention is proposed. First, the surgical navigation system, which is based on a multi-vision system, is calibrated using a specially designed calibration board. When extracting the abdominal point cloud acquired by the structured light vision system, a constraint constructed from the Computed Tomography (CT) image filters out points in irrelevant areas to improve computational efficiency. The Coherent Point Drift (CPD) algorithm, based on a Gaussian Mixture Model (GMM), is applied to register the abdominal point cloud, which lacks distinctive surface features. To enhance the efficiency of the CPD algorithm, the system calibration result is first used for rough registration of the point cloud, and then a suitable point cloud pretreatment method and its parameters are determined through experiments. Finally, puncture simulation experiments were carried out on an abdominal phantom. The experimental results show that the proposed surgical registration method has high accuracy and efficiency, and has potential clinical application value.
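The rough-registration step that precedes the CPD refinement can be illustrated with a least-squares rigid fit between corresponding point sets. CPD itself is considerably more involved; the sketch below uses the classic Kabsch/SVD solution on synthetic clouds (all values invented), which is the standard closed-form answer to "find the rigid transform mapping one cloud onto another":

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst
    via the Kabsch/SVD method; a common rough-registration step."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))                # synthetic "CT" point cloud
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([10.0, -4.0, 2.5])
scanned = pts @ R_true.T + t_true             # synthetic scanned cloud

R, t = rigid_align(pts, scanned)
err = np.linalg.norm(pts @ R.T + t - scanned, axis=1).max()
```

With exact correspondences the residual is at machine precision; in practice the correspondences are unknown, which is what CPD (or ICP) resolves.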
Collapse
Affiliation(s)
- Jing Li
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
| | - Zongqian Deng
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
| | - Nanyan Shen
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China.
| | - Zhou He
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
| | - Lanyun Feng
- Department of Integrative Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Yingjie Li
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
| | - Jia Yao
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
| |
Collapse
|
13
|
Li W, Fan J, Li S, Tian Z, Zheng Z, Ai D, Song H, Yang J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front Neurorobot 2021; 15:636772. [PMID: 34054454 PMCID: PMC8160243 DOI: 10.3389/fnbot.2021.636772] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 04/13/2021] [Indexed: 11/13/2022] Open
Abstract
Three-dimensional scanners have been widely applied in image-guided surgery (IGS) given their potential to solve the image-to-patient registration problem. Performing a reliable calibration between a 3D scanner and an external tracker is especially important for these applications. This study proposes a novel method for calibrating the extrinsic parameters of a 3D scanner in the coordinate system of an optical tracker. We bound an optical marker to a 3D scanner and designed a dedicated 3D benchmark for calibration. We then proposed a two-step calibration method based on the point-set registration technique and a nonlinear optimization algorithm to obtain the extrinsic matrix of the 3D scanner, using the repeat scan registration error (RSRE) as the cost function in the optimization process. Subsequently, we evaluated the performance of the proposed method on a recaptured verification dataset through RSRE and Chamfer distance (CD). In comparison with the calibration method based on a 2D checkerboard, the proposed method achieved a lower RSRE (1.73 mm vs. 2.10, 1.94, and 1.83 mm) and CD (2.83 mm vs. 3.98, 3.46, and 3.17 mm). We also constructed a surgical navigation system to further explore the application of the tracked 3D scanner in image-to-patient registration, and conducted a phantom study to verify the accuracy of the proposed method and analyze the relationship between the calibration accuracy and the target registration error (TRE). The proposed scanner-based image-to-patient registration method was also compared with the fiducial-based method, with TRE and operation time (OT) used to evaluate the registration results. The proposed registration method achieved improved registration efficiency (50.72 ± 6.04 vs. 212.97 ± 15.91 s in the head phantom study). Although the TRE of the proposed registration method met clinical requirements, its accuracy was lower than that of the fiducial-based registration method (1.79 ± 0.17 mm vs. 0.92 ± 0.16 mm in the head phantom study). We summarize and analyze the limitations of the scanner-based image-to-patient registration method and discuss its possible development.
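Target registration error, the accuracy measure used throughout the study above, is simply the residual distance at target points (not used to estimate the registration) after mapping them through the estimated transform. A minimal sketch, with a hypothetical registration that is off by 1 mm in translation:

```python
import numpy as np

def target_registration_error(R, t, targets_patient, targets_image):
    """TRE: residual distance at targets after mapping patient-space
    targets into image space with the estimated transform (R, t)."""
    mapped = targets_patient @ R.T + t
    return np.linalg.norm(mapped - targets_image, axis=1)

# Hypothetical estimated registration, 1 mm off along x:
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
targets_patient = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
targets_image = targets_patient.copy()        # ground truth: identity map

tre = target_registration_error(R, t, targets_patient, targets_image)
mean_tre = tre.mean()                         # 1.0 mm at both targets
```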
Collapse
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Zhaorui Tian
- Ariemedi Medical Technology (Beijing) CO., LTD., Beijing, China
| | - Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
| | - Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| |
Collapse
|
14
|
Fan Y, Yao X, Xu X. A robust automated surface-matching registration method for neuronavigation. Med Phys 2020; 47:2755-2767. [PMID: 32187386 DOI: 10.1002/mp.14145] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Revised: 02/20/2020] [Accepted: 03/07/2020] [Indexed: 11/10/2022] Open
Abstract
PURPOSE The surface-matching registration method in current neuronavigation completes the coarse registration mainly through manually selected anatomical landmarks, which increases the registration time, makes automatic registration impossible, and sometimes results in mismatch. A fast, accurate, and automatic spatial registration method for patient-to-image registration may be more practical. METHODS A coarse-to-fine spatial registration method was proposed to automatically register the patient space to the image space without placing any markers on the head of the patient. Three-dimensional (3D) keypoints were extracted by the 3D Harris corner detector from the point clouds in the patient and image spaces, and used as input to the 4-points congruent sets (4PCS) algorithm, which automatically registered the keypoints in the patient space with those in the image space without any assumptions about initial alignment. Coarsely aligned point clouds in the patient and image spaces were then fine-registered with a variant of the iterative closest point (ICP) algorithm. Two experiments were designed based on one phantom and five patients to validate the efficiency and effectiveness of the proposed method. RESULTS Keypoints were extracted within 7.0 s with a minimum threshold of 0.001. In the phantom experiment, the mean target registration error (TRE) of 15 targets on the surface of the elastic phantom across the five experiments was 1.17 ± 0.04 mm, and the average registration time was 17.4 s. In the clinical experiments, the mean TREs of the targets on the five patients' head surfaces were 1.70 ± 0.32 mm, 1.83 ± 0.38 mm, 1.64 ± 0.3 mm, 1.67 ± 0.35 mm, and 1.72 ± 0.31 mm, respectively, and the average registration time was 21.4 s. Compared with the method based only on the 4PCS and ICP algorithms and with the current clinical method, the proposed method has an obvious speed advantage while ensuring registration accuracy. CONCLUSIONS The proposed method greatly improves the registration speed while guaranteeing equivalent or higher registration accuracy, and avoids a tedious manual process for the coarse registration.
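The fine-registration stage described above can be illustrated with a minimal point-to-point ICP loop: repeatedly match each source point to its nearest destination point, then solve the best rigid fit for those matches. This is a bare-bones sketch (k-d tree matching plus a rigid SVD fit), not the authors' specific variant; the clouds and the small perturbation standing in for the residual after coarse 4PCS alignment are synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: alternate nearest-neighbour
    matching with a closed-form rigid fit to the matches."""
    cur = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(cur)              # nearest-neighbour matches
        matched = dst[idx]
        c_s, c_d = cur.mean(0), matched.mean(0)
        H = (cur - c_s).T @ (matched - c_d)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_d - R @ c_s
        cur = cur @ R.T + t                   # apply the increment
    return cur

rng = np.random.default_rng(1)
dst = rng.normal(size=(200, 3))
theta = np.deg2rad(5)                         # small residual misalignment
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ R0.T + np.array([0.1, -0.05, 0.02])

aligned = icp(src, dst)
resid = np.linalg.norm(aligned - dst, axis=1).mean()
```

ICP only converges from a reasonable initial alignment, which is exactly why the coarse 4PCS step precedes it.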
Collapse
Affiliation(s)
- Yifeng Fan
- School of Medical Imaging, Hangzhou Medical College, Hangzhou, PR China
| | - Xufeng Yao
- College of Medical Imaging, Shanghai University of Medicine & Healthy Science, Shanghai, PR China
| | - Xiufang Xu
- School of Medical Imaging, Hangzhou Medical College, Hangzhou, PR China
| |
Collapse
|
15
|
Bow H, Yang X, Chotai S, Feldman M, Yu H, Englot DJ, Miga MI, Pruthi S, Dawant BM, Parker SL. Initial Experience with Using a Structured Light 3D Scanner and Image Registration to Plan Bedside Subdural Evacuating Port System Placement. World Neurosurg 2020; 137:350-356. [PMID: 32032785 DOI: 10.1016/j.wneu.2020.01.203] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2019] [Revised: 01/26/2020] [Accepted: 01/27/2020] [Indexed: 10/25/2022]
Abstract
BACKGROUND Chronic subdural hematoma evacuation can be achieved in select patients through bedside placement of the Subdural Evacuation Port System (SEPS; Medtronic, Inc., Dublin, Ireland). This procedure involves drilling a burr hole at the thickest part of the hematoma. Identifying this location is often difficult, given the variable tilt of available imaging and distant anatomic landmarks. This paper evaluates the feasibility and accuracy of a bedside navigation system that relies on visible-light-based 3-dimensional (3D) scanning and image registration to a pre-procedure computed tomography scan. The information provided by this system may increase the accuracy of the burr hole location. METHODS In Part 1, the accuracy of this system was evaluated using a rigid 3D-printed phantom head with implanted fiducials. In Part 2, the navigation system was tested on 3 patients who underwent SEPS placement. RESULTS The registration error of this system was less than 2.5 mm when tested on the rigid 3D-printed phantom head. Fiducials located in the posterior aspect of the head were difficult to capture reliably. For the 3 patients who underwent 5 SEPS placements, the distance between the anticipated SEPS burr hole location based on registration and the actual burr hole location was less than 1 cm. CONCLUSIONS A bedside cranial navigation system based on 3D scanning and image registration has been introduced. Such a system may increase the success rate of bedside procedures such as SEPS placement. However, technical challenges such as the ability to scan hair and practical challenges such as minimization of patient movement during scans must be overcome.
Collapse
Affiliation(s)
- Hansen Bow
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
| | - Xiaochen Yang
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Silky Chotai
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Michael Feldman
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Hong Yu
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Dario J Englot
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Michael I Miga
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Sumit Pruthi
- Department of Radiology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Benoit M Dawant
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Scott L Parker
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| |
Collapse
|
16
|
Lee S, Shim S, Ha HG, Lee H, Hong J. Simultaneous Optimization of Patient-Image Registration and Hand-Eye Calibration for Accurate Augmented Reality in Surgery. IEEE Trans Biomed Eng 2020; 67:2669-2682. [PMID: 31976878 DOI: 10.1109/tbme.2020.2967802] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Augmented reality (AR) navigation using a position sensor in endoscopic surgery relies on the quality of patient-image registration and hand-eye calibration. Conventional methods collect the necessary data and compute the two output transformation matrices separately. However, the AR display setting during surgery generally differs from that during preoperative processes. Although conventional methods can identify optimal solutions under initial conditions, AR display errors are unavoidable during surgery owing to the inherent computational complexity of AR processes, such as error accumulation over successive matrix multiplications and tracking errors of the position sensor. METHODS We propose the simultaneous optimization of patient-image registration and hand-eye calibration in an AR environment before surgery. The relationship between the endoscope and a virtual object to be overlaid is first calculated using an endoscopic image, which also functions as a reference during optimization. After including the tracking information from the position sensor, patient-image registration and hand-eye calibration are optimized in a least-squares sense. RESULTS Experiments with synthetic data verify that the proposed method is less sensitive to computation and tracking errors. A phantom experiment with a position sensor is also conducted, in which the accuracy of the proposed method is significantly higher than that of the conventional method. CONCLUSION The AR accuracy of the proposed method is compared with those of conventional methods, and the superiority of the proposed method is verified. SIGNIFICANCE This study demonstrates that the proposed method has substantial potential for improving AR navigation accuracy.
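Optimizing transforms against an image reference, as described above, can be sketched as pose refinement by minimizing reprojection error in a least-squares sense. The pinhole model, focal length, and point sets below are invented stand-ins, and this sketch refines a single pose rather than the paper's joint registration/hand-eye formulation; it shows only the general mechanism:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points, rvec, t, f=800.0):
    """Pinhole projection of 3D points posed by rotation vector rvec
    and translation t (camera looks down +z)."""
    cam = points @ Rotation.from_rotvec(rvec).as_matrix().T + t
    return f * cam[:, :2] / cam[:, 2:3]

rng = np.random.default_rng(2)
obj = rng.uniform(-0.05, 0.05, size=(20, 3))   # virtual object points (m)
rvec_true = np.array([0.02, -0.01, 0.03])
t_true = np.array([0.01, 0.005, 0.3])          # ~30 cm in front of camera
img = project(obj, rvec_true, t_true)          # "endoscopic" reference

def residual(x):
    """2D overlay error for pose parameters x = [rvec, t]."""
    return (project(obj, x[:3], x[3:]) - img).ravel()

x0 = np.zeros(6)
x0[5] = 0.25                                   # rough initial guess
sol = least_squares(residual, x0)
pose_err = np.linalg.norm(sol.x - np.r_[rvec_true, t_true])
```

In the paper's setting the residual would additionally chain the tracked sensor poses through both unknown matrices, so that registration and hand-eye calibration are refined jointly against the same image reference.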
Collapse
|
17
|
Rameau A. Pilot study for a novel and personalized voice restoration device for patients with laryngectomy. Head Neck 2019; 42:839-845. [PMID: 31876090 DOI: 10.1002/hed.26057] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 11/06/2019] [Accepted: 12/10/2019] [Indexed: 01/04/2023] Open
Abstract
BACKGROUND The main modalities for voice restoration after laryngectomy are the electrolarynx and the tracheoesophageal puncture. Both have limitations, and new technologies may offer innovative alternatives via silent speech. OBJECTIVE To describe a novel and personalized method of voice restoration using machine learning applied to the electromyographic signal of articulatory muscles for the recognition of silent speech in a patient with total laryngectomy. METHODS Surface electromyographic (sEMG) signals of articulatory muscles were recorded from the face and neck of a patient with total laryngectomy while he articulated words silently. These sEMG signals were then used for automatic speech recognition via machine learning. Sensor placement was tailored to the patient's unique anatomy following radiation and surgery. A personalized wearable mask covering the sensors was designed using 3D scanning and 3D printing. RESULTS Using seven sEMG sensors on the patient's face and neck and two grounding electrodes, we recorded EMG data while he was mouthing "Tedd" and "Ed." With data from 75 utterances of each word, we discriminated the sEMG signals with 86.4% accuracy using an XGBoost machine-learning model. CONCLUSIONS This pilot study demonstrates the feasibility of sEMG-based alaryngeal speech recognition using tailored sensor placement and a personalized wearable device. Further refinement of this approach could allow translation of silently articulated speech into synthesized voiced speech via portable devices.
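A two-word classifier of the kind described (75 utterances per word, features from a 7-channel sEMG recording) can be sketched on synthetic data. scikit-learn's GradientBoostingClassifier stands in for XGBoost here, and the "features" are invented per-channel energies, not real sEMG:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def semg_features(n, word):
    """Synthetic per-utterance features (e.g. RMS energy per channel)
    for a 7-channel recording; the two 'words' differ in energy."""
    base = np.array([1.0, 0.8, 0.6, 0.9, 0.7, 0.5, 0.4])
    shift = 0.3 if word else 0.0
    return base + shift + rng.normal(0, 0.05, size=(n, 7))

X = np.vstack([semg_features(75, 0), semg_features(75, 1)])
y = np.repeat([0, 1], 75)                     # 0 = "Ed", 1 = "Tedd"

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)                     # held-out word accuracy
```

Real sEMG features are far less separable than this toy example, which is why the study's 86.4% accuracy on only 150 utterances is the noteworthy result.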
Collapse
Affiliation(s)
- Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medical College, Sean Parker Institute for the Voice, New York, New York
| |
Collapse
|
18
|
An Automatic Spatial Registration Method for Image-Guided Neurosurgery System. J Craniofac Surg 2019; 30:e344-e350. [PMID: 30817512 DOI: 10.1097/scs.0000000000005330] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
OBJECTIVE This study aimed to investigate the feasibility of an automatic, marker-free patient-to-image spatial registration method based on the 4-points congruent sets (4PCS) and iterative closest point (ICP) algorithms for an image-guided neurosurgery system (IGNS). METHODS A portable scanner was used to obtain a point cloud of the patient's entire head. The 4PCS algorithm, which is resilient to noise and outliers, automatically registered the point cloud in the patient space to the surface reconstructed from the patient's preoperative images in the image space without any assumptions about initial alignment. A variant of the ICP algorithm was then used to complete the fine registration. Experiments on two phantoms and 3 patients were performed to demonstrate the effectiveness of the proposed method. RESULTS In the phantom experiments, the mean target registration errors of 15 targets on the surfaces of the rigid and elastic phantoms were 1.02 ± 0.18 mm and 1.27 ± 0.36 mm, respectively. In the clinical experiments, the mean target registration errors of 7 targets on the first, second and third patients' heads were 1.88 ± 0.19 mm, 1.84 ± 0.19 mm, and 1.89 ± 0.18 mm, respectively, which was sufficient to meet clinical requirements. The registration accuracy and registration time using the proposed method are better than those using manual coarse registration followed by automatic fine registration. CONCLUSIONS It is feasible to use the automatic spatial registration method based on the 4PCS and ICP algorithms for the IGNS. Moreover, it can replace the spatial registration method based on manually selected anatomical landmarks combined with automatic fine registration in currently used IGNS.
Collapse
|
19
|
Regional-surface-based registration for image-guided neurosurgery: effects of scan modes on registration accuracy. Int J Comput Assist Radiol Surg 2019; 14:1303-1315. [PMID: 31055765 DOI: 10.1007/s11548-019-01990-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 04/24/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE The conventional surface-based method registers only the facial zone with the preoperative point cloud, resulting in low accuracy away from the facial area. Acquiring a point cloud of the entire head for registration can improve registration accuracy in all parts of the head, but collecting it takes a long time. It may be more practical to selectively scan part of the head to ensure high registration accuracy in the surgical area of interest. In this study, we investigate the effects of different scan regions on registration errors in different target areas when using a surface-based registration method. METHODS We first evaluated the correlation between the laser scan resolution and registration accuracy to determine an appropriate scan resolution. Then, with the appropriate resolution, we explored the effects of scan modes on registration error in computer simulation experiments, phantom experiments and two clinical cases. The scan modes were designed based on different combinations of five zones of the head surface, i.e., the sphenoid-frontal zone, parietal zone, left temporal zone, right temporal zone and occipital zone. In the phantom experiment, a handheld scanner was used to acquire a point cloud of the head. A head model containing several tumors was designed, enabling us to calculate the target registration errors deep in the brain to evaluate the effect of regional-surface-based registration. RESULT The optimal scan modes for tumors located in the sphenoid-frontal, parietal and temporal areas are mode 4 (i.e., simultaneously scanning the sphenoid-frontal zone and the temporal zone), mode 4 and mode 6 (i.e., simultaneously scanning the sphenoid-frontal zone, the temporal zone and the parietal zone), respectively. For the tumor located in the occipital area, no mode was able to achieve reliable accuracy. CONCLUSION The results show that selecting an appropriate scan resolution and scan mode can achieve reliable accuracy for use in sphenoid-frontal, parietal and temporal area surgeries while effectively reducing the operation time.
Collapse
|
20
|
Lin Q, Cai K, Yang R, Xiao W, Huang J, Zhan Y, Zhuang J. Geometric calibration of markerless optical surgical navigation system. Int J Med Robot 2019; 15:e1978. [PMID: 30556944 DOI: 10.1002/rcs.1978] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Revised: 12/11/2018] [Accepted: 12/12/2018] [Indexed: 12/19/2022]
Abstract
BACKGROUND Patient-to-image registration is required for image-guided surgical navigation, but marker-based registration is time-consuming and subject to manual error. Markerless registration is an alternative solution that avoids these issues. METHODS This study designs a calibration board and proposes a geometric calibration method that simultaneously calibrates the near-infrared tracking and structured light components of the proposed optical surgical navigation system. RESULTS A planar board and a cylinder were used to evaluate the accuracy of the calibration. The mean error for the board experiment is 0.035 mm, and the diameter error for the cylinder experiment is 0.119 mm. A calibration board was reconstructed to evaluate the accuracy of the calibration, and the measured mean error is 0.012 mm. A head phantom was reconstructed and tracked by the proposed optical surgical navigation system, with a tracking error of less than 0.3 mm. CONCLUSIONS Experimental results show that the proposed method achieves high accessibility and accuracy and satisfies application requirements.
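A flatness error of the kind reported for the reconstructed calibration board can be estimated by fitting a plane to the reconstructed points via SVD and averaging the point-to-plane distances. A sketch with synthetic board points and an invented noise level, not the paper's data:

```python
import numpy as np

def plane_fit_error(points):
    """Fit a plane to reconstructed board points by SVD and return
    the mean absolute point-to-plane distance (flatness error)."""
    centered = points - points.mean(0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]                     # direction of least variance
    return np.abs(centered @ normal).mean()

rng = np.random.default_rng(4)
u, v = rng.uniform(-50, 50, (2, 500))   # board coordinates, mm
board = np.column_stack([u, v, np.zeros_like(u)])
board += rng.normal(0, 0.01, board.shape)  # ~10 µm reconstruction noise

err = plane_fit_error(board)            # on the order of 0.01 mm
```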
Collapse
Affiliation(s)
- Qinyong Lin
- School of Medicine, South China University of Technology, Guangzhou, China
| | - Ken Cai
- School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,College of Automation, Zhongkai University of Agriculture and Engineering, Guangzhou, China
| | - Rongqian Yang
- Department of Biomedical Engineering, South China University of Technology, Guangzhou, China.,School of Medicine, Yale University, New Haven, Connecticut.,Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou, China
| | - Weihu Xiao
- Department of Biomedical Engineering, South China University of Technology, Guangzhou, China
| | - Jinhua Huang
- Department of Minimally Invasive Interventional Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Yinwei Zhan
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China
| | - Jian Zhuang
- Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong General Hospital, Guangdong Academy of Medical Science, Guangzhou, China
| |
Collapse
|
21
|
Meng F, Zhai F, Zeng B, Ding H, Wang G. An automatic markerless registration method for neurosurgical robotics based on an optical camera. Int J Comput Assist Radiol Surg 2017; 13:253-265. [DOI: 10.1007/s11548-017-1675-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2017] [Accepted: 10/11/2017] [Indexed: 10/18/2022]
|
22
|
Liu Y, Song Z, Wang M. A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system. Comput Assist Surg (Abingdon) 2017; 22:319-325. [DOI: 10.1080/24699322.2017.1389411] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
Affiliation(s)
- Yinlong Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
| | - Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
| | - Manning Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
| |
Collapse
|
23
|
Shinkai H, Yamamoto M, Tatebe M, Iwatsuki K, Kurimoto S, Hirata H. Non-invasive volumetric analysis of asymptomatic hands using a 3-D scanner. PLoS One 2017; 12:e0182675. [PMID: 28796816 PMCID: PMC5552111 DOI: 10.1371/journal.pone.0182675] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2017] [Accepted: 07/21/2017] [Indexed: 11/24/2022] Open
Abstract
Hand swelling is one of the symptoms often seen in practice, but none of the available morphometric methods can quickly and efficiently quantify hand volume in an objective manner, and the current gold-standard volume measurement requires immersion in water, which can be difficult to use. Therefore, we aimed to analyze the accuracy of using 3-dimensional (3-D) scanning to measure hand volume. First, we compared the hand volume calculated using the 3-D scanner to that calculated from the conventional method among 109 volunteers to determine the reliability of 3-D measurements. We defined the beginning of the hand as the distal wrist crease, and 3-D forms of the hands were captured by the 3-D scanning system. Second, 238 volunteers (87 men, 151 women) with no disease or history of hand surgery underwent 3-D scanning. Data collected included age, height, weight, and shoe size. The wrist circumference (WC) and the distance between distal wrist crease and tip of middle finger (DDT) were measured. Statistical analyses were performed using linear regression to investigate the relationship between the hand volume and these parameters. In the first study, a significantly strong positive correlation was observed [R = 0.98] between the hand volume calculated via 3-D scanning and that calculated via the conventional method. In the second study, no significant differences between the volumes, WC or DDT of right and left hands were found. The correlations of hand volume with weight, WC, and DDT were strong. We created a formula to predict the hand volume using these parameters; these variables explained approximately 80% of the predicted volume. We confirmed that the new 3-D scanning method, which is performed without touching the hand and can record the form of the hand, yields an accurate volumetric analysis of an asymptomatic hand.
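The reported regression (hand volume predicted from weight, WC and DDT, with the predictors explaining roughly 80% of the variance) can be sketched with ordinary least squares on synthetic data. The generative coefficients, means and noise level below are invented to produce an R² in that ballpark, not fitted to the study's cohort:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 238
weight = rng.normal(65, 12, n)            # body weight, kg (invented)
wc = rng.normal(16.5, 1.5, n)             # wrist circumference, cm
ddt = rng.normal(18.5, 1.2, n)            # crease-to-fingertip, cm

# Hypothetical generative model: volume driven by the 3 predictors
volume = (2.5 * weight + 15.0 * wc + 8.0 * ddt - 300
          + rng.normal(0, 20, n))

X = np.column_stack([np.ones(n), weight, wc, ddt])   # design matrix
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
pred = X @ beta
r2 = 1 - (np.sum((volume - pred) ** 2)
          / np.sum((volume - volume.mean()) ** 2))   # ~0.8 by design
```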
Collapse
Affiliation(s)
- Hiroki Shinkai
- Department of Hand Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- * E-mail:
| | - Michiro Yamamoto
- Department of Hand Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Masahiro Tatebe
- Department of Hand Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Katsuyuki Iwatsuki
- Department of Hand Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Shigeru Kurimoto
- Department of Hand Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Hitoshi Hirata
- Department of Hand Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
| |
Collapse
|
24
|
A Surface-Based Spatial Registration Method Based on Sense Three-Dimensional Scanner. J Craniofac Surg 2017; 28:157-160. [PMID: 27941549 DOI: 10.1097/scs.0000000000003283] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
OBJECTIVE The purpose of this study was to investigate the feasibility of a surface-based registration method based on a low-cost, hand-held Sense three-dimensional (3D) scanner in an image-guided neurosurgery system. METHODS The scanner was calibrated beforehand and fixed on a tripod before registration. During registration, part of the head surface was first scanned and the spatial position of the adapter was recorded. The scanner was then taken off the tripod, and the entire head surface was scanned by moving the scanner around the patient's head. All scan points were aligned to the recorded spatial position to form a single point cloud of the head using the scanner's automatic mosaic function. The coordinates of the scan points were transformed from device space to adapter space by a calibration matrix, and then to patient space. A 2-step patient-to-image registration method was then performed to register the patient space to the image space. RESULTS The experimental results showed that the mean target registration error of 15 targets on the surface of the phantom was 1.61±0.09 mm. In a clinical experiment, the mean target registration error of 7 targets on the patient's head surface was 2.50±0.31 mm, which was sufficient to meet clinical requirements. CONCLUSIONS It is feasible to use the Sense 3D scanner for patient-to-image registration, and the low-cost Sense 3D scanner can replace the currently used scanner in the image-guided neurosurgery system.
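The coordinate-space chain described in this abstract (device space → adapter space via a calibration matrix, then → patient space via the tracked adapter pose) can be sketched with 4×4 homogeneous transforms. This is a minimal illustration, not the paper's implementation; the calibration matrix and adapter pose below are arbitrary placeholder values.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration matrix (device -> adapter), determined once beforehand
T_adapter_device = homogeneous(np.eye(3), np.array([10.0, 0.0, 5.0]))

# Hypothetical tracked pose of the adapter (adapter -> patient):
# a 30-degree rotation about z plus a translation
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T_patient_adapter = homogeneous(Rz, np.array([0.0, 50.0, 0.0]))

# A scan point in device coordinates, in homogeneous form
p_device = np.array([1.0, 2.0, 3.0, 1.0])

# Chain the transforms right-to-left: patient <- adapter <- device
p_patient = T_patient_adapter @ T_adapter_device @ p_device
print(np.round(p_patient[:3], 3))
```

The same pattern extends to the subsequent patient-to-image step: one more 4×4 matrix (obtained from the surface registration) is composed onto the left of the chain.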
Collapse
|
25
|
Wang D, Ma D, Wong ML, Wáng YXJ. Recent advances in surgical planning & navigation for tumor biopsy and resection. Quant Imaging Med Surg 2015; 5:640-8. [PMID: 26682133 DOI: 10.3978/j.issn.2223-4292.2015.10.03] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
This paper highlights recent advances in imaging technologies for surgical planning and navigation in tumor biopsy and resection, which require high-precision detection and characterization of lesion margins during preoperative planning and intraoperative navigation. Multimodality image-guided surgery platforms have brought great benefits to surgical planning and operative accuracy by registering data sets that carry information on morphology [X-ray, magnetic resonance (MR), computed tomography (CT)], functional connectivity [functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), resting-state fMRI], or molecular activity [positron emission tomography (PET)]. These image-guided platforms provide a correspondence between preoperative surgical planning and the intraoperative procedure. We envisage that the combination of advanced multimodal imaging, three-dimensional (3D) printing, and cloud computing will play increasingly important roles in the planning and navigation of surgery for tumor biopsy and resection in the coming years.
Collapse
Affiliation(s)
- Defeng Wang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Diya Ma
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Matthew Lun Wong
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Yì Xiáng J Wáng
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR, China
| |
Collapse
|