1
Sabelis JF, Schreurs R, Dubois L, Becking AG. Clinical validation of the virtual splint registration workflow for craniomaxillofacial surgery. Int J Oral Maxillofac Surg 2025:S0901-5027(25)00015-3. PMID: 39919959; DOI: 10.1016/j.ijom.2025.01.014.
Abstract
Accurate registration is vital to transfer the virtual surgical plan during surgery. The goal of this study was to present and clinically validate a virtual splint registration workflow. Ten dentate patients requiring revision surgery were included. The specific inclusion criterion for this study was the presence of at least two osteosynthesis screws on the orbital rim from a previous surgery. Dedicated orthognathic surgery software was used to fuse the maxillary dental scan with the computed tomography scan and to generate a dental splint, which was imported into the navigation software and augmented with fiducial markers. Registration points were indicated virtually and the augmented splint was three-dimensionally printed. Intraoperatively, the splint was fitted on the maxillary dentition and the fiducial markers were used for registration. The accuracy of the registration procedure was quantified by calculating the difference between the landmarks acquired by indicating the pre-existing osteosynthesis material with the navigation pointer and in the virtual planning software. After acquisition of the landmarks, the screws were removed and surgery proceeded according to plan. A median target registration error of 1.53 mm was found. The advantages of the virtual splint registration workflow are that it does not require extensive computer-aided design skills or repeated preoperative imaging, and that it is non-invasive.
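The target registration error quantified in this study reduces to the per-landmark Euclidean distance between each virtually planned landmark and the same landmark acquired with the navigation pointer. A minimal sketch in Python/NumPy, using invented screw-head coordinates rather than the study's data:

```python
import numpy as np

def target_registration_error(planned, measured):
    """Per-landmark Euclidean distance (mm) between virtually planned
    landmarks and those acquired with the navigation pointer."""
    planned = np.asarray(planned, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.linalg.norm(planned - measured, axis=1)

# Hypothetical osteosynthesis-screw landmarks (mm), one row per screw.
planned = [[10.0, 5.0, 2.0], [14.0, 7.5, 3.0], [9.0, 11.0, 4.0]]
measured = [[10.9, 5.4, 2.3], [14.5, 8.6, 3.1], [9.2, 12.1, 4.8]]

tre = target_registration_error(planned, measured)
median_tre = float(np.median(tre))  # the study reports the median TRE
```

The per-landmark distances make outliers visible, which is why a median rather than a mean is often reported.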
Affiliation(s)
- J F Sabelis
- Department of Oral and Maxillofacial Surgery, Amsterdam University Medical Centre (UMC), AMC, Academic Center for Dentistry Amsterdam (ACTA), University of Amsterdam, Amsterdam, the Netherlands.
- R Schreurs
- Department of Oral and Maxillofacial Surgery, Amsterdam University Medical Centre (UMC), AMC, Academic Center for Dentistry Amsterdam (ACTA), University of Amsterdam, Amsterdam, the Netherlands; Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre Nijmegen, Nijmegen, the Netherlands
- L Dubois
- Department of Oral and Maxillofacial Surgery, Amsterdam University Medical Centre (UMC), AMC, Academic Center for Dentistry Amsterdam (ACTA), University of Amsterdam, Amsterdam, the Netherlands
- A G Becking
- Department of Oral and Maxillofacial Surgery, Amsterdam University Medical Centre (UMC), AMC, Academic Center for Dentistry Amsterdam (ACTA), University of Amsterdam, Amsterdam, the Netherlands
2
Özbek Y, Bárdosi Z, Freysinger W. Noctopus: a novel device and method for patient registration and navigation in image-guided cranial surgery. Int J Comput Assist Radiol Surg 2024;19:2371-2380. PMID: 38748051; PMCID: PMC11607009; DOI: 10.1007/s11548-024-03135-w.
Abstract
PURPOSE A novel device and method (Noctopus) for patient registration and real-time surgical navigation is presented. With any tracking system technology and a patient-/target-specific registration marker configuration, a submillimetric target registration error (TRE) and highly precise application accuracy for single or multiple anatomical targets in image-guided neurosurgery or ENT surgery are achieved. METHODS The system exploits the advantages of marker-based registration and performs automated patient registration using four fiducial markers that are attached to the device and scanned with the patient. The best possible sensor/marker positions around the patient's head are determined for single or multiple regions of interest (targets) in the anatomy. Once brought to the predetermined positions, the device can be operated with any tracking system for registration purposes. RESULTS Targeting accuracy was evaluated quantitatively at various target positions on a phantom skull. The TRE was measured on individual targets using an electromagnetic tracking system. The overall averaged TRE was 0.22 ± 0.08 mm for intraoperative measurements. CONCLUSION An automated patient registration system using optimized patient-/target-specific marker configurations is proposed. High-precision, user-error-free intraoperative surgical navigation with a minimum number of registration markers and sensors is realized. Targeting accuracy is significantly improved in minimally invasive neurosurgical and ENT interventions.
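Marker-based registration of this kind boils down to a least-squares rigid fit between the fiducial positions in image space and the same markers localized by the tracking system. A sketch of that fit using the classic SVD (Kabsch/Horn) solution; the four marker coordinates below are invented for illustration:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    the SVD (Kabsch/Horn) solution used for marker-based registration."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # reflection guard: det(R) = +1
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Four hypothetical fiducial positions (mm) in image space, and the same
# markers as a tracking system would report them (known pose, for testing).
rng = np.random.default_rng(0)
markers_image = rng.uniform(-50.0, 50.0, size=(4, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -3.0, 12.0])
markers_tracked = markers_image @ R_true.T + t_true

R, t = rigid_register(markers_image, markers_tracked)
# fiducial registration error after the fit
fre = np.linalg.norm(markers_image @ R.T + t - markers_tracked, axis=1).mean()
```

Four well-spread, non-coplanar markers are the minimum for a well-conditioned fit, which is consistent with the four-marker configuration described above.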
Affiliation(s)
- Yusuf Özbek
- Medical University of Innsbruck, University ENT Clinic, Innsbruck, Austria.
- Zoltán Bárdosi
- Medical University of Innsbruck, University ENT Clinic, Innsbruck, Austria
3
Trumpour T, du Toit C, van Gaalen A, Park CKS, Rodgers JR, Mendez LC, Surry K, Fenster A. Three-dimensional trans-rectal and trans-abdominal ultrasound image fusion for the guidance of gynecologic brachytherapy procedures: a proof of concept study. Sci Rep 2024;14:18459. PMID: 39117682; PMCID: PMC11310523; DOI: 10.1038/s41598-024-69211-y.
Abstract
High dose-rate brachytherapy is a treatment technique for gynecologic cancers where intracavitary applicators are placed within the patient's pelvic cavity. To ensure accurate radiation delivery, localization of the applicator at the time of insertion is vital. This study proposes a novel method for acquiring, registering, and fusing three-dimensional (3D) trans-abdominal and 3D trans-rectal ultrasound (US) images for visualization of the pelvic anatomy and applicators during gynecologic brachytherapy. The workflow was validated using custom multi-modal pelvic phantoms and demonstrated during two patient procedures. Experiments were performed for three types of intracavitary applicators: ring-and-tandem, ring-and-tandem with interstitial needles, and tandem-and-ovoids. Fused 3D US images were registered to magnetic resonance (MR) and computed tomography (CT) images for validation. The target registration error (TRE) and fiducial localization error (FLE) were calculated to quantify the accuracy of our fusion technique. For both phantom and patient images, TRE and FLE across all modality registrations (3D US versus MR or CT) resulted in mean ± standard deviation of 4.01 ± 1.01 mm and 0.43 ± 0.24 mm, respectively. This work indicates proof of concept for conducting further clinical studies leveraging 3D US imaging as an accurate, accessible alternative to advanced modalities for localizing brachytherapy applicators.
Affiliation(s)
- Tiana Trumpour
- Department of Medical Biophysics, Western University, London, Canada.
- Robarts Research Institute, London, Canada.
- Alissa van Gaalen
- Department of Physics and Astronomy, University of Waterloo, Waterloo, Canada
- Claire K S Park
- Brigham and Women's Hospital and Dana-Farber Cancer Institute, Department of Radiation Oncology, Harvard Medical School, Boston, USA
- Jessica R Rodgers
- Department of Physics and Astronomy, University of Manitoba, Winnipeg, Canada
- Kathleen Surry
- Department of Medical Biophysics, Western University, London, Canada
- Verspeeten Family Cancer Centre, London, Canada
- Department of Oncology, Western University, London, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, Canada
- Robarts Research Institute, London, Canada
4
Al-Jaberi F, Moeskes M, Skalej M, Fachet M, Hoeschen C. 3D-visualization of segmented contacts of directional deep brain stimulation electrodes via registration and fusion of CT and FDCT. EJNMMI Rep 2024;8:17. PMID: 38872028; PMCID: PMC11286893; DOI: 10.1186/s41824-024-00208-6.
Abstract
OBJECTIVES 3D-visualization of the segmented contacts of directional deep brain stimulation (DBS) electrodes is desirable, since knowledge about the position of every segmented contact could shorten the timespan for electrode programming. CT cannot yield images fitting that purpose, whereas highly resolved flat detector computed tomography (FDCT) can accurately image the inner structure of the electrode. This study aims to demonstrate the applicability of image fusion of highly resolved FDCT and CT to produce highly resolved images that preserve anatomical context for subsequent fusion to preoperative MRI, with the eventual goal of displaying segmented contacts within their anatomical context in future studies. MATERIALS AND METHODS Retrospectively collected datasets from 15 patients who underwent bilateral directional DBS electrode implantation were used. After image analysis, a semi-automated 3D registration of CT and highly resolved FDCT followed by image fusion was performed. The registration accuracy was assessed by computing the target registration error. RESULTS Our work demonstrated the feasibility of highly resolved FDCT for visualizing segmented electrode contacts in 3D. Semi-automated image registration to CT was successfully implemented in all cases. Qualitative evaluation by two experts revealed good alignment of intracranial osseous structures. The mean target registration error, averaged over all patients and based on the assessments of two raters, was 4.16 mm. CONCLUSION Our work demonstrated the applicability of image fusion of highly resolved FDCT to CT as part of a potential workflow in which subsequent fusion to MRI places the electrodes in an anatomical context.
Affiliation(s)
- Fadil Al-Jaberi
- Chair of Medical Systems Technology, Institute for Medical Technology, Faculty of Electrical Engineering and Information Technology, Otto von Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany.
- Research Department, Missan Oil Company, Iraqi Ministry of Oil, Baghdad, Iraq.
- Matthias Moeskes
- Institute of Biometry and Medical Informatics, Medical Faculty, Otto von Guericke University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Martin Skalej
- Neuroradiology, Medical Faculty, Martin Luther University Halle-Wittenberg, Ernst-Grube-Straße 40, 06120, Halle, Germany
- Melanie Fachet
- Chair of Medical Systems Technology, Institute for Medical Technology, Faculty of Electrical Engineering and Information Technology, Otto von Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
- Christoph Hoeschen
- Chair of Medical Systems Technology, Institute for Medical Technology, Faculty of Electrical Engineering and Information Technology, Otto von Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
5
Taleb A, Leclerc S, Hussein R, Lalande A, Bozorg-Grayeli A. Registration of preoperative temporal bone CT-scan to otoendoscopic video for augmented-reality based on convolutional neural networks. Eur Arch Otorhinolaryngol 2024;281:2921-2930. PMID: 38200355; DOI: 10.1007/s00405-023-08403-0.
Abstract
PURPOSE Patient-to-image registration is a preliminary step required in surgical navigation based on preoperative images. Human intervention and fiducial markers hamper this task, as they are time-consuming and introduce potential errors. We aimed to develop a fully automatic 2D registration system for augmented reality in ear surgery. METHODS CT-scans and corresponding oto-endoscopic videos were collected from 41 patients (58 ears) undergoing ear examination (vestibular schwannoma before surgery, profound hearing loss requiring cochlear implant, suspicion of perilymphatic fistula, contralateral ears in cases of unilateral chronic otitis media). Two to four images were selected from each case. For the training phase, data from patients (75% of the dataset) and 11 cadaveric specimens were used. Tympanic membranes and malleus handles were contoured on both video images and CT-scans by expert surgeons. The algorithm used a U-Net network to detect the contours of the tympanic membrane and the malleus on both preoperative CT-scans and endoscopic video frames. The contours were then processed and registered through an iterative closest point algorithm. Validation was performed on 4 cases and testing on 6 cases. Registration error was measured by overlaying both images and measuring the average and Hausdorff distances. RESULTS The proposed registration method yielded a precision compatible with ear surgery: a 2D mean overlay error of 0.65 ± 0.60 mm for the incus and 0.48 ± 0.32 mm for the round window. The average Hausdorff distances for these two targets were 0.98 ± 0.60 mm and 0.78 ± 0.34 mm, respectively. An outlier case with higher errors (average Hausdorff distances of 2.3 mm for the incus and 1.5 mm for the round window) was observed, related to a high discrepancy between the projection angle of the reconstructed CT-scan and the video image. The maximum duration of the overall process was 18 s.
CONCLUSIONS A fully automatic 2D registration method based on a convolutional neural network was developed and applied to ear surgery. The method relied neither on external fiducial markers nor on human intervention for landmark recognition. It was fast, and its precision was compatible with ear surgery.
Affiliation(s)
- Ali Taleb
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France.
- Sarah Leclerc
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- Alain Lalande
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- Medical Imaging Department, Dijon University Hospital, 21000, Dijon, France
- Alexis Bozorg-Grayeli
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- ENT Department, Dijon University Hospital, 21000, Dijon, France
6
Ai L, Liu Y, Armand M, Kheradmand A, Martin-Gomez A. On the Fly Robotic-Assisted Medical Instrument Planning and Execution Using Mixed Reality. 2024 IEEE International Conference on Robotics and Automation (ICRA) 2024:13192-13199. DOI: 10.1109/icra57147.2024.10611515.
Affiliation(s)
- Letian Ai
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
- Yihao Liu
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
- Mehran Armand
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
- Amir Kheradmand
- Johns Hopkins School of Medicine, Department of Neurology and Department of Neuroscience, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Johns Hopkins University, Biomechanical- and Image-Guided Surgical Systems (BIGSS) Laboratory within LCSR, Baltimore, MD, USA
7
Shim S, Seo J. Robotic system for nasopharyngeal swab sampling based on remote center of motion mechanism. Int J Comput Assist Radiol Surg 2024;19:395-403. PMID: 37985641; DOI: 10.1007/s11548-023-03032-8.
Abstract
PURPOSE In this study, a robotic system is proposed for nasopharyngeal (NP) swab sampling with high safety and efficiency. Most existing swab-sampling robots have more than six degrees of freedom (DOFs). However, not all six DOFs are necessarily required for NP swab sampling. A high number of DOFs can cause safety problems, such as collisions between the robot and patient. METHOD We developed a new type of robot with four DOFs for NP swab sampling that consists of a two DOFs remote center of motion (RCM) mechanism, a two DOFs insertion mechanism, and a nostril support unit. With the nostril support unit, the robot no longer needs to adjust the insertion position of the swab. The proposed robot enables the insertion orientation and depth to be adjusted according to different postures or facial shapes of the subject. For intuitive and precise remote control of the robot, a dedicated master device for the RCM and a visual feedback system were developed. RESULT The effectiveness of the robotic system was demonstrated by repeatability, RCM accuracy, tracking accuracy, and in vitro phantom experiments. The average tracking error between the master device and the robot was less than 2 mm. The contact force exerted on the swab prior to reaching the nasopharynx was less than 0.04 N, irrespective of the phantom's pose. CONCLUSION This study confirmed that the RCM-based robotic system is effective and safe for NP swab sampling while using minimal DOFs.
Affiliation(s)
- Seongbo Shim
- Department of Medical Robotics, Korea Institute of Machinery and Materials, Daegu, 42994, South Korea
- Joonho Seo
- Department of Medical Robotics, Korea Institute of Machinery and Materials, Daegu, 42994, South Korea
8
Sun Y, Gu Y, Shi F, Liu J, Li G, Feng Q, Shen D. Coarse-to-fine registration and time-intensity curves constraint for liver DCE-MRI synthesis. Comput Med Imaging Graph 2024;111:102319. PMID: 38147798; DOI: 10.1016/j.compmedimag.2023.102319.
Abstract
Image registration plays a crucial role in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), used as a fundamental step for the subsequent diagnosis of benign and malignant tumors. However, the registration process encounters significant challenges due to the substantial intensity changes observed among different time points, resulting from the injection of contrast agents. Furthermore, previous studies have often overlooked the alignment of small structures, such as tumors and vessels. In this work, we propose a novel DCE-MRI registration framework that can effectively align the DCE-MRI time series. Specifically, our DCE-MRI registration framework consists of two steps, i.e., a de-enhancement synthesis step and a coarse-to-fine registration step. In the de-enhancement synthesis step, a disentanglement network separates DCE-MRI images into a content component representing the anatomical structures and a style component indicating the presence or absence of contrast agents. This step generates synthetic images where the contrast agents are removed from the original images, alleviating the negative effects of intensity changes on the subsequent registration process. In the registration step, we utilize a coarse registration network followed by a refined registration network. These two networks facilitate the estimation of both the coarse and refined displacement vector fields (DVFs) in a pairwise and groupwise registration manner, respectively. In addition, to enhance the alignment accuracy for small structures, a voxel-wise constraint is further conducted by assessing the smoothness of the time-intensity curves (TICs). Experimental results on liver DCE-MRI demonstrate that our proposed method outperforms state-of-the-art approaches, offering more robust and accurate alignment results.
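The voxel-wise constraint described above penalizes non-smooth time-intensity curves (TICs); a common way to express such a smoothness term is the mean squared second temporal difference. A toy sketch with arbitrary synthetic curves (an illustration of the idea, not the paper's exact loss):

```python
import numpy as np

def tic_smoothness(series):
    """Mean squared second temporal difference of time-intensity curves.
    `series` has shape (T, n_voxels); lower values mean smoother TICs."""
    d2 = series[2:] - 2.0 * series[1:-1] + series[:-2]
    return float(np.mean(d2 ** 2))

t = np.linspace(0.0, 1.0, 10)
smooth = np.outer(t, np.ones(4))  # perfectly linear TICs for 4 voxels
noisy = smooth + np.random.default_rng(1).normal(0.0, 0.2, smooth.shape)
```

A well-registered DCE-MRI series yields smoothly varying TICs, so minimizing this term encourages temporally consistent alignment of small structures.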
Affiliation(s)
- Yuhang Sun
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Yuning Gu
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jiameng Liu
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Guoqiang Li
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Dinggang Shen
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
9
Campisi BM, Costanzo R, Gulino V, Avallone C, Noto M, Bonosi L, Brunasso L, Scalia G, Iacopino DG, Maugeri R. The Role of Augmented Reality Neuronavigation in Transsphenoidal Surgery: A Systematic Review. Brain Sci 2023;13:1695. PMID: 38137143; PMCID: PMC10741598; DOI: 10.3390/brainsci13121695.
Abstract
In the field of minimally invasive neurosurgery, microscopic transsphenoidal surgery (MTS) and endoscopic transsphenoidal surgery (ETS) have been widely accepted as a safe approach for pituitary lesions and, more recently, their indications have been extended to lesions at various skull base regions. During transsphenoidal surgery (TS) it is mandatory to identify key anatomical landmarks in the sphenoid sinus and distinguish them from the lesion. Over the years, many intraoperative tools have been introduced to improve neuronavigation systems, aiming to achieve safer and more accurate neurosurgical interventions. However, traditional neuronavigation systems may lose real-time localization accuracy due to the discrepancy between the actual surgical field and the preoperative 2D images. To deal with this, augmented reality (AR), a sophisticated 3D technology that superimposes computer-generated virtual objects onto the user's view of the real world, has been considered a promising tool. Particularly in the field of TS, AR can minimize the anatomic challenges of traditional endoscopic or microscopic surgery, aiding in surgical training, preoperative planning and intraoperative orientation. The aim of this systematic review is to analyze the potential future role of augmented reality in both endoscopic and microscopic transsphenoidal surgeries.
Affiliation(s)
- Benedetta Maria Campisi
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Roberta Costanzo
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Vincenzo Gulino
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Chiara Avallone
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Manfredi Noto
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Lapo Bonosi
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Lara Brunasso
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Gianluca Scalia
- Neurosurgery Unit, Department of Head and Neck Surgery, Garibaldi Hospital, 95122 Catania, Italy
- Domenico Gerardo Iacopino
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
- Rosario Maugeri
- Neurosurgical Clinic, AOUP “Paolo Giaccone”, Post Graduate Residency Program in Neurologic Surgery, Department of Biomedicine Neurosciences and Advanced Diagnostics, School of Medicine, University of Palermo, 90127 Palermo, Italy
10
Liang X, Lin S, Liu F, Schreiber D, Yip M. ORRN: An ODE-Based Recursive Registration Network for Deformable Respiratory Motion Estimation With Lung 4DCT Images. IEEE Trans Biomed Eng 2023;70:3265-3276. PMID: 37279120; DOI: 10.1109/tbme.2023.3280463.
Abstract
OBJECTIVE Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data. Recent deep learning methods have shown promising accuracy and speedup for registering a pair of medical images. However, in 4D (3D + time) medical data, organ motion such as respiratory motion and heart beating cannot be effectively modeled by pair-wise methods, which are optimized for image pairs and do not consider the organ motion patterns present in 4D data. METHODS This article presents ORRN, an Ordinary Differential Equation (ODE)-based recursive image registration network. The network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data. It adopts a recursive registration strategy to progressively estimate a deformation field through ODE integration of the voxel velocities. RESULTS We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme inhale image for 3D+t deformation tracking, and 2) registering extreme exhale to inhale phase images. Our method outperforms other learning-based methods in both tasks, producing the smallest target registration errors of 1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and the computation time is less than 1 s per CT volume. CONCLUSION ORRN demonstrates promising registration accuracy, deformation plausibility, and computational efficiency on group-wise and pair-wise registration tasks. SIGNIFICANCE It has significant implications in enabling fast and accurate respiratory motion estimation for treatment planning in radiation therapy or robot motion planning in thoracic needle insertion.
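Stripped to its core, this recursive strategy integrates a (possibly time-varying) voxel velocity field through the ODE dx/dt = v(x, t) to obtain a displacement field. A minimal forward-Euler sketch of that integration step (a generic illustration, not the ORRN network itself):

```python
import numpy as np

def integrate_velocity(velocity_fn, points, t0=0.0, t1=1.0, n_steps=20):
    """Forward-Euler integration of dx/dt = v(x, t): advect each point
    through the velocity field and return the resulting displacements."""
    start = np.asarray(points, dtype=float)
    pts = start.copy()
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        pts = pts + dt * velocity_fn(pts, t)
        t += dt
    return pts - start  # displacement phi(x) - x at the input points

# Toy stand-in for a learned velocity field: uniform 3 mm drift in z.
def drift(pts, t):
    v = np.zeros_like(pts)
    v[:, 2] = 3.0
    return v

points = np.zeros((5, 3))
disp = integrate_velocity(drift, points)
```

Integrating smooth velocities, rather than predicting displacements directly, is what keeps the resulting deformation nearly fold-free, consistent with the low folding rate reported above.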
11
Stevens RRF, Hazelaar C, Fast MF, Mandija S, Grehn M, Cvek J, Knybel L, Dvorak P, Pruvot E, Verhoeff JJC, Blanck O, van Elmpt W. Stereotactic Arrhythmia Radioablation (STAR): Assessment of cardiac and respiratory heart motion in ventricular tachycardia patients - A STOPSTORM.eu consortium review. Radiother Oncol 2023;188:109844. PMID: 37543057; DOI: 10.1016/j.radonc.2023.109844.
Abstract
AIM To identify the optimal STereotactic Arrhythmia Radioablation (STAR) strategy for individual patients, cardiorespiratory motion of the target volume needs to be evaluated in combination with different treatment methodologies. However, an authoritative overview of the amount of cardiorespiratory motion in ventricular tachycardia (VT) patients is missing. METHODS In this STOPSTORM consortium study, we performed a literature review to gain insight into cardiorespiratory motion of target volumes for STAR. Motion data and target volumes were extracted and summarized. RESULTS Of the 232 studies screened, 56 provided data on cardiorespiratory motion, of which 8 provided motion amplitudes in VT patients (n = 94) and 10 described (cardiac/cardiorespiratory) internal target volumes (ITVs) obtained in VT patients (n = 59). Average cardiac motion of target volumes was < 5 mm in all directions, with maximum values of 8.0, 5.2 and 6.5 mm in the superior-inferior (SI), left-right (LR) and anterior-posterior (AP) directions, respectively. Cardiorespiratory motion of cardiac (sub)structures showed average motion between 5 and 8 mm in the SI direction, whereas LR and AP motions were comparable to the cardiac motion of the target volumes. Cardiorespiratory ITVs were on average 120-284% of the gross target volume. Healthy subjects showed average cardiorespiratory motion of 10-17 mm in the SI and 2.4-7 mm in the AP direction. CONCLUSION This review suggests that, despite growing numbers of patients being treated, detailed data on cardiorespiratory motion for STAR are still limited. Moreover, comparison between studies is difficult due to inconsistency in the parameters reported. Cardiorespiratory motion is highly patient-specific, even under motion-compensation techniques. Therefore, individual motion management strategies during imaging, planning, and treatment for STAR are highly recommended.
Affiliation(s)
- Raoul R F Stevens
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, the Netherlands.
- Colien Hazelaar
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Martin F Fast
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Stefano Mandija
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Melanie Grehn
- Department of Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel, Germany
- Jakub Cvek
- Department of Oncology, University Hospital and Faculty of Medicine, Ostrava, Czech Republic
- Lukas Knybel
- Department of Oncology, University Hospital and Faculty of Medicine, Ostrava, Czech Republic
- Pavel Dvorak
- Department of Oncology, University Hospital and Faculty of Medicine, Ostrava, Czech Republic
- Etienne Pruvot
- Heart and Vessel Department, Service of Cardiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Joost J C Verhoeff
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Oliver Blanck
- Department of Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel, Germany
- Wouter van Elmpt
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, the Netherlands
12
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637 DOI: 10.1016/j.media.2023.102757] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 01/05/2023] [Accepted: 01/18/2023] [Indexed: 01/22/2023]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main driver of the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy covering use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, for whom AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature and pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria.
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
13
Baum ZMC, Hu Y, Barratt DC. Meta-Learning Initializations for Interactive Medical Image Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:823-833. [PMID: 36322502 PMCID: PMC7614355 DOI: 10.1109/tmi.2022.3218147] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
We present a meta-learning framework for interactive medical image registration. Our proposed framework comprises three components: a learning-based medical image registration algorithm, a form of user interaction that refines registration at inference, and a meta-learning protocol that learns a rapidly adaptable network initialization. This paper describes a specific algorithm that implements the registration, interaction and meta-learning protocol for our exemplar clinical application: registration of magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled transrectal ultrasound (TRUS) images. Our approach obtains comparable registration error (4.26 mm) to the best-performing non-interactive learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the data, and runs in real time during acquisition. Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration, which may be applied intraoperatively given the real-time nature of the adaptation process.
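The core idea above, an initialization meta-learned so that a few inner-loop gradient steps suffice at inference, can be illustrated with a Reptile-style toy in a single parameter (a scalar translation standing in for the network weights; all names and values here are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt(t0, target, steps=3, lr=0.2):
    """Inner loop: a few gradient steps on the task loss (t - target)^2,
    standing in for refining a registration network on sparse intra-op data."""
    t = t0
    for _ in range(steps):
        t -= lr * 2.0 * (t - target)       # gradient of (t - target)^2
    return t

# Meta-training (Reptile-style): each "patient" task has a true offset near 3.0.
t_init, meta_lr = 0.0, 0.1
for _ in range(500):
    target = rng.normal(3.0, 0.3)
    t_init += meta_lr * (adapt(t_init, target) - t_init)

# The learned initialization lands near the task-distribution mean, so only a
# few adaptation steps are needed per new "patient" at inference time.
```

The design choice mirrors the paper's motivation: adaptation from a meta-learned initialization converges much faster than adaptation from scratch, which is what makes real-time interactive refinement plausible.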
Affiliation(s)
- Zachary M. C. Baum
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
- Yipeng Hu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
- Dean C. Barratt
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
14
Pérez de Frutos J, Pedersen A, Pelanis E, Bouget D, Survarachakan S, Langø T, Elle OJ, Lindseth F. Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation. PLoS One 2023; 18:e0282110. [PMID: 36827289 PMCID: PMC9956065 DOI: 10.1371/journal.pone.0282110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 02/08/2023] [Indexed: 02/25/2023] Open
Abstract
PURPOSE This study explores training strategies to improve convolutional neural network-based image-to-image deformable registration for abdominal imaging. METHODS Different training strategies, loss functions and transfer learning schemes were considered. Furthermore, an augmentation layer that generates artificial training image pairs on the fly was proposed, in addition to a loss layer that enables dynamic loss weighting. RESULTS Guiding registration using segmentations in the training step proved beneficial for deep-learning-based image registration. Fine-tuning the model pretrained on the brain MRI dataset for the abdominal CT dataset further improved performance on the latter application, removing the need for a large dataset to yield satisfactory performance. Dynamic loss weighting also marginally improved performance, all without impacting inference runtime. CONCLUSION Using simple concepts, we improved the performance of a commonly used deep image registration architecture, VoxelMorph. In future work, our framework, DDMR, should be validated on different datasets to further assess its value.
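Dynamic loss weighting can be realized in several ways; one common recipe, normalizing each loss term by a running estimate of its own scale, can be sketched as follows (a generic illustration, not necessarily the exact scheme used in DDMR):

```python
import numpy as np

class DynamicLossWeighter:
    """Keeps a per-term running mean and weights each term by the inverse of its
    own scale, so no single loss (e.g. image similarity vs. field smoothness
    vs. segmentation overlap) dominates training."""
    def __init__(self, n_terms, momentum=0.9):
        self.ema = np.ones(n_terms)
        self.momentum = momentum

    def __call__(self, losses):
        losses = np.asarray(losses, dtype=float)
        self.ema = self.momentum * self.ema + (1 - self.momentum) * losses
        w = 1.0 / np.maximum(self.ema, 1e-8)     # inverse-scale weights
        w = w / w.sum() * len(losses)            # normalize: weights average to 1
        return float((w * losses).sum())

weighter = DynamicLossWeighter(n_terms=3)
# e.g. (image similarity, deformation smoothness, Dice on guiding segmentations)
total = weighter([2.0, 0.02, 0.5])
```

Because the weights are recomputed from running statistics at training time only, a scheme like this adds no cost at inference, consistent with the abstract's claim of unchanged inference runtime.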
Affiliation(s)
- André Pedersen
- Department of Health Research, SINTEF, Trondheim, Norway
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Clinic of Surgery, St. Olavs hospital, Trondheim University Hospital, Trondheim, Norway
- David Bouget
- Department of Health Research, SINTEF, Trondheim, Norway
- Thomas Langø
- Department of Health Research, SINTEF, Trondheim, Norway
- Research Department, Future Operating Room, St. Olavs hospital, Trondheim University Hospital, Trondheim, Norway
- Ole-Jakob Elle
- The Intervention Centre, Oslo University Hospital, Oslo, Norway
- Frank Lindseth
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
15
Zhu X, Ding M, Zhang X. Free form deformation and symmetry constraint‐based multi‐modal brain image registration using generative adversarial nets. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2023. [DOI: 10.1049/cit2.12159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2023] Open
Affiliation(s)
- Xingxing Zhu
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
- Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
16
McDonald BA, Zachiu C, Christodouleas J, Naser MA, Ruschin M, Sonke JJ, Thorwarth D, Létourneau D, Tyagi N, Tadic T, Yang J, Li XA, Bernchou U, Hyer DE, Snyder JE, Bubula-Rehm E, Fuller CD, Brock KK. Dose accumulation for MR-guided adaptive radiotherapy: From practical considerations to state-of-the-art clinical implementation. Front Oncol 2023; 12:1086258. [PMID: 36776378 PMCID: PMC9909539 DOI: 10.3389/fonc.2022.1086258] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Accepted: 12/21/2022] [Indexed: 01/27/2023] Open
Abstract
MRI-linear accelerator (MR-linac) devices have been introduced into clinical practice in recent years and have enabled MR-guided adaptive radiation therapy (MRgART). However, by accounting for anatomical changes throughout radiation therapy (RT) and delivering different treatment plans at each fraction, adaptive radiation therapy (ART) highlights several challenges in terms of calculating the total delivered dose. Dose accumulation strategies-which typically involve deformable image registration between planning images, deformable dose mapping, and voxel-wise dose summation-can be employed for ART to estimate the delivered dose. In MRgART, plan adaptation on MRI instead of CT necessitates additional considerations in the dose accumulation process because MRI pixel values do not contain the quantitative information used for dose calculation. In this review, we discuss considerations for dose accumulation specific to MRgART and in relation to current MR-linac clinical workflows. We present a general dose accumulation framework for MRgART and discuss relevant quality assurance criteria. Finally, we highlight the clinical importance of dose accumulation in the ART era as well as the possible ways in which dose accumulation can transform clinical practice and improve our ability to deliver personalized RT.
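The dose accumulation strategy described, deformable registration, dose mapping, then voxel-wise summation, reduces in code to warping each fraction's dose into the reference anatomy and adding. A minimal sketch assuming deformation vector fields (DVFs) given in voxel units (toy data, not a clinical implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(fraction_doses, dvfs):
    """Map each fraction dose back to the reference anatomy with its DVF,
    then sum voxel-wise. Sketch of the generic accumulation step; clinical use
    requires validated deformable registration and the QA criteria reviewed."""
    total = np.zeros_like(fraction_doses[0])
    grid = np.indices(total.shape).astype(float)
    for dose, dvf in zip(fraction_doses, dvfs):
        coords = grid + dvf                      # where each reference voxel maps to
        # order=1: trilinear interpolation of the fraction dose
        total += map_coordinates(dose, coords, order=1, mode="nearest")
    return total

# Toy example: two 2 Gy fractions with identity DVFs reduce to plain summation.
shape = (8, 8, 8)
doses = [np.full(shape, 2.0), np.full(shape, 2.0)]
dvfs = [np.zeros((3,) + shape), np.zeros((3,) + shape)]
total = accumulate_dose(doses, dvfs)             # 4 Gy everywhere
```

In practice the DVFs come from the MRgART deformable registration step, and interpolation order, mass/energy conservation and registration uncertainty all feed into the QA criteria the review discusses.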
Affiliation(s)
- Brigid A. McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Cornel Zachiu
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, Netherlands
- Mohamed A. Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Mark Ruschin
- Department of Radiation Oncology, University of Toronto, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Jan-Jakob Sonke
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, Netherlands
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tuebingen, Tuebingen, Germany
- Daniel Létourneau
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, United States
- Tony Tadic
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- X. Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Uffe Bernchou
- Laboratory of Radiation Physics, Department of Oncology, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Daniel E. Hyer
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA, United States
- Jeffrey E. Snyder
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA, United States
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kristy K. Brock
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
17
Reattachable fiducial skin marker for automatic multimodality registration. Int J Comput Assist Radiol Surg 2022; 17:2141-2150. [PMID: 35604488 PMCID: PMC9515062 DOI: 10.1007/s11548-022-02639-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 04/08/2022] [Indexed: 11/05/2022]
Abstract
Purpose
Fusing image information has become increasingly important for optimal diagnosis and treatment of the patient. Despite intensive research towards markerless registration approaches, fiducial marker-based methods remain the default choice for a wide range of applications in clinical practice. However, as especially non-invasive markers cannot be positioned reproducibly in the same pose on the patient, pre-interventional imaging has to be performed immediately before the intervention for fiducial marker-based registrations.
Methods
We propose a new non-invasive, reattachable fiducial skin marker concept for multi-modal registration approaches including the use of electromagnetic or optical tracking technologies. We furthermore describe a robust, automatic fiducial marker localization algorithm for computed tomography (CT) and magnetic resonance imaging (MRI) images. Localization of the new fiducial marker has been assessed for different marker configurations using both CT and MRI. Furthermore, we applied the marker in an abdominal phantom study. For this, we attached the marker at three poses to the phantom, registered ten segmented targets of the phantom’s CT image to live ultrasound images and determined the target registration error (TRE) for each target and each marker pose.
Results
Reattachment of the marker was possible with a mean precision of 0.02 mm ± 0.01 mm. Our algorithm successfully localized the marker automatically in all (n = 201) evaluated CT/MRI images. Depending on the marker pose, the mean (n = 10) TRE of the abdominal phantom study ranged from 1.51 ± 0.75 mm to 4.65 ± 1.22 mm.
Conclusions
The non-invasive, reattachable skin marker concept allows reproducible positioning of the marker and automatic localization in different imaging modalities. The low TREs indicate the potential applicability of the marker concept for clinical interventions, such as the puncture of abdominal lesions, where current image-based registration approaches still lack robustness and existing marker-based methods are often impractical.
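The fiducial-based registration and TRE evaluation described above follow a standard recipe: solve the least-squares rigid transform over the marker points, then measure residual distances at independent targets. A minimal numpy sketch with synthetic coordinates (all values illustrative):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch, no scaling) mapping planned
    fiducial positions `src` onto measured positions `dst`."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def target_registration_error(R, t, targets_img, targets_patient):
    """TRE: residual distance at targets NOT used to compute the transform."""
    mapped = targets_img @ R.T + t
    return np.linalg.norm(mapped - targets_patient, axis=1)

rng = np.random.default_rng(1)
fids_img = rng.uniform(0.0, 100.0, (6, 3))       # fiducials in image space (mm)
t_true = np.array([5.0, -3.0, 2.0])              # toy patient-space offset
fids_pat = fids_img + t_true + rng.normal(0.0, 0.1, (6, 3))  # tracked, noisy

R, t = rigid_register(fids_img, fids_pat)
targets_img = rng.uniform(0.0, 100.0, (5, 3))
errs = target_registration_error(R, t, targets_img, targets_img + t_true)
```

As in the phantom study, TRE grows with fiducial localization noise and with target distance from the fiducial centroid, which is why marker pose affected the reported values.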
18
Clinical study of skill assessment based on time sequential measurement changes. Sci Rep 2022; 12:6638. [PMID: 35459268 PMCID: PMC9033839 DOI: 10.1038/s41598-022-10502-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 04/07/2022] [Indexed: 11/08/2022] Open
Abstract
Endoscopic sinus surgery is a common procedure for chronic sinusitis; however, complications have been reported in some cases. Improving surgical outcomes requires an improvement in the surgeon's skills. In this study, we used surgical workflow analysis to automatically extract "errors," defined as large differences between procedures performed by experts and those performed by residents. First, we quantified surgical features using surgical log data, which contained surgical instrument information (e.g., tip position) and time stamps. Second, we created a surgical process model (SPM), which represents the temporal transition of the surgical features. Finally, we identified technical issues by creating an expert standard SPM and comparing it to the novice SPM. We verified the performance of our methods using the clinical data of 39 patients. In total, 303 portions were detected as errors and classified into six categories. Three risky operations were overlooked, and 11 errors were overdetected. We noted that most errors detected by our method involved dangerous maneuvers. The implementation of our methods for automatic detection of improvement points may therefore be advantageous, helping to reduce the time needed to review and improve surgical technique efficiently.
19
de Geer A, Brouwer de Koning S, van Alphen M, van der Mierden S, Zuur C, van Leeuwen F, Loeve A, van Veen R, Karakullukcu M. Registration methods for surgical navigation of the mandible: a systematic review. Int J Oral Maxillofac Surg 2022; 51:1318-1329. [PMID: 35165005 DOI: 10.1016/j.ijom.2022.01.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/18/2021] [Accepted: 01/26/2022] [Indexed: 12/20/2022]
20
Vizcarra JC, Burlingame EA, Hug CB, Goltsev Y, White BS, Tyson DR, Sokolov A. A community-based approach to image analysis of cells, tissues and tumors. Comput Med Imaging Graph 2022; 95:102013. [PMID: 34864359 PMCID: PMC8761177 DOI: 10.1016/j.compmedimag.2021.102013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 11/09/2021] [Accepted: 11/09/2021] [Indexed: 01/03/2023]
Abstract
Emerging multiplexed imaging platforms provide an unprecedented view of an increasing number of molecular markers at subcellular resolution and the dynamic evolution of tumor cellular composition. As such, they are capable of elucidating cell-to-cell interactions within the tumor microenvironment that impact clinical outcome and therapeutic response. However, the rapid development of these platforms has far outpaced the computational methods for processing and analyzing the data they generate. While being technologically disparate, all imaging assays share many computational requirements for post-collection data processing. As such, our Image Analysis Working Group (IAWG), composed of researchers in the Cancer Systems Biology Consortium (CSBC) and the Physical Sciences - Oncology Network (PS-ON), convened a workshop on "Computational Challenges Shared by Diverse Imaging Platforms" to characterize these common issues and a follow-up hackathon to implement solutions for a selected subset of them. Here, we delineate these areas that reflect major axes of research within the field, including image registration, segmentation of cells and subcellular structures, and identification of cell types from their morphology. We further describe the logistical organization of these events, believing our lessons learned can aid others in uniting the imaging community around self-identified topics of mutual interest, in designing and implementing operational procedures to address those topics and in mitigating issues inherent in image analysis (e.g., sharing exemplar images of large datasets and disseminating baseline solutions to hackathon challenges through open-source code repositories).
Affiliation(s)
- Juan Carlos Vizcarra
- Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, USA
- Erik A Burlingame
- Computational Biology Program, Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
- Clemens B Hug
- Laboratory of Systems Pharmacology, Harvard Program in Therapeutic Science, Boston, MA, USA
- Yury Goltsev
- Department of Microbiology & Immunology, Stanford University School of Medicine, Stanford, CA, USA
- Brian S White
- Computational Oncology, Sage Bionetworks, Seattle, WA, USA
- Darren R Tyson
- Department of Biochemistry, Vanderbilt University School of Medicine, Nashville, TN, USA
- Artem Sokolov
- Laboratory of Systems Pharmacology, Harvard Program in Therapeutic Science, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA.
21
Thiem DGE, Seifert L, Graef J, Al-Nawas B, Gielisch M, Kämmerer PW. Marker-free registration for intraoperative navigation using the transverse palatal rugae. Int J Med Robot 2021; 18:e2362. [PMID: 34972255 DOI: 10.1002/rcs.2362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Revised: 12/24/2021] [Accepted: 12/29/2021] [Indexed: 11/10/2022]
Abstract
BACKGROUND Registration, the matching of the coordinates of the actual patient space with those of the medical image, is the most important step in navigation-assisted surgery. Marker-based techniques generally require marker application with subsequent radiography. In the edentulous patient, marker-free methods are generally less accurate and reproducible. This new marker-free registration method uses the transverse palatal rugae as registration structures. METHODS (1) Segmentation of bone and hard palatal mucosa from initial 3D imaging (DICOM); (2) maxillary intraoral scan (IOS) with transfer to the 3D imaging using an iterative closest point (ICP) algorithm; (3) marking of digital registration points with holes within the IOS STL; (4) transformation of the spatially aligned IOS STL to a LabelMap and storage in DICOM (IOS-DICOM); (5) alignment of DICOM and IOS-DICOM; (6) controlled positioning of the digital registration points and clinical correlation. RESULTS The fiducial localization error (0.48 mm) and target registration error (0.65 mm) were comparable to those of tooth-supported registration methods. CONCLUSION This methodology is a promising approach to marker-free navigation-assisted surgery in the edentulous patient, potentially improving treatment by avoiding additional imaging and invasive marker insertion.
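Step (2) of the workflow, aligning the intraoral scan to the segmented palatal surface with an iterative closest point algorithm, can be sketched in a few lines (point-to-point ICP on synthetic clouds; real pipelines add outlier rejection and careful initialization):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Point-to-point ICP: match each source point (e.g. the IOS rugae surface)
    to its nearest target point (segmented palatal mucosa), solve the rigid
    fit, repeat until convergence."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                  # nearest-neighbor correspondences
        matched = dst[idx]
        cs, cm = cur.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (matched - cm))   # Kabsch step
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        Ri = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        ti = cm - Ri @ cs
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti                # accumulate overall transform
    return R, t, cur

rng = np.random.default_rng(2)
dst = rng.uniform(0.0, 50.0, (300, 3))            # "segmented mucosa" (toy, mm)
theta = 0.02                                      # small initial misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.3, -0.3, 0.2])     # perturbed "IOS" cloud
R, t, aligned = icp(src, dst)
```

ICP of this basic form only converges from a reasonable initial pose, which is why surface-based registration workflows typically pair it with a coarse pre-alignment.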
Affiliation(s)
- Thiem DGE
- Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, University Medical Centre Mainz, Augustusplatz 2, 55131, Mainz, Germany
- Seifert L
- Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, Goethe University Medical Centre Frankfurt, 60590, Frankfurt, Germany
- Graef J
- Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, University Medical Centre Mainz, Augustusplatz 2, 55131, Mainz, Germany
- Al-Nawas B
- Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, University Medical Centre Mainz, Augustusplatz 2, 55131, Mainz, Germany
- Gielisch M
- Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, University Medical Centre Mainz, Augustusplatz 2, 55131, Mainz, Germany
- Kämmerer PW
- Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, University Medical Centre Mainz, Augustusplatz 2, 55131, Mainz, Germany
22
Wang S, Celebi ME, Zhang YD, Yu X, Lu S, Yao X, Zhou Q, Miguel MG, Tian Y, Gorriz JM, Tyukin I. Advances in Data Preprocessing for Biomedical Data Fusion: An Overview of the Methods, Challenges, and Prospects. INFORMATION FUSION 2021; 76:376-421. [DOI: 10.1016/j.inffus.2021.07.001] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
23
Registration-free workflow for electromagnetic and optical navigation in orbital and craniofacial surgery. Sci Rep 2021; 11:18080. [PMID: 34508161 PMCID: PMC8433137 DOI: 10.1038/s41598-021-97706-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2020] [Accepted: 08/13/2021] [Indexed: 11/25/2022] Open
Abstract
The accuracy of intra-operative navigation is largely dependent on the intra-operative registration procedure. Next to accuracy, important factors to consider for the registration procedure are invasiveness, time consumption, logistical demands, user-dependency, compatibility and radiation exposure. In this study, a workflow is presented that eliminates the need for a registration procedure altogether: registration-free navigation. In the workflow, the maxillary dental model is fused to the pre-operative imaging data using commercially available virtual planning software. A virtual Dynamic Reference Frame on a splint is designed on the patient’s fused maxillary dentition: during surgery, the splint containing the reference frame is positioned on the patient’s dentition. This alleviates the need for any registration procedure, since the position of the reference frame is known from the design. The accuracy of the workflow was evaluated in a cadaver set-up and compared to bone-anchored fiducial, virtual splint and surface-based registration. The results showed that the accuracy of the workflow was greatly dependent on the tracking technique used: the workflow was the most accurate with electromagnetic tracking, but the least accurate with optical tracking. Although this method offers a time-efficient, non-invasive, radiation-free automatic alternative for registration, clinical implementation is hampered by the unexplained differences in accuracy between tracking techniques.
24
Barber SR. New Navigation Approaches for Endoscopic Lateral Skull Base Surgery. Otolaryngol Clin North Am 2021; 54:175-187. [PMID: 33243374 DOI: 10.1016/j.otc.2020.09.021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Image-guided navigation is well established for surgery of the brain and anterior skull base. Although navigation workstations have been used widely by neurosurgeons and rhinologists for decades, adoption in the lateral skull base (LSB) has lagged owing to the stricter requirement for overall accuracy of less than 1 mm in this region. Endoscopic approaches to the LSB facilitate minimally invasive surgeries with less morbidity, yet carry risks of injury to critical structures. With improvements in technology over the years, image-guided navigation for endoscopic LSB surgery can reduce operative time, optimize exposure for surgical corridors, and increase safety in difficult cases.
Affiliation(s)
- Samuel R Barber
- Department of Otolaryngology-Head and Neck Surgery, University of Arizona College of Medicine, 1501 North Campbell Avenue, Tucson, AZ 85724, USA.
25
A hybrid feature-based patient-to-image registration method for robot-assisted long bone osteotomy. Int J Comput Assist Radiol Surg 2021; 16:1507-1516. [PMID: 34176070 DOI: 10.1007/s11548-021-02439-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Accepted: 06/17/2021] [Indexed: 10/21/2022]
Abstract
PURPOSE The purpose of this study is to provide a simple, feasible and effective patient-to-image registration method for robot-assisted long bone osteotomy, which has rarely been systematically reported. The practical requirement is to achieve an accuracy of 1 mm or better without bone-implanted markers. METHODS A hybrid feature-based registration method termed CR-RAMSICP is proposed. Point-based coarse registration (CR) is accomplished using the optical retro-reflective markers attached to a tracked rigid body fixed outside the bone. In surface-based fine registration, an improved iterative closest point (ICP) algorithm based on a range-adaptive matching strategy (termed RAMSICP) is presented to achieve robust, precise matching between the asymmetric patient and image point clouds, avoiding convergence to a local minimum. RESULTS A series of registration experiments based on isolated porcine iliums were carried out. The results illustrate that CR-RAMSICP not only significantly outperforms CR and CR-ICP in accuracy and reproducibility, but also exhibits better robustness to CR errors and lower sensitivity to the distribution and number of fiducial points in the patient point cloud than CR-ICP. CONCLUSION The proposed registration method, CR-RAMSICP, can stably satisfy the desired registration accuracy without the use of bone-implanted markers such as fiducial screws. Besides, the RAMSICP algorithm used in fine registration is convenient to program because no complex metrics or models are involved.
|
26
|
Chen X, Yang F, Zhang Z, Bai B, Guo L. Robust surface-matching registration based on the structure information for image-guided neurosurgery system. J Mech Med Biol 2021. [DOI: 10.1142/s0219519421400091] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Image-to-patient space registration establishes accurate alignment between the actual operating space and the image space. Although paired-point image-to-patient registration is used in some image-guided neurosurgery systems, the current paired-point method has drawbacks and usually cannot achieve the best registration result. Therefore, surface-matching registration is proposed to solve this problem. This paper proposes a surface-matching method that accomplishes image-to-patient space registration automatically. We represent the surface point clouds by a Gaussian Mixture Model (GMM), which can smoothly approximate the probability density distribution of an arbitrary point set. We also use mutual information as the similarity measure between the point clouds and take into account the structure information of the points. To analyze the registration error, we introduce a method for estimating the Target Registration Error (TRE) by generating simulated data. In the experiments, we used point sets of the cranium surface and a model of the human head acquired by a CT and a laser scanner. The TRE was less than 2 mm, and accuracy was better in the frontal and posterior regions. Compared to the Iterative Closest Point algorithm, surface registration based on the GMM and the structure information of the points proved superior in robustness and in accurate implementation of image-to-patient registration.
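The simulated-data TRE estimation mentioned in this abstract can be sketched generically: perturb the fiducials with a Gaussian fiducial localization error (FLE), re-fit the rigid registration, and record the displacement at a target point. A minimal NumPy sketch, not the authors' implementation; the noise model and trial count are assumptions.

```python
import numpy as np

def fit_rigid(a, b):
    """SVD least-squares rigid transform mapping point set a -> b."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def simulate_tre(fiducials, target, fle_sigma, trials=2000, seed=0):
    """Monte-Carlo TRE: perturb fiducials with isotropic FLE, register, measure error at target."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        noisy = fiducials + rng.normal(scale=fle_sigma, size=fiducials.shape)
        R, t = fit_rigid(noisy, fiducials)     # register noisy measurement back to image space
        errs.append(np.linalg.norm(R @ target + t - target))
    return float(np.mean(errs))
```

Because the residual transform would be the identity under zero noise, the recorded displacement at the target isolates the registration error there rather than at the fiducials.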
Affiliation(s)
- Xinrong Chen
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, P. R. China
- Shanghai Key Laboratory of Medical Image, Computing and Computer Assisted Intervention, Shanghai 200032, P. R. China
| | - Fuming Yang
- Huashan Hospital, Fudan University, Shanghai 200040, P. R. China
| | - Ziqun Zhang
- Information Center, Fudan University, Shanghai 200433, P. R. China
| | - Baodan Bai
- School of Medical Instruments, Shanghai University of Medicine & Health Science, Shanghai 201318, P. R. China
| | - Lei Guo
- School of Business Administration, Shanghai Lixin University of Accounting and Finance, Shanghai 201620, P. R. China
| |
|
27
|
Virtual splint registration for electromagnetic and optical navigation in orbital and craniofacial surgery. Sci Rep 2021; 11:10406. [PMID: 34001966 PMCID: PMC8128880 DOI: 10.1038/s41598-021-89897-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 04/06/2021] [Indexed: 11/08/2022] Open
Abstract
In intra-operative navigation, a registration procedure is performed to register the patient's position to the pre-operative imaging data. The registration process is the main factor that determines the accuracy of the navigation feedback. In this study, a novel registration protocol for craniofacial surgery is presented that utilizes a virtual splint with marker points. The accuracy of the proposed method was evaluated by two observers in five human cadaver heads, for optical and electromagnetic navigation, and compared to maxillary bone-anchored fiducial registration (optical and electromagnetic) and surface-based registration (electromagnetic). The results showed minimal differences in accuracy compared to bone-anchored fiducials at the level of the infraorbital rim. Both point-based techniques had lower error estimates at the infraorbital rim than surface-based registration, but surface-based registration had the lowest loss of accuracy over target distance. An advantage over existing point-based registration methods (bone-anchored fiducials, existing splint techniques) is that radiological imaging does not need to be repeated, since the need for physical fiducials to be present in the image volume is eliminated. Other advantages include reduced invasiveness compared to bone-anchored fiducials and a possible reduction of human error in the registration process.
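The two error measures that recur throughout these registration studies are easy to state in code: FRE is the residual at the registration fiducials themselves, while TRE is measured at targets away from them (here, e.g., the infraorbital rim), and a low FRE does not by itself guarantee a low TRE. A minimal sketch of the textbook definitions:

```python
import numpy as np

def fre(registered_fiducials, image_fiducials):
    """Fiducial registration error: RMS distance between registered and image fiducials."""
    d2 = ((registered_fiducials - image_fiducials) ** 2).sum(axis=1)
    return float(np.sqrt(d2.mean()))

def tre(registered_targets, image_targets):
    """Target registration error: per-target distance at anatomy of clinical interest."""
    return np.linalg.norm(registered_targets - image_targets, axis=1)
```

Reporting TRE at increasing distances from the fiducials, as done in this study, exposes the loss of accuracy over target distance that FRE alone hides.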
|
28
|
Li W, Fan J, Li S, Tian Z, Zheng Z, Ai D, Song H, Yang J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front Neurorobot 2021; 15:636772. [PMID: 34054454 PMCID: PMC8160243 DOI: 10.3389/fnbot.2021.636772] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 04/13/2021] [Indexed: 11/13/2022] Open
Abstract
Three-dimensional scanners have been widely applied in image-guided surgery (IGS) given their potential to solve the image-to-patient registration problem. Performing a reliable calibration between a 3D scanner and an external tracker is especially important for these applications. This study proposes a novel method for calibrating the extrinsic parameters of a 3D scanner in the coordinate system of an optical tracker. We attached an optical marker to a 3D scanner and designed a dedicated 3D benchmark for calibration. We then proposed a two-step calibration method based on the point-set registration technique and a nonlinear optimization algorithm to obtain the extrinsic matrix of the 3D scanner, using the repeat scan registration error (RSRE) as the cost function in the optimization. Subsequently, we evaluated the performance of the proposed method on a recaptured verification dataset through RSRE and Chamfer distance (CD). In comparison with the calibration method based on a 2D checkerboard, the proposed method achieved a lower RSRE (1.73 mm vs. 2.10, 1.94, and 1.83 mm) and CD (2.83 mm vs. 3.98, 3.46, and 3.17 mm). We also constructed a surgical navigation system to further explore the application of the tracked 3D scanner in image-to-patient registration. We conducted a phantom study to verify the accuracy of the proposed method and analyze the relationship between the calibration accuracy and the target registration error (TRE). The proposed scanner-based image-to-patient registration method was also compared with the fiducial-based method, with TRE and operation time (OT) used to evaluate the registration results. The proposed registration method achieved improved registration efficiency (50.72 ± 6.04 vs. 212.97 ± 15.91 s in the head phantom study). Although the TRE of the proposed registration method met clinical requirements, its accuracy was lower than that of the fiducial-based registration method (1.79 ± 0.17 mm vs. 0.92 ± 0.16 mm in the head phantom study). We summarize and analyze the limitations of the scanner-based image-to-patient registration method and discuss its possible development.
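The Chamfer distance used above as an evaluation metric has a compact brute-force form. A minimal NumPy sketch of the symmetric variant (the paper's exact definition and any acceleration structures are not specified here):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance: average of mean nearest-neighbour distances a->b and b->a."""
    # pairwise distance matrix, shape (len(a), len(b)); O(n*m) memory, fine for small clouds
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

For large scans the pairwise matrix would be replaced by a spatial index (e.g., a k-d tree), but the metric itself is unchanged.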
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Zhaorui Tian
- Ariemedi Medical Technology (Beijing) CO., LTD., Beijing, China
| | - Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| | - Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
| | - Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
| |
|
29
|
Vagdargi P, Sheth N, Sisniega A, Uneri A, De Silva T, Osgood GM, Siewerdsen JH. Drill-mounted video guidance for orthopaedic trauma surgery. J Med Imaging (Bellingham) 2021; 8:015002. [PMID: 33604409 DOI: 10.1117/1.jmi.8.1.015002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 01/19/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: Percutaneous fracture fixation is a challenging procedure that requires accurate interpretation of fluoroscopic images to insert guidewires through narrow bone corridors. We present a guidance system with a video camera mounted onboard the surgical drill to achieve real-time augmentation of the drill trajectory in fluoroscopy and/or CT. Approach: The camera was mounted on the drill and calibrated with respect to the drill axis. Markers identifiable in both video and fluoroscopy are placed about the surgical field and co-registered by feature correspondences. If available, a preoperative CT can also be co-registered by 3D-2D image registration. Real-time guidance is achieved by virtual overlay of the registered drill axis on fluoroscopy or in CT. Performance was evaluated in terms of target registration error (TRE), conformance within clinically relevant pelvic bone corridors, and runtime. Results: Registration of the drill axis to fluoroscopy demonstrated a median TRE of 0.9 mm and 2.0 deg when solved with two views (e.g., anteroposterior and lateral) and five markers visible in both video and fluoroscopy, more than sufficient to provide Kirschner wire (K-wire) conformance within common pelvic bone corridors. Registration accuracy was reduced when solved with a single fluoroscopic view (TRE = 3.4 mm and 2.7 deg) but was still sufficient for K-wire conformance within pelvic bone corridors. Registration was robust with as few as four markers visible within the field of view. Runtime of the initial implementation allowed fluoroscopy overlay and/or 3D CT navigation with freehand manipulation of the drill at up to 10 frames/s. Conclusions: A drill-mounted video guidance system was developed to assist with K-wire placement. The overall workflow is compatible with fluoroscopically guided orthopaedic trauma surgery and does not require markers to be placed in preoperative CT. The initial prototype demonstrates accuracy and runtime that could improve the accuracy of K-wire placement, motivating future work toward translation to clinical studies.
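The virtual overlay step, projecting the registered drill axis into a fluoroscopic view, reduces to a pinhole-camera projection once the registration is solved. A minimal sketch under an assumed calibrated pinhole model; `K`, `R`, and `t` are hypothetical calibration/registration outputs, not values from the paper:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D world points to pixel coordinates with a pinhole model: x ~ K (R X + t)."""
    cam = points_3d @ R.T + t          # world -> camera frame
    uvw = cam @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

def drill_axis_overlay(tip, direction, length, K, R, t):
    """2D endpoints of the drill-axis segment (tip to tip + length*direction) for overlay."""
    seg = np.stack([tip, tip + length * np.asarray(direction)])
    return project(seg, K, R, t)
```

Drawing the returned 2D segment on the live fluoroscopic frame gives the trajectory augmentation; the same projection with the CT-to-world transform yields the 3D CT view.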
Affiliation(s)
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
| | - Niral Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Alejandro Sisniega
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| | - Greg M Osgood
- Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
| | - Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States.,Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
| |
|
30
|
Denis de Senneville B, Manjón JV, Coupé P. RegQCNET: Deep quality control for image-to-template brain MRI affine registration. Phys Med Biol 2020; 65:225022. [PMID: 32906089 DOI: 10.1088/1361-6560/abb6be] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Affine registration of one or several brain image(s) onto a common reference space is a necessary prerequisite for many image processing tasks, such as brain segmentation or functional analysis. Manual assessment of registration quality is a tedious and time-consuming task, especially in studies comprising large amounts of data, so automated and reliable quality control (QC) becomes mandatory. Moreover, the computation time of the QC must also be compatible with the processing of massive datasets. Therefore, automated deep neural network approaches have emerged as a method of choice to automatically assess registration quality. In the current study, a compact 3D convolutional neural network, referred to as RegQCNET, is introduced to quantitatively predict the amplitude of an affine registration mismatch between a registered image and a reference template. This quantitative estimate of registration error is expressed in metric units, so a meaningful task-specific threshold can be defined, manually or automatically, to distinguish between usable and non-usable images. The robustness of the proposed RegQCNET is first analyzed on lifespan brain images undergoing various simulated spatial transformations and intensity variations between training and testing. Secondly, the potential of RegQCNET to classify images as usable or non-usable is evaluated using both manual and automatic thresholds. In our experiments, automatic thresholds are estimated using several computer-assisted classification models (logistic regression, support vector machine, naive Bayes and random forest) through cross-validation, based on an expert's visual QC of a lifespan cohort of 3953 brains. Finally, RegQCNET accuracy is compared to usual image features such as the image correlation coefficient and mutual information. The results show that the proposed deep learning QC is robust, fast and accurate at estimating affine registration error in the processing pipeline.
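One of the baseline similarity features compared against RegQCNET, mutual information, can be computed from a joint intensity histogram. A minimal NumPy sketch of the generic definition, not the authors' exact implementation; the bin count is an arbitrary choice:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) from the joint intensity histogram of two same-shape images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                        # joint probability
    px, py = p.sum(axis=1), p.sum(axis=0)  # marginals
    nz = p > 0                             # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

A well-registered pair shares intensity structure and scores high; a badly misaligned pair approaches statistical independence and scores near zero, which is what makes MI a usable (if crude) QC feature.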
|
31
|
Kuo HC, Lovelock MM, Li G, Ballangrud Å, Wolthuis B, Della Biancia C, Hunt MA, Berry SL. A phantom study to evaluate three different registration platform of 3D/3D, 2D/3D, and 3D surface match with 6D alignment for precise image-guided radiotherapy. J Appl Clin Med Phys 2020; 21:188-196. [PMID: 33184966 PMCID: PMC7769400 DOI: 10.1002/acm2.13086] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2020] [Revised: 08/09/2020] [Accepted: 10/09/2020] [Indexed: 12/03/2022] Open
Abstract
Purpose To evaluate two three‐dimensional (3D)/3D registration platforms, one two‐dimensional (2D)/3D registration method, and one 3D surface registration method (3DS). These three technologies are available to perform six‐dimensional (6D) registrations for image‐guided radiotherapy treatment. Methods Fiducial markers were asymmetrically placed on the surfaces of an anthropomorphic head phantom (n = 13) and a body phantom (n = 8), respectively. The point match (PM) solution to the 6D transformation between the two image sets [planning computed tomography (CT) and cone beam CT (CBCT)] was determined through least‐squares fitting of the fiducial positions using singular value decomposition (SVD). The transformation result from SVD was verified and used as the gold standard to evaluate the 6D accuracy of 3D/3D registration in Varian's platform (3D3DV), 3D/3D and 2D/3D registration in the BrainLab ExacTrac system (3D3DE and 2D3D), and 3DS in the AlignRT system. Image registration accuracy of each method was quantitatively evaluated by the root mean square of the target registration error (rmsTRE) on fiducial markers and by the isocenter registration error (IRE). The Wilcoxon signed‐rank test was utilized to compare each registration method against PM; P < 0.05 was considered significant. Results rmsTRE was in the range of 0.4 mm/0.7 mm (cranial/body), 0.5 mm/1 mm, 1.0 mm/1.5 mm, and 1.0 mm/1.2 mm for PM, 3D3D, 2D3D, and 3DS, respectively. Compared to PM, the mean IRE was 0.3 mm/1 mm for 3D3D, 0.5 mm/1.4 mm for 2D3D, and 1.6 mm/1.35 mm for 3DS for the cranial and body phantoms, respectively. Both the 3D3D and 2D3D methods differed significantly from the PM method in the roll direction for the cranial phantom. The 3DS method differed significantly from the PM method in all three translation dimensions for both the cranial (P = 0.003–P = 0.03) and body (P < 0.001–P = 0.008) phantoms. Conclusion 3D3D using CBCT had the best image registration accuracy among all the tested methods. The 2D3D method was slightly inferior to the 3D3D method but still acceptable as a treatment position verification device. 3DS is comparable to the 2D3D technique and could substitute for X‐ray or CBCT in pretreatment verification for treatment of anatomical sites that are rigid.
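The point match (PM) gold standard described in the Methods, a least-squares rigid fit of paired fiducials via SVD, is the textbook Arun/Kabsch solution. A minimal NumPy sketch of that general method (not the authors' code):

```python
import numpy as np

def point_match(a_fids, b_fids):
    """Least-squares 6D (rigid) transform mapping fiducial set a onto paired set b via SVD."""
    mu_a, mu_b = a_fids.mean(axis=0), b_fids.mean(axis=0)
    # cross-covariance of the centred fiducial sets
    U, _, Vt = np.linalg.svd((a_fids - mu_a).T @ (b_fids - mu_b))
    if np.linalg.det(Vt.T @ U.T) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
    R = Vt.T @ U.T
    return R, mu_b - R @ mu_a
```

With exact correspondences this recovers the 6D transform in closed form, which is why a verified SVD fit can serve as the reference against which the commercial platforms are scored.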
Affiliation(s)
- Hsiang-Chi Kuo
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA.,Radiation Oncology Department, Norwalk Hospital, Norwalk, CT, USA
| | - Michael M Lovelock
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Guang Li
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Åse Ballangrud
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Brian Wolthuis
- Radiation Oncology Department, Norwalk Hospital, Norwalk, CT, USA
| | - Cesar Della Biancia
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Margie A Hunt
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Sean L Berry
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| |
|
32
|
A novel extraoral registration method for a dynamic navigation system guiding zygomatic implant placement in patients with maxillectomy defects. Int J Oral Maxillofac Surg 2020; 50:116-120. [PMID: 32499080 DOI: 10.1016/j.ijom.2020.03.018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Revised: 01/17/2020] [Accepted: 03/05/2020] [Indexed: 11/20/2022]
Abstract
Zygomatic implants (ZIs) are used for the oral rehabilitation of patients with maxillectomy defects as an alternative to extensive bone grafting surgeries. New technologies such as computer-assisted navigation systems can improve the accuracy and safety of ZI placement. The intraoral anchorage of fiducial markers necessary for navigation registration is not possible in the case of a severe maxillary defect and lack of residual bone. This technical note presents a novel extraoral registration method for a dynamic navigation system guiding ZI placement in patients with maxillectomy defects. Titanium microscrews were inserted in the mastoid process, supraorbital ridge, and posterior zygomatic arch as registration markers. The mean fiducial registration error (FRE) was 0.53 ± 0.20 mm, and the deviations between the planned and placed ZIs were 1.56 ± 0.54 mm (entry point), 1.87 ± 0.63 mm (exit point), and 2.52 ± 0.84° (angulation). The study results indicate that the placement of fiducial markers at extraoral sites can be used as a registration technique to overcome anatomical limitations in patients after maxillectomy, with clinically acceptable registration accuracy.
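The entry-point, exit-point, and angulation deviations reported above can be computed from the planned and placed implant axes. A minimal sketch of these standard geometric definitions; the paper's exact measurement protocol is not reproduced:

```python
import numpy as np

def implant_deviation(planned_entry, planned_exit, placed_entry, placed_exit):
    """Entry/exit deviations (same units as input) and angular deviation (degrees)."""
    entry_dev = float(np.linalg.norm(placed_entry - planned_entry))
    exit_dev = float(np.linalg.norm(placed_exit - planned_exit))
    u = planned_exit - planned_entry       # planned implant axis
    v = placed_exit - placed_entry         # achieved implant axis
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    ang = float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return entry_dev, exit_dev, ang
```

Reporting entry, exit, and angle separately matters clinically: a small entry deviation combined with a small angular error can still produce a large exit deviation over a long zygomatic implant.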
|
33
|
Optimization Model for the Distribution of Fiducial Markers in Liver Intervention. J Med Syst 2020; 44:83. [PMID: 32152742 DOI: 10.1007/s10916-020-01548-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2019] [Accepted: 02/18/2020] [Indexed: 10/24/2022]
Abstract
The distribution of fiducial markers is one of the main factors affecting the accuracy of an optical navigation system. However, many studies have focused on improving the fiducial registration accuracy or the target registration accuracy, and few offer an optimization model for the distribution of fiducial markers. In this paper, we propose an optimization model for the distribution of fiducial markers to improve optical navigation accuracy. The strategy of the optimization model is to reduce the distribution from three dimensions to two, obtaining the 2D optimal distribution with an optimization algorithm in terms of the marker number and the expectation equation of the target registration error (TRE), and then to extend the 2D optimal distribution back to three dimensions to calculate the optimal distribution according to the distance parameter and the expectation equation of the TRE. The results of the experiments show that the averaged TRE for the human phantom is approximately 1.00 mm when applying the proposed optimization model, and the averaged TRE for the abdominal phantom is 0.59 mm. The experimental results of the liver simulator model and the ex-vivo porcine liver model show that the proposed optimization model can be effectively applied in liver intervention.
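The "expectation equation of TRE" referenced above is commonly taken to be Fitzpatrick's approximation, E[TRE²(r)] ≈ (FLE²/N)(1 + (1/3)Σₖ dₖ²/fₖ²), where dₖ is the target's distance from principal axis k of the fiducial configuration and fₖ is the RMS fiducial distance from that axis. A sketch of that standard formula, assuming (not knowing) that this is the expectation equation the authors optimize:

```python
import numpy as np

def expected_tre(fiducials, target, fle_rms):
    """Fitzpatrick's expected TRE at `target` for a given fiducial configuration."""
    c = fiducials.mean(axis=0)
    X = fiducials - c
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal axes of the configuration
    n = len(fiducials)
    ratio = 0.0
    for k in range(3):
        axis = Vt[k]
        p = target - c
        # squared distances from principal axis k (component perpendicular to the axis)
        d2 = np.linalg.norm(p - np.dot(p, axis) * axis) ** 2
        f2 = (np.linalg.norm(X - np.outer(X @ axis, axis), axis=1) ** 2).mean()
        ratio += d2 / f2
    return float(np.sqrt(fle_rms ** 2 / n * (1 + ratio / 3)))
```

At the fiducial centroid the expression reduces to FLE/√N, which is why spreading markers widely around the surgical target, rather than clustering them, is the usual design goal.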
|
34
|
Clark T, Burca G, Boardman R, Blumensath T. Correlative X‐ray and neutron tomography of root systems using cadmium fiducial markers. J Microsc 2020; 277:170-178. [DOI: 10.1111/jmi.12831] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2019] [Revised: 09/10/2019] [Accepted: 09/17/2019] [Indexed: 11/29/2022]
Affiliation(s)
- T. Clark
- Faculty of Engineering and Physical Sciences, University of Southampton, UK
- STFC, Rutherford Appleton Laboratory, ISIS Facility, Harwell, UK
| | - G. Burca
- STFC, Rutherford Appleton Laboratory, ISIS Facility, Harwell, UK
| | - R. Boardman
- μ‐VIS X‐ray Imaging Centre, University of Southampton, UK
| | - T. Blumensath
- ISVR Signal Processing and Control Group, University of Southampton, UK
| |
|
36
|
Zachiu C, de Senneville BD, Raaymakers BW, Ries M. Biomechanical quality assurance criteria for deformable image registration algorithms used in radiotherapy guidance. Phys Med Biol 2020; 65:015006. [DOI: 10.1088/1361-6560/ab501d] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
37
|
Augmenting GPS with Geolocated Fiducials to Improve Accuracy for Mobile Robot Applications. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app10010146] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In recent decades, Global Positioning Systems (GPS) have become a ubiquitous tool to support navigation. Traditional GPS has an error on the order of 10–15 m, which is adequate for many applications (e.g., vehicle navigation) but lacks the accuracy required by many robotics applications. In this paper we describe a technique, FAGPS (Fiducial Augmented Global Positioning System), that periodically uses fiducial markers to lower the GPS drift, and hence, for a short period afterwards, obtain a more accurate GPS determination. We describe results from simulations and from field testing in open-sky environments, where horizontal GPS accuracy was improved from a twice-the-distance-root-mean-square (2DRMS) error of 5.5 m to 2.99 m for a period of up to 30 min.
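The core FAGPS idea, using a surveyed fiducial to estimate and subtract the current GPS bias for a short window, can be sketched in a few lines, along with the 2DRMS accuracy metric quoted in the results. All function names here are hypothetical illustrations, not the authors' API:

```python
import numpy as np

def gps_bias(fix_at_fiducial, fiducial_truth):
    """Horizontal GPS bias estimated while the robot observes a geolocated fiducial."""
    return np.asarray(fix_at_fiducial) - np.asarray(fiducial_truth)

def correct(fix, bias):
    """Apply the last estimated bias to a raw fix (valid only until drift re-accumulates)."""
    return np.asarray(fix) - bias

def two_drms(errors_xy):
    """2DRMS horizontal accuracy: twice the RMS of the radial position errors."""
    r = np.linalg.norm(errors_xy, axis=1)
    return float(2.0 * np.sqrt((r ** 2).mean()))
```

Because slowly varying atmospheric and multipath errors dominate GPS drift, a bias measured at one fiducial remains approximately valid nearby for a limited time, which matches the paper's up-to-30-minute improvement window.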
|
38
|
Yang F, Ding M, Zhang X. Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor. SENSORS 2019; 19:s19214675. [PMID: 31661828 PMCID: PMC6864520 DOI: 10.3390/s19214675] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/15/2019] [Revised: 10/05/2019] [Accepted: 10/23/2019] [Indexed: 11/22/2022]
Abstract
Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging due to the difficulty of constructing a similarity measure and solving for the non-rigid transformation parameters. A novel structural-representation-based registration method is proposed to address these problems. Firstly, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed for effective structural representation of 3D medical images, transforming multi-modal image registration into a mono-modal one. The sum of absolute differences between structural representations is computed as the similarity measure. Subsequently, the foveated MIND based spatial constraint is introduced into the Markov random field (MRF) optimization to reduce the number of transformation parameters and restrict the calculation of the energy function to the image region involving non-rigid deformation. Finally, accurate and efficient 3D medical image registration is realized by minimizing the similarity-measure-based MRF energy function. Extensive experiments on 3D positron emission tomography (PET), computed tomography (CT), and T1-, T2-, and proton density (PD) weighted magnetic resonance (MR) images with synthetic deformation demonstrate that the proposed method has higher computational efficiency and registration accuracy in terms of target registration error (TRE) than registration methods based on the hybrid L-BFGS-B and cat swarm optimization (HLCSO), the sum of squared differences on entropy images, the MIND, and the self-similarity context (SSC) descriptor, except that it yields slightly larger TRE than the HLCSO for CT-PET image registration. Experiments on real MR and ultrasound images with unknown deformation have also been performed to demonstrate the practicality and superiority of the proposed method.
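A heavily simplified single-scale MIND can be written directly in NumPy: squared intensity differences to each 6-neighbourhood shift, normalised by a local variance estimate and mapped through an exponential. This toy version (pointwise differences instead of patch distances, wrap-around borders via `np.roll`, and no foveated weighting) only illustrates the descriptor's key property, invariance to linear intensity rescaling across modalities; it is not the authors' foveated MIND:

```python
import numpy as np

def mind_descriptor(img, eps=1e-12):
    """Simplified single-scale MIND on a 3D volume over the 6-neighbourhood."""
    shifts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    # D_p approximated by the squared voxel-intensity difference to each neighbour
    # (a faithful MIND would use smoothed patch distances); borders wrap via np.roll
    ds = np.stack([(img - np.roll(img, s, axis=(0, 1, 2))) ** 2 for s in shifts])
    v = ds.mean(axis=0) + eps            # local variance estimate V(I, x)
    d = np.exp(-ds / v)                  # Gaussian-like response per neighbour
    return d / d.max(axis=0, keepdims=True)   # normalise the 6-vector at each voxel
```

Since both the numerator and the variance estimate scale with the square of any linear intensity gain, the descriptor is unchanged under such rescaling, which is what lets a sum of absolute differences between MIND representations act as a mono-modal similarity measure.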
Affiliation(s)
- Feng Yang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China.
- School of Computer and Electronics and Information, Guangxi University, Nanning 530004, China.
| | - Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China.
| | - Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China.
| |
|
39
|
Lollis SS, Fan X, Evans L, Olson JD, Paulsen KD, Roberts DW, Mirza SK, Ji S. Use of Stereovision for Intraoperative Coregistration of a Spinal Surgical Field: A Human Feasibility Study. Oper Neurosurg (Hagerstown) 2019; 14:29-35. [PMID: 28658939 DOI: 10.1093/ons/opx132] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Accepted: 06/14/2017] [Indexed: 11/12/2022] Open
Abstract
BACKGROUND The use of image guidance during spinal surgery has been limited by several anatomic factors such as intervertebral segment motion and ineffective spine immobilization. In its current form, the surgical field is coregistered with a preoperative computed tomography (CT), often obtained in a different spinal conformation, or with intraoperative cross-sectional imaging. Stereovision offers an alternative method of registration. OBJECTIVE To demonstrate the feasibility of stereovision-mediated coregistration of a human spinal surgical field in a proof-of-principle study, and to provide preliminary assessments of the technique's accuracy. METHODS A total of 9 subjects undergoing image-guided pedicle screw placement also underwent stereovision-mediated coregistration with preoperative CT imaging. Stereoscopic images were acquired using a tracked, calibrated stereoscopic camera system mounted on an operating microscope. Images were processed, reconstructed, and segmented in a semi-automated manner. A multistart registration of the reconstructed spinal surface with preoperative CT was performed. Registration accuracy, measured as surface-to-surface distance error, was compared between stereovision registration and a standard registration. RESULTS The mean surface reconstruction error of the stereovision-acquired surface was 2.20 ± 0.89 mm. Intraoperative coregistration with stereovision was performed with a mean error of 1.48 ± 0.35 mm, compared to 2.03 ± 0.28 mm using a standard point-based registration method. The average computational time for registration with stereovision was 95 ± 46 s (range 33-184 s) vs 10 to 20 min for standard point-based registration. CONCLUSION Semi-automated registration of a spinal surgical field using stereovision is possible with accuracy that is at least comparable to current landmark-based techniques.
Affiliation(s)
- S Scott Lollis
- Division of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| | - Xiaoyao Fan
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
| | - Linton Evans
- Division of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| | - Jonathan D Olson
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
| | - Keith D Paulsen
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
| | - David W Roberts
- Division of Neurosurgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire.,Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
| | - Sohail K Mirza
- Department of Orthopedic Surgery, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| | - Songbai Ji
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire
| |
|
40
|
Sorriento A, Porfido MB, Mazzoleni S, Calvosa G, Tenucci M, Ciuti G, Dario P. Optical and Electromagnetic Tracking Systems for Biomedical Applications: A Critical Review on Potentialities and Limitations. IEEE Rev Biomed Eng 2019; 13:212-232. [PMID: 31484133 DOI: 10.1109/rbme.2019.2939091] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Optical and electromagnetic tracking systems are the two main technologies integrated into commercially available surgical navigators for computer-assisted image-guided surgery so far. Optical Tracking Systems (OTSs) work within the optical spectrum to track the position and orientation (i.e., pose) of target surgical instruments. OTSs are characterized by high accuracy and robustness to environmental conditions. The main limitation of OTSs is the need for a direct line-of-sight between the optical markers and the camera sensor, rigidly fixed in the operating theatre. Electromagnetic Tracking Systems (EMTSs) use an electromagnetic field generator to detect the pose of electromagnetic sensors. EMTSs do not require such a direct line-of-sight; however, the presence of metal or ferromagnetic sources in the operating workspace can significantly affect measurement accuracy. The aim of the proposed review is to provide a complete and detailed overview of optical and electromagnetic tracking systems, including working principles, sources of error and validation protocols. Moreover, commercial and research-oriented solutions, as well as clinical applications, are described for both technologies. Finally, a critical comparative analysis of the state of the art is provided, highlighting the potentialities and limitations of each tracking system for medical use.
41
Talks BJ, Jolly K, Burton H, Koria H, Ahmed SK. Cone-Beam Computed Tomography Allows Accurate Registration to Surgical Navigation Systems: A Multidevice Phantom Study. Am J Rhinol Allergy 2019; 33:691-699. [DOI: 10.1177/1945892419861849] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Background Cone-beam computed tomography (CBCT) is a fast imaging technique with a substantially lower radiation dosage than conventional multidetector computed tomography (MDCT) for sinus imaging. Surgical navigation systems are increasingly being used in endoscopic sinus and skull base surgery, reducing perioperative morbidity. Objective To investigate CBCT as a low-radiation imaging modality for use in surgical navigation. Methods The required field of view was measured from the tip of the nose to the posterior clinoid process anteroposteriorly and from the nasolabial angle to the roof of the frontal sinus superoinferiorly on 50 consecutive MDCT scans (male = 25; age = 17–85 years). A phantom head was manufactured by 3-dimensional printing and imaged using 3 CBCT scanners (Carestream, J Morita, and NewTom), a conventional MDCT scanner (Siemens), and a highly accurate laser scanner (FARO). The phantom head was registered to 3 surgical navigation systems (Brainlab, Stryker, and Medtronic) using scans from each system. Results The required field of view (mean ± standard deviation) was 107 ± 7.6 mm anteroposteriorly and 90.3 ± 9.6 mm superoinferiorly. Image deviations from the laser scan (median ± interquartile range) were comparable for the MDCT (0.19 ± 0.09 mm) and CBCT (CBCT 1: 0.15 ± 0.11 mm; CBCT 2: 0.33 ± 0.18 mm; and CBCT 3: 0.13 ± 0.13 mm) scanners. Fiducial registration error and target registration error were also comparable for MDCT- and CBCT-based navigation. Conclusion CBCT is a low-radiation preoperative imaging modality suitable for use in surgical navigation.
Affiliation(s)
- Benjamin J. Talks
- Medical School, College of Medical and Dental Sciences, University of Birmingham, Edgbaston, Birmingham, UK
- Karan Jolly
- Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham, Edgbaston, Birmingham, UK
- Hitesh Koria
- Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham, Edgbaston, Birmingham, UK
- Shahzada K. Ahmed
- Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham, Edgbaston, Birmingham, UK
42
A novel Tungsten-based fiducial marker for multi-modal brain imaging. J Neurosci Methods 2019; 323:22-31. [DOI: 10.1016/j.jneumeth.2019.04.014] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Revised: 04/08/2019] [Accepted: 04/30/2019] [Indexed: 01/21/2023]
43
Joe H, Pahk KJ, Park S, Kim H. Development of a subject-specific guide system for Low-Intensity Focused Ultrasound (LIFU) brain stimulation. Comput Methods Programs Biomed 2019; 176:105-110. [PMID: 31200898 DOI: 10.1016/j.cmpb.2019.05.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 05/05/2019] [Accepted: 05/05/2019] [Indexed: 06/09/2023]
Abstract
Low-Intensity Focused Ultrasound (LIFU) has recently been considered a promising neuromodulation technique because it can noninvasively stimulate the brain with high spatial resolution. As spatial resolution improves, there is a growing demand for more accurate and convenient guide systems. Therefore, in the present study, we developed and prototyped a 3D-printed, wearable, subject-specific helmet for LIFU stimulation with verified targeting accuracy. The spatial relationship between the target position and the full-width at half-maximum (FWHM) of the transducer's acoustic pressure, i.e. the focal volume, was compared against the conventional image-guided navigation system. According to the distribution of positional errors, the target position lay well within the focal volume.
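The focal-volume criterion above rests on the full-width at half-maximum: for a Gaussian pressure profile, FWHM = 2·sqrt(2·ln 2)·σ ≈ 2.355σ. A quick numerical check of that relation (the σ value is made up for illustration, not taken from the paper):

```python
import numpy as np

sigma = 1.5                                   # hypothetical focal std dev, mm
fwhm_analytic = 2 * np.sqrt(2 * np.log(2)) * sigma

# Measure the width over which a sampled Gaussian profile exceeds half its max.
x = np.linspace(-10, 10, 100001)              # fine grid over +/-10 mm
p = np.exp(-x**2 / (2 * sigma**2))            # normalized pressure profile
fwhm_measured = np.ptp(x[p >= 0.5])           # span of the half-max region

print(fwhm_analytic, fwhm_measured)           # both approx. 3.53 mm
```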
Affiliation(s)
- Haeyoung Joe
- Center for Bionics, Biomedical Research Institute, Korea Institute of Science and Technology (KIST), 5, Hwarangro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea; Human-Machine Systems Laboratory, Dept. of Mechanical Engineering, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Ki Joo Pahk
- Center for Bionics, Biomedical Research Institute, Korea Institute of Science and Technology (KIST), 5, Hwarangro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Shinsuk Park
- Human-Machine Systems Laboratory, Dept. of Mechanical Engineering, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Hyungmin Kim
- Center for Bionics, Biomedical Research Institute, Korea Institute of Science and Technology (KIST), 5, Hwarangro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
44
Regional-surface-based registration for image-guided neurosurgery: effects of scan modes on registration accuracy. Int J Comput Assist Radiol Surg 2019; 14:1303-1315. [PMID: 31055765 DOI: 10.1007/s11548-019-01990-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 04/24/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE The conventional surface-based method registers only the facial zone of the preoperative point cloud, resulting in low accuracy away from the facial area. Acquiring a point cloud of the entire head for registration can improve registration accuracy in all parts of the head; however, collecting such a point cloud takes a long time. It may be more practical to selectively scan part of the head to ensure high registration accuracy in the surgical area of interest. In this study, we investigate the effects of different scan regions on registration errors in different target areas when using a surface-based registration method. METHODS We first evaluated the correlation between laser scan resolution and registration accuracy to determine an appropriate scan resolution. Then, with the appropriate resolution, we explored the effects of scan modes on registration error in computer simulation experiments, phantom experiments and two clinical cases. The scan modes were designed based on different combinations of five zones of the head surface, i.e., the sphenoid-frontal zone, parietal zone, left temporal zone, right temporal zone and occipital zone. In the phantom experiments, a handheld scanner was used to acquire a point cloud of the head. A head model containing several tumors was designed, enabling us to calculate target registration errors deep in the brain to evaluate the effect of regional-surface-based registration. RESULTS The optimal scan modes for tumors located in the sphenoid-frontal, parietal and temporal areas are mode 4 (i.e., simultaneously scanning the sphenoid-frontal zone and the temporal zone), mode 4 and mode 6 (i.e., simultaneously scanning the sphenoid-frontal zone, the temporal zone and the parietal zone), respectively. For the tumor located in the occipital area, no mode was able to achieve reliable accuracy.
CONCLUSION The results show that selecting an appropriate scan resolution and scan mode can achieve reliable accuracy for use in sphenoid-frontal, parietal and temporal area surgeries while effectively reducing the operation time.
45
Guo N, Yang B, Wang Y, Liu H, Hu L, Wang T. New Calibrator with Points Distributed Conical Helically for Online Calibration of C-Arm. Sensors (Basel) 2019; 19:E1989. [PMID: 31035379 PMCID: PMC6539996 DOI: 10.3390/s19091989] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Revised: 04/21/2019] [Accepted: 04/23/2019] [Indexed: 11/17/2022]
Abstract
To improve the accuracy of C-arm calibration and overcome space limitations in surgery, we proposed a new calibrator for online calibration of the C-arm. After image rectification by a polynomial-fitting-based global correction method, the C-arm was modeled as an ideal pinhole camera. The relationships between two kinds of spatial calibration error and the distribution of fiducial points were studied: the behavior of the FRE (Fiducial Registration Error) and TRE (Target Registration Error) was not consistent, but both were best with the 12 marked points; the TRE decreased as the uniformity of the calibration point distribution increased, and as the distance between the target point and the center of the calibration points decreased. A calibrator with 12 fiducial points distributed along a conical helix, which could be placed on the knee, was an attractive option. A total of 10 experiments on C-arm calibration accuracy were conducted, and the mean mapping error was 0.41 mm. We designed an ACL reconstruction navigation system and carried out specimen experiments on 4 pairs of dry femurs and tibias. The mean accuracy of the navigation system was 0.85 mm, which is important for tunnel positioning in ACL reconstruction.
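Treating the C-arm as an ideal pinhole camera, as above, means estimating a 3×4 projection matrix from 3D fiducial points and their 2D image positions. A minimal sketch of that step via the direct linear transform on synthetic data (the paper's rectification step and calibrator geometry are not reproduced here):

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 pinhole projection matrix from >= 6 3D-2D pairs.
    X: (N, 3) world points; x: (N, 2) image points."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)          # null vector = flattened P (up to scale)

def project(P, X):
    """Project 3D points with projection matrix P and dehomogenize."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]

# Synthetic check: 12 fiducial points seen by a known camera.
rng = np.random.default_rng(1)
P_true = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
X = rng.random((12, 3))
x = project(P_true, X)
P_est = dlt_projection(X, x)
print(np.allclose(project(P_est, X), x))   # reprojection matches
```

With noisy image measurements the residual of `project(P_est, X) - x` plays the role of the mapping error reported in the abstract.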
Affiliation(s)
- Na Guo
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China.
- Biao Yang
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China.
- Yuhan Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China.
- Hongsheng Liu
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China.
- Lei Hu
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China.
- Tianmiao Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China.
46
Bao N, Li A, Zhao W, Cui Z, Tian X, Yue Y, Li H, Qian W. Automated fiducial marker detection and fiducial point localization in CT images for lung biopsy image-guided surgery systems. J Xray Sci Technol 2019; 27:417-429. [PMID: 30958321 DOI: 10.3233/xst-180464] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In lung biopsy image-guided surgery systems, fiducial markers are used for point-based registration of the patient space to the CT image space. Fiducial marker detection and fiducial point localization in CT images strongly influence the accuracy of registration and guidance. This study proposes a fiducial marker detection approach based on the features of marker image slice sequences and a fiducial point localization approach based on marker projection images, without depending on a priori knowledge of the default marker parameters provided by the manufacturers. The accuracy of our method was validated on a CT image dataset of 24 patients. The experimental results showed that all 144 markers of the 24 patients were correctly detected, and the fiducial points were localized with an average error of 0.35 mm. In addition, the localization accuracy of the proposed method was improved by an average of 12.5% compared with the previous method using the default marker parameters provided by the manufacturers. The study thus demonstrated that the proposed detection and localization methods are accurate and robust, which is encouraging for future clinical application in image-guided lung biopsy and surgery systems.
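A common baseline for sub-voxel fiducial-point localization (much simpler than the slice-sequence and projection-image method proposed above, and shown only as an assumed illustration) is the intensity-weighted centroid of thresholded voxels:

```python
import numpy as np

def marker_centroid(volume, threshold):
    """Sub-voxel marker localization: intensity-weighted centroid of all
    voxels above an intensity threshold. Returns fractional (z, y, x)."""
    w = np.where(volume > threshold, volume.astype(float), 0.0)
    grids = np.indices(volume.shape)
    return np.array([(g * w).sum() for g in grids]) / w.sum()

# Synthetic marker: a bright Gaussian blob centred off-grid at (10.5, 12.0, 8.25).
z, y, x = np.indices((24, 24, 24))
center = np.array([10.5, 12.0, 8.25])
vol = 1000.0 * np.exp(-((z - center[0])**2 + (y - center[1])**2
                        + (x - center[2])**2) / 4.0)
est = marker_centroid(vol, threshold=50.0)
print(np.abs(est - center).max())   # localization error well below one voxel
```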
Affiliation(s)
- Nan Bao
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shen Yang, Liao Ning, China
- Ang Li
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shen Yang, Liao Ning, China
- Wei Zhao
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shen Yang, Liao Ning, China
- Zhiming Cui
- Department of Computer Science, The University of Hong Kong, Hong Kong, China
- Xinhua Tian
- Department of Radiology, The Second Hospital of Jilin University, Chang Chun, Ji Lin, China
- Yong Yue
- Department of Radiology, ShengJing Hospital of China Medical University, Shen Yang, Liao Ning, China
- Hong Li
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shen Yang, Liao Ning, China
- Wei Qian
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shen Yang, Liao Ning, China
- Department of Electrical and Computer Engineering, University of Texas at El Paso, TX, USA
47
Noninvasive Registration Strategies and Advanced Image Guidance Technology for Submillimeter Surgical Navigation Accuracy in the Lateral Skull Base. Otol Neurotol 2018; 39:1326-1335. [DOI: 10.1097/mao.0000000000001993] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
48
Lafitte L, Zachiu C, Kerkmeijer LGW, Ries M, Denis de Senneville B. Accelerating multi-modal image registration using a supervoxel-based variational framework. Phys Med Biol 2018; 63:235009. [DOI: 10.1088/1361-6560/aaebc2] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
49
Zachiu C, de Senneville BD, Moonen CTW, Raaymakers BW, Ries M. Anatomically plausible models and quality assurance criteria for online mono- and multi-modal medical image registration. Phys Med Biol 2018; 63:155016. [PMID: 29972147 DOI: 10.1088/1361-6560/aad109] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Medical imaging is currently employed in the diagnosis, planning, delivery and response monitoring of cancer treatments. Due to physiological motion and/or treatment response, the shape and location of the pathology and organs-at-risk may change over time. Establishing their location within the acquired images is therefore paramount for an accurate treatment delivery and monitoring. A feasible solution for tracking anatomical changes during an image-guided cancer treatment is provided by image registration algorithms. Such methods are, however, often built upon elements originating from the computer vision/graphics domain. Since the original design of such elements did not take into consideration the material properties of particular biological tissues, the anatomical plausibility of the estimated deformations may not be guaranteed. In the current work we adapt two existing variational registration algorithms, namely Horn-Schunck and EVolution, to online soft tissue tracking. This is achieved by enforcing an incompressibility constraint on the estimated deformations during the registration process. The existing and the modified registration methods were comparatively tested against several quality assurance criteria on abdominal in vivo MR and CT data. These criteria included: the Dice similarity coefficient (DSC), the Jaccard index, the target registration error (TRE) and three additional criteria evaluating the anatomical plausibility of the estimated deformations. Results demonstrated that both the original and the modified registration methods have similar registration capabilities in high-contrast areas, with DSC and Jaccard index values predominantly in the 0.8-0.9 range and an average TRE of 1.6-2.0 mm. In contrast-devoid regions of the liver and kidneys, however, the three additional quality assurance criteria have indicated a considerable improvement of the anatomical plausibility of the deformations estimated by the incompressibility-constrained methods. 
Moreover, the proposed registration models maintain the potential of the original methods for online image-based guidance of cancer treatments.
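The incompressibility constraint used above amounts to requiring the Jacobian determinant of the mapping x → x + u(x) to stay near 1 (locally volume-preserving). A small sketch of checking that on a discrete 2D displacement field (an illustration only, not the authors' solver):

```python
import numpy as np

def jacobian_det_2d(u):
    """Pointwise Jacobian determinant of x -> x + u(x) for a 2D
    displacement field u of shape (2, H, W), in voxel units."""
    duy_dy, duy_dx = np.gradient(u[0])   # derivatives of the y-component
    dux_dy, dux_dx = np.gradient(u[1])   # derivatives of the x-component
    # det(I + grad u) for the 2x2 case:
    return (1 + duy_dy) * (1 + dux_dx) - duy_dx * dux_dy

# A pure translation preserves volume exactly: det J = 1 everywhere.
translation = np.stack([np.full((32, 32), 0.7), np.full((32, 32), -1.2)])
print(np.allclose(jacobian_det_2d(translation), 1.0))

# A uniform 10% dilation has det J = 1.1**2 = 1.21 in 2D, so it would
# be penalized by an incompressibility term.
yy, xx = np.indices((32, 32)).astype(float)
dilation = np.stack([0.1 * yy, 0.1 * xx])
print(np.allclose(jacobian_det_2d(dilation), 1.21))
```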
Affiliation(s)
- C Zachiu
- Department of Radiotherapy, UMC Utrecht, Heidelberglaan 100, 3508 GA, Utrecht, Netherlands
50
Niu K, Homminga J, Sluiter VI, Sprengers A, Verdonschot N. Feasibility of A-mode ultrasound based intraoperative registration in computer-aided orthopedic surgery: A simulation and experimental study. PLoS One 2018; 13:e0199136. [PMID: 29897987 PMCID: PMC5999105 DOI: 10.1371/journal.pone.0199136] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Accepted: 06/01/2018] [Indexed: 11/18/2022] Open
Abstract
PURPOSE A fast and accurate intraoperative registration method is important for Computer-Aided Orthopedic Surgery (CAOS). A-mode ultrasound (US) is able to acquire bone surface data in a non-invasive manner. To utilize A-mode US in CAOS, a suitable registration algorithm is necessary that copes with a small number of registration points and the presence of measurement errors. Therefore, we investigated the effects of (1) the number of registration points and (2) the Ultrasound Point Localization Error (UPLE) on the overall registration accuracy. METHODS We proposed a new registration method (ICP-PS) combining the Iterative Closest Point (ICP) algorithm with a Perturbation Search algorithm. This method avoids getting stuck in local minima of the ICP iterations and finds the adjacent global minimum. The registration method was subsequently validated in a numerical simulation and a cadaveric experiment using a 3D-tracked A-mode US system. RESULTS The results showed that ICP-PS outperformed the standard ICP algorithm. The registration accuracy improved with the addition of ultrasound registration points. In the numerical simulation, for 25 sample points with zero UPLE, the average registration error of ICP-PS reached 0.25 mm, versus 1.71 mm for ICP, a decrease of 85.38%. In the cadaver experiment, using 25 registration points, ICP-PS achieved an RMSE of 2.81 mm versus 5.84 mm for ICP, a decrease of 51.88%. CONCLUSIONS The simulation approach provides a well-defined framework for estimating the necessary number of ultrasound registration points and the acceptable level of UPLE for a required level of intraoperative registration accuracy in CAOS. The ICP-PS method is suitable for A-mode US based intraoperative registration.
This study should facilitate the application of an A-mode US probe for registering a point cloud to a known shape model, which also has potential for accurately estimating bone position and orientation in skeletal motion tracking and surgical navigation.
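The idea of combining ICP with a perturbation search can be sketched as follows: run standard ICP, then re-run it from randomly jittered starting poses and keep the lowest-residual result. This is an assumed, simplified reading of ICP-PS (brute-force matching, fixed iteration count), not the authors' implementation:

```python
import numpy as np

def rigid_fit(P, Q):
    """Closed-form least-squares rigid fit of P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    R = Vt.T @ np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, cQ - R @ cP

def icp(P, model, R0=np.eye(3), t0=np.zeros(3), iters=30):
    """Standard ICP: alternate nearest-neighbour matching and rigid fitting."""
    R, t = R0, t0
    for _ in range(iters):
        moved = P @ R.T + t
        d2 = ((moved[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        nn = model[d2.argmin(axis=1)]        # brute-force nearest neighbours
        R, t = rigid_fit(P, nn)
    rmse = np.sqrt(((P @ R.T + t - nn) ** 2).sum(1).mean())
    return R, t, rmse

def random_rotation(rng, scale):
    """Small random rotation built from an axis-angle vector ~ N(0, scale^2)."""
    w = rng.normal(0.0, scale, 3)
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def icp_ps(P, model, restarts=10, scale=0.2, seed=0):
    """ICP with perturbation search: jitter the best pose found so far,
    re-run ICP, and keep the lowest-residual registration."""
    rng = np.random.default_rng(seed)
    best = icp(P, model)
    for _ in range(restarts):
        R0 = random_rotation(rng, scale) @ best[0]
        t0 = best[1] + rng.normal(0.0, scale, 3)
        cand = icp(P, model, R0, t0)
        if cand[2] < best[2]:
            best = cand
    return best

# Demo: 25 "bone-surface" points drawn from a 200-point model, mildly displaced.
rng = np.random.default_rng(2)
model = rng.random((200, 3))
R_true = random_rotation(rng, 0.05)
t_true = np.array([0.02, -0.01, 0.015])
P = (model[rng.choice(200, 25, replace=False)] - t_true) @ R_true
R, t, rmse = icp_ps(P, model)
print(rmse)   # residual after perturbation-search ICP
```

By construction the perturbation search can only match or improve the plain-ICP residual, which mirrors the ICP-PS versus ICP comparison reported in the abstract.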
Affiliation(s)
- Kenan Niu
- Laboratory of Biomechanical Engineering, Faculty of Engineering Technology, MIRA Institute, University of Twente, Enschede, the Netherlands
- Jasper Homminga
- Laboratory of Biomechanical Engineering, Faculty of Engineering Technology, MIRA Institute, University of Twente, Enschede, the Netherlands
- Victor I. Sluiter
- Laboratory of Biomechanical Engineering, Faculty of Engineering Technology, MIRA Institute, University of Twente, Enschede, the Netherlands
- André Sprengers
- Orthopaedic Research Lab, Radboud University Medical Center, Nijmegen, the Netherlands
- Nico Verdonschot
- Laboratory of Biomechanical Engineering, Faculty of Engineering Technology, MIRA Institute, University of Twente, Enschede, the Netherlands
- Orthopaedic Research Lab, Radboud University Medical Center, Nijmegen, the Netherlands