1
Wu X, Sánchez CA, Lloyd JE, Borgard H, Fels S, Paydarfar JA, Halter RJ. Estimating tongue deformation during laryngoscopy using a hybrid FEM-multibody model and intraoperative tracking - a cadaver study. Comput Methods Biomech Biomed Engin 2025; 28:739-749. [PMID: 38193213] [PMCID: PMC11231054] [DOI: 10.1080/10255842.2023.2301672]
Abstract
Throat tumour margin control remains difficult due to the tight, enclosed space of the oral and throat regions and the tissue deformation resulting from placement of retractors and scopes during surgery. Intraoperative imaging can help with better localization but is hindered by non-image-compatible surgical instruments, cost, and unavailability. We propose a novel method of using instrument tracking and FEM-multibody modelling to simulate soft tissue deformation in the intraoperative setting, without requiring intraoperative imaging, to improve surgical guidance accuracy. Our first empirical study, based on four trials on a cadaveric head specimen with full neck anatomy, yields a mean TLE of 10.8 ± 5.5 mm, demonstrating the feasibility of the method.
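For reference, a mean ± SD tracking error like the TLE reported above is simply the paired Euclidean distance between model-predicted and tracked landmark positions. A minimal sketch (the coordinates below are made up for illustration):

```python
import numpy as np

def mean_tle(predicted, measured):
    """Mean and std of Euclidean distances between paired 3-D points.

    predicted, measured: (N, 3) arrays of landmark positions in mm.
    """
    d = np.linalg.norm(np.asarray(predicted) - np.asarray(measured), axis=1)
    return d.mean(), d.std()

# Toy example with made-up coordinates (mm):
pred = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
meas = np.array([[3.0, 4.0, 0.0], [10.0, 0.0, 5.0]])
mu, sigma = mean_tle(pred, meas)  # both distances are 5.0 -> mean 5.0, std 0.0
```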
Affiliation(s)
- Xiaotian Wu
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- C. Antonio Sánchez
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- John E. Lloyd
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Heather Borgard
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Sidney Fels
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Joseph A. Paydarfar
- Section of Otolaryngology, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
- Ryan J. Halter
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
2
Taleb A, Guigou C, Leclerc S, Lalande A, Bozorg Grayeli A. Image-to-Patient Registration in Computer-Assisted Surgery of Head and Neck: State-of-the-Art, Perspectives, and Challenges. J Clin Med 2023; 12:5398. [PMID: 37629441] [PMCID: PMC10455300] [DOI: 10.3390/jcm12165398]
Abstract
Today, image-guided systems play a significant role in improving the outcome of diagnostic and therapeutic interventions. They provide crucial anatomical information during the procedure to decrease the size and extent of the approach, to reduce intraoperative complications, and to increase accuracy, repeatability, and safety. Image-to-patient registration is the first step in image-guided procedures: it establishes a correspondence between the patient's preoperative imaging and the intraoperative data. In the head-and-neck region, the presence of many sensitive structures, such as the central nervous system and the neurosensory organs, requires millimetric precision. This review evaluates the characteristics and performance of the different registration methods used in the operating room for the head-and-neck region, from the perspectives of accuracy, invasiveness, and processing time. Our work leads to the conclusion that invasive marker-based methods are still considered the gold standard of image-to-patient registration. Surface-based methods are recommended for faster procedures and are applied to surface tissues, especially around the eyes. In the near future, computer vision technology is expected to enhance these systems by reducing human errors and cognitive load in the operating room.
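The marker-based methods described as the gold standard typically reduce to paired-point rigid registration, classically solved in closed form with the SVD (the Kabsch/Arun method). A minimal sketch, not tied to any particular navigation system:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst
    (Kabsch/Arun SVD method), as used in marker-based registration."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                        # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def fre(src, dst, R, t):
    """Fiducial registration error: RMS residual after applying (R, t)."""
    res = (np.asarray(src) @ R.T + t) - np.asarray(dst)
    return np.sqrt((res ** 2).sum(axis=1).mean())

# Synthetic check: dst is src rotated 90 degrees about z and shifted.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_register(src, dst)
print(round(fre(src, dst, R, t), 6))  # ~0 for noise-free fiducials
```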
Affiliation(s)
- Ali Taleb
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Caroline Guigou
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
- Sarah Leclerc
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Alain Lalande
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Medical Imaging Department, University Hospital of Dijon, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
3
Zou J, Gao B, Song Y, Qin J. A review of deep learning-based deformable medical image registration. Front Oncol 2022; 12:1047215. [PMID: 36568171] [PMCID: PMC9768226] [DOI: 10.3389/fonc.2022.1047215]
Abstract
The alignment of images through deformable image registration is vital to clinical applications (e.g., atlas creation, image fusion, and tumor targeting in image-guided navigation systems) and remains a challenging problem. Recent progress in deep learning has significantly advanced the performance of medical image registration. In this review, we present a comprehensive survey of deep learning-based deformable medical image registration methods. These methods are classified into five categories: Deep Iterative Methods, Supervised Methods, Unsupervised Methods, Weakly Supervised Methods, and Latest Methods. A detailed review of each category is provided, with discussions of contributions, tasks, and inadequacies. We also provide a statistical analysis of the selected papers in terms of image modality, region of interest (ROI), evaluation metrics, and method category. In addition, we summarize 33 publicly available datasets used for benchmarking registration algorithms. Finally, the remaining challenges, future directions, and potential trends are discussed.
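Whatever category a registration method falls into, its output is usually a dense displacement field that is applied to the moving image by resampling. A minimal 2-D sketch using SciPy (array shapes and the sampling convention below are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, disp):
    """Warp a 2-D image with a dense displacement field.

    moving: (H, W) array; disp: (2, H, W) displacements in pixels, so
    warped(y, x) = moving(y + disp[0, y, x], x + disp[1, y, x]).
    """
    h, w = moving.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + disp[0], xx + disp[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

# A uniform +2-pixel sampling offset along x shifts the image left by 2:
img = np.zeros((8, 8)); img[4, 4] = 1.0
disp = np.zeros((2, 8, 8)); disp[1] = 2.0   # sample 2 px to the right
out = warp_image(img, disp)
print(out[4, 2])  # the bright pixel now sits at column 2 -> 1.0
```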
Affiliation(s)
- Jing Zou
- Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China
4
A personalized image-guided intervention system for peripheral lung cancer on patient-specific respiratory motion model. Int J Comput Assist Radiol Surg 2022; 17:1751-1764. [PMID: 35639202] [DOI: 10.1007/s11548-022-02676-2]
Abstract
PURPOSE Due to respiratory motion, precise tracking of lung nodule movement is a persistent challenge in guiding percutaneous lung biopsy during image-guided intervention. We developed an automated image-guided system incorporating effective and robust tracking algorithms to address this challenge. Accurate lung motion prediction and personalized image-guided intervention are the key technological contributions of this work. METHODS A patient-specific respiratory motion model is developed to predict the pulmonary movements of individual patients. It is based on the relation between an artificial 4D CT and the corresponding positions tracked by position sensors attached to the chest using an electromagnetic (EM) tracking system. The 4D CT image of the thorax during breathing is calculated through deformable registration of two 3D CT scans acquired at inspiratory and expiratory breath-hold. The robustness and accuracy of the image-guided intervention system were assessed on a static thorax phantom under different clinical parametric combinations. RESULTS Real 4D CT images of ten patients were used to evaluate the accuracy of the respiratory motion model. The mean error of the model across breathing phases was 1.59 ± 0.66 mm. Using a static thorax phantom, we achieved an average targeting accuracy of 3.18 ± 1.2 mm across 50 independent tests with different intervention parameters. These results demonstrate the robustness and accuracy of our system for personalized lung cancer intervention. CONCLUSIONS The proposed system integrates a patient-specific respiratory motion compensation model to reduce the effect of respiratory motion during percutaneous lung biopsy and help interventional radiologists target the lesion efficiently. Our preclinical studies indicate that the image-guided system can accurately predict and track lung nodules of individual patients and has potential for use in the diagnosis and treatment of early-stage lung cancer.
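The paper's motion model is more sophisticated, but the core idea of mapping external sensor readings to internal target positions can be sketched as a simple least-squares fit (all data below are synthetic; the real model relates EM sensor traces to 4D-CT-derived nodule trajectories):

```python
import numpy as np

def fit_motion_model(sensor, target):
    """Fit an affine map from chest-sensor readings to target position,
    a simple stand-in for a patient-specific respiratory motion model.

    sensor: (N, S) sensor coordinates over breathing phases;
    target: (N, 3) corresponding nodule positions (e.g., from 4D CT).
    """
    X = np.hstack([sensor, np.ones((sensor.shape[0], 1))])  # append bias term
    W, *_ = np.linalg.lstsq(X, target, rcond=None)
    return W

def predict(W, sensor):
    X = np.hstack([sensor, np.ones((sensor.shape[0], 1))])
    return X @ W

# Toy data: target moves 0.5 mm per mm of sensor excursion, plus an offset.
phases = np.linspace(0, 2 * np.pi, 20)
sensor = np.column_stack([10 * np.sin(phases)])
target = np.column_stack([5 * np.sin(phases), np.zeros(20), np.zeros(20)]) + 1.0
W = fit_motion_model(sensor, target)
pred = predict(W, sensor)   # recovers the exact linear relation
```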
5
Bessen SY, Wu X, Sramek MT, Shi Y, Pastel D, Halter R, Paydarfar JA. Image-guided surgery in otolaryngology: A review of current applications and future directions in head and neck surgery. Head Neck 2021; 43:2534-2553. [PMID: 34032338] [DOI: 10.1002/hed.26743]
Abstract
Image-guided surgery (IGS) has become a widely adopted technology in otolaryngology. Since its introduction nearly three decades ago, IGS technology has developed rapidly and improved real-time intraoperative visualization for a diverse array of clinical indications. As usability, accessibility, and clinical experiences with IGS increase, its potential applications as an adjunct in many surgical procedures continue to expand. Here, we describe the basic components of IGS and review both the current state and future directions of IGS in otolaryngology, with attention to current challenges to its application in surgery of the nonrigid upper aerodigestive tract.
Affiliation(s)
- Sarah Y Bessen
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Xiaotian Wu
- Massachusetts General Hospital, Boston, Massachusetts, USA
- Michael T Sramek
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Yuan Shi
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire, USA
- David Pastel
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Department of Otolaryngology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Ryan Halter
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire, USA
- Joseph A Paydarfar
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire, USA
- Department of Otolaryngology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
6
Rose AS, Kim H, Fuchs H, Frahm JM. Development of augmented-reality applications in otolaryngology-head and neck surgery. Laryngoscope 2019; 129 Suppl 3:S1-S11. [PMID: 31260127] [DOI: 10.1002/lary.28098]
Abstract
OBJECTIVES/HYPOTHESIS Augmented reality (AR) allows for the addition of transparent virtual images and video to one's view of a physical environment. Our objective was to develop a head-worn AR system for accurate intraoperative localization of pathology and normal anatomic landmarks during open head and neck surgery. STUDY DESIGN Face validity and case study. METHODS A protocol was developed for the creation of three-dimensional (3D) virtual models based on computed tomography scans. Using the HoloLens AR platform, a novel system of registration and tracking was developed. Accuracy was determined in relation to actual physical landmarks. A face validity study was then performed in which otolaryngologists were asked to evaluate the technology and perform a simulated surgical task using AR image guidance. A case study highlighting the potential usefulness of the technology is also presented. RESULTS An AR system was developed for intraoperative 3D visualization and localization. The average accuracy error was 2.47 ± 0.46 mm (1.99, 3.30). The face validity study supports the potential of this system to improve safety and efficiency in open head and neck surgical procedures. CONCLUSIONS An AR system for accurate localization of pathology and normal anatomic landmarks of the head and neck is feasible with current technology. A face validity study reveals the potential value of the system in intraoperative image guidance. This application of AR, among others in the field of otolaryngology-head and neck surgery, promises to improve surgical efficiency and patient safety in the operating room. LEVEL OF EVIDENCE 2b Laryngoscope, 129:S1-S11, 2019.
Affiliation(s)
- Austin S Rose
- Department of Otolaryngology-Head and Neck Surgery, University of North Carolina, Chapel Hill, North Carolina, USA
- Hyounghun Kim
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA
- Henry Fuchs
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA
- Jan-Michael Frahm
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA
7
Paydarfar JA, Wu X, Halter RJ. Initial experience with image-guided surgical navigation in transoral surgery. Head Neck 2018; 41:E1-E10. [PMID: 30556235] [DOI: 10.1002/hed.25380]
Abstract
BACKGROUND Surgical navigation using image guidance may improve the safety and efficacy of transoral surgery (TOS); however, preoperative imaging cannot be accurately registered to the intraoperative state due to deformations resulting from placement of the laryngoscope or retractor. This proof of concept study explores feasibility and registration accuracy of surgical navigation for TOS by utilizing intraoperative imaging. METHODS Four patients undergoing TOS were recruited. Suspension laryngoscopy was performed with a CT-compatible laryngoscope. An intraoperative contrast enhanced CT scan was obtained and registered to fiducials placed on the neck, face, and laryngoscope. RESULTS All patients were successfully scanned and registered. Registration accuracy within the pharynx and larynx was 1 mm or less. Target registration was confirmed by localizing endoscopic and surface structures to the CT images. Successful tracking was performed in all 4 patients. CONCLUSION For surgical navigation during TOS, although a high level of registration accuracy can be achieved by utilizing intraoperative imaging, significant limitations of the existing technology have been identified. These limitations, as well as areas for future investigation, are discussed.
Affiliation(s)
- Joseph A Paydarfar
- Section of Otolaryngology, Audiology, and Maxillofacial Surgery, Department of Surgery, Dartmouth-Hitchcock Medical Center, Geisel School of Medicine, Lebanon, New Hampshire
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Xiaotian Wu
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Ryan J Halter
- Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Dartmouth College Geisel School of Medicine, Department of Surgery, Hanover, New Hampshire
8
Abstract
For a variety of head and neck cancers, specifically those of the oropharynx, larynx, and hypopharynx, minimally invasive trans-oral approaches have been developed to reduce perioperative and long-term morbidity. However, in trans-oral surgical approaches, anatomical deformation due to instrumentation, specifically the placement of laryngoscopes and retractors, presents a significant challenge for surgeons relying on preoperative imaging to resect tumors to negative margins. Quantifying this deformation is needed to develop predictive models of operative deformation. To study it, we used a CT/MR-compatible laryngoscopy system in concert with intraoperative CT imaging. 3D models of preoperative and intraoperative anatomy were generated. Mandible and hyoid displacements as well as tongue deformations were quantified for eight patients undergoing diagnostic laryngoscopy. Across patients, we found on average 1.3 cm of displacement of these anatomic structures due to laryngoscope insertion. On average, the maximum displacement for certain tongue regions exceeded 4 cm. The anatomical deformations quantified here can serve as a reference for describing how the upper aerodigestive tract anatomy changes during instrumentation and may be helpful in developing predictive models of intraoperative upper aerodigestive tract deformation.
9
Ma AK, Daly M, Qiu J, Chan HHL, Goldstein DP, Irish JC, de Almeida JR. Intraoperative image guidance in transoral robotic surgery: A pilot study. Head Neck 2017; 39:1976-1983. [DOI: 10.1002/hed.24805]
Affiliation(s)
- Andrew K. Ma
- Department of Otolaryngology - Head and Neck Surgery/Surgical Oncology, University of Toronto, Toronto, Ontario, Canada
- Michael Daly
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, Ontario, Canada
- Jimmy Qiu
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, Ontario, Canada
- Harley H. L. Chan
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, Ontario, Canada
- David P. Goldstein
- Department of Otolaryngology - Head and Neck Surgery/Surgical Oncology, University of Toronto, Toronto, Ontario, Canada
- Jonathan C. Irish
- Department of Otolaryngology - Head and Neck Surgery/Surgical Oncology, University of Toronto, Toronto, Ontario, Canada
- Guided Therapeutics (GTx) Program, Techna Institute, University Health Network, Toronto, Ontario, Canada
- John R. de Almeida
- Department of Otolaryngology - Head and Neck Surgery/Surgical Oncology, University of Toronto, Toronto, Ontario, Canada
10
Marinetto E, Uneri A, De Silva T, Reaungamornrat S, Zbijewski W, Sisniega A, Vogt S, Kleinszig G, Pascau J, Siewerdsen JH. Integration of free-hand 3D ultrasound and mobile C-arm cone-beam CT: Feasibility and characterization for real-time guidance of needle insertion. Comput Med Imaging Graph 2017; 58:13-22. [PMID: 28414927] [DOI: 10.1016/j.compmedimag.2017.03.003]
Abstract
This work presents the development of an integrated ultrasound (US)-cone-beam CT (CBCT) system for image-guided needle interventions, combining a low-cost ultrasound system (Interson VC 7.5 MHz, Pleasanton, CA) with a mobile C-arm for fluoroscopy and CBCT via a surgical tracker. Imaging performance of the ultrasound system was characterized in terms of depth-dependent contrast-to-noise ratio (CNR) and spatial resolution. The US-CBCT system was evaluated in phantom studies simulating three needle-based procedures: drug delivery, tumor ablation, and lumbar puncture. The low-cost ultrasound provided flexibility but exhibited modest CNR and spatial resolution, likely limiting it to fairly superficial applications within a ∼10 cm depth of view. Needle tip localization demonstrated a target registration error of 2.1-3.0 mm using fiducial-based registration.
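As an aside on the depth-dependent CNR characterization mentioned above: CNR has several definitions in the ultrasound literature; one common form, contrast over background noise, is sketched below with synthetic speckle statistics (all values are made up):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio of an ROI against background:
    |mean(roi) - mean(bg)| / std(bg). One common definition; other
    variants pool the noise of both regions."""
    roi, background = np.asarray(roi, float), np.asarray(background, float)
    return abs(roi.mean() - background.mean()) / background.std()

# Toy values: bright target (mean 100) over speckle background (mean 40, std 10).
rng = np.random.default_rng(0)
bg = rng.normal(40, 10, size=10_000)
roi = rng.normal(100, 10, size=1_000)
print(round(cnr(roi, bg), 1))  # roughly 60 / 10, i.e. about 6
```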
Affiliation(s)
- E Marinetto
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain; Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- A Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- S Reaungamornrat
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- S Vogt
- Siemens Healthcare XP Division, Erlangen, Germany
- G Kleinszig
- Siemens Healthcare XP Division, Erlangen, Germany
- J Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
11
Reaungamornrat S, De Silva T, Uneri A, Goerres J, Jacobson M, Ketcha M, Vogt S, Kleinszig G, Khanna AJ, Wolinsky JP, Prince JL, Siewerdsen JH. Performance evaluation of MIND demons deformable registration of MR and CT images in spinal interventions. Phys Med Biol 2016; 61:8276-8297. [PMID: 27811396] [DOI: 10.1088/0031-9155/61/23/8276]
Abstract
Accurate intraoperative localization of target anatomy and adjacent nervous and vascular tissue is essential to safe, effective surgery, and multimodality deformable registration can be used to identify such anatomy by fusing preoperative CT or MR images with intraoperative images. A deformable image registration method has been developed to estimate viscoelastic diffeomorphisms between preoperative MR and intraoperative CT using modality-independent neighborhood descriptors (MIND) and a Huber metric for robust registration. The method, called MIND Demons, optimizes a constrained symmetric energy functional incorporating priors on smoothness, geodesics, and invertibility by alternating between Gauss-Newton optimization and Tikhonov regularization in a multiresolution scheme. Registration performance was evaluated for the MIND Demons method with a symmetric energy formulation in comparison to an asymmetric form, and sensitivity to anisotropic MR voxel size was analyzed in phantom experiments emulating image-guided spine surgery, in comparison to a free-form deformation (FFD) method using local mutual information (LMI). Performance was validated in a clinical study involving 15 patients undergoing intervention of the cervical, thoracic, and lumbar spine. The target registration error (TRE) for the symmetric MIND Demons formulation (1.3 ± 0.8 mm (median ± interquartile)) outperformed the asymmetric form (3.6 ± 4.4 mm). The method demonstrated fairly minor sensitivity to anisotropic MR voxel size, with median TRE ranging 1.3-2.9 mm for MR slice thickness ranging 0.9-9.9 mm, compared with TRE of 3.2-4.1 mm for LMI FFD over the same range. Evaluation on clinical data demonstrated sub-voxel TRE (<2 mm) in all fifteen cases, with realistic deformations that preserved topology with sub-voxel invertibility (0.001 mm) and positive-determinant spatial Jacobians. The approach therefore appears robust against realistic anisotropic resolution characteristics in MR and yields registration accuracy suitable for application in image-guided spine surgery.
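MIND Demons adds modality-independent descriptors, a Huber metric, and diffeomorphic constraints; the underlying Demons iteration it builds on can be sketched in its classic mono-modal (Thirion) form. This is a deliberate simplification for illustration, not the authors' algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=1.0):
    """One classic (Thirion) Demons iteration for 2-D mono-modal images.

    disp: (2, H, W) displacement field; the force is driven by the
    intensity difference and the fixed-image gradient, and the updated
    field is Gaussian-smoothed as a simple regularizer.
    """
    h, w = fixed.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]],
                             order=1, mode="nearest")
    diff = warped - fixed
    gy, gx = np.gradient(fixed)
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1.0                    # avoid division by zero
    update = np.stack([diff * gy, diff * gx]) / denom
    disp = disp - update                       # demons force: -(m-f) grad(f) / denom
    return gaussian_filter(disp, sigma=(0, sigma, sigma))
```

Iterating this on a smooth image pair drives the warped moving image toward the fixed image; real implementations add multiresolution schemes and diffeomorphic composition.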
Affiliation(s)
- S Reaungamornrat
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
12
Liu WP, Richmon JD, Sorger JM, Azizian M, Taylor RH. Augmented reality and cone beam CT guidance for transoral robotic surgery. J Robot Surg 2015; 9:223-33. [PMID: 26531203] [PMCID: PMC4634572] [DOI: 10.1007/s11701-015-0520-5]
Abstract
In transoral robotic surgery, preoperative image data do not reflect large deformations of the operative workspace arising from perioperative setup. To address this challenge, in this study we explore image guidance with cone-beam computed tomographic angiography to guide the dissection of critical vascular landmarks and the resection of base-of-tongue neoplasms with adequate margins in transoral robotic surgery. We identify critical vascular landmarks from perioperative C-arm imaging to augment the stereoscopic view of a da Vinci Si robot, in addition to incorporating visual feedback from relative tool positions. Experiments resecting base-of-tongue mock tumors were conducted on a series of ex vivo and in vivo animal models, comparing the proposed workflow for video augmentation to standard non-augmented practice and to alternative, fluoroscopy-based image guidance. Accurate identification of registered augmented critical anatomy during controlled arterial dissection and en bloc mock tumor resection was possible with the augmented reality system. The proposed image-guided robotic system also achieved improved resection ratios of mock tumor margins (1.00) compared with control scenarios (0.0) and alternative methods of image guidance (0.58). The experimental results show the feasibility of the proposed workflow and the advantages of cone-beam CT image guidance through video augmentation of the primary stereo endoscopy, as compared to control and alternative navigation methods.
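Overlaying registered 3-D landmarks on a stereo endoscopic view ultimately comes down to pinhole-camera projection of transformed points. A generic sketch (the intrinsics, frames, and values below are illustrative assumptions, not the da Vinci interface):

```python
import numpy as np

def project_points(pts_world, R, t, K):
    """Project 3-D landmark positions into a camera (endoscope) image
    using a pinhole model: x ~ K (R X + t)."""
    pts_cam = pts_world @ R.T + t              # world -> camera frame
    uvw = pts_cam @ K.T                        # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]            # perspective divide

# Toy calibration: 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)                  # camera at world origin
pt = np.array([[0.05, 0.0, 0.5]])              # 5 cm right, 50 cm deep
print(project_points(pt, R, t, K))             # -> [[400. 240.]]
```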
Affiliation(s)
- Wen P Liu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jeremy D Richmon
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Russell H Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
13
Wu D, Qin J, Guo X, Li S. Analysis of the difference in the course of the lingual arteries caused by tongue position change. Laryngoscope 2014; 125:762-6. [PMID: 25291559] [DOI: 10.1002/lary.24959]
Affiliation(s)
- Dahai Wu
- Department of Otolaryngology, General Hospital of Shenyang Military Area Command, Shenyang, China
- Jie Qin
- Department of Otolaryngology, General Hospital of Shenyang Military Area Command, Shenyang, China
- Xiaohong Guo
- Department of Otolaryngology, General Hospital of Shenyang Military Area Command, Shenyang, China
- Shuhua Li
- Department of Otolaryngology, General Hospital of Shenyang Military Area Command, Shenyang, China
14
Dang H, Wang AS, Sussman MS, Siewerdsen JH, Stayman JW. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images. Phys Med Biol 2014; 59:4799-826. [PMID: 25097144] [PMCID: PMC4142353] [DOI: 10.1088/0031-9155/59/17/4799]
Abstract
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods, including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE), over a wide range of sampling sparsity and exposure levels.
Affiliation(s)
- H Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
15
Cazoulat G, Simon A, Dumenil A, Gnep K, De Crevoisier R, Acosta-Tamayo O, Haigron P. Surface-constrained nonrigid registration for dose monitoring in prostate cancer radiotherapy. IEEE Trans Med Imaging 2014; 33:1464-1474. [PMID: 24710827] [PMCID: PMC5325876] [DOI: 10.1109/tmi.2014.2314574]
Abstract
This paper addresses the issue of cumulative dose estimation from cone beam computed tomography (CBCT) images in prostate cancer radiotherapy. It focuses on the dose received by the surfaces of the main organs at risk, namely the bladder and rectum. We proposed both a surface-constrained dose accumulation approach and an extensive evaluation of it. Our approach relied on the nonrigid registration (NRR) of daily acquired CBCT images onto the planning CT image. The NRR method was based on a Demons-like algorithm combined with a mutual information metric. It allowed different levels of geometric constraint to be considered, ensuring better point-to-point correspondence, especially when large deformations occurred or in high-dose-gradient areas. Three implementations were compared: 1) fully iconic NRR; 2) iconic NRR constrained with landmarks (LCNRR); and 3) NRR constrained with full delineations of the organs (DBNRR). To obtain reference data, we designed a numerical phantom based on finite-element modeling and image simulation. The methods were assessed on both the numerical phantom and real patient data in order to quantify uncertainties in terms of dose accumulation. The LCNRR method appeared to constitute a good compromise for dose monitoring in clinical practice.
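The Demons-like update at the core of the NRR method can be sketched in 1-D. This is a hypothetical, mono-modal toy using Thirion's classic intensity-difference force; the paper's method additionally uses a mutual information metric and the geometric constraints described above.

```python
import numpy as np

# Minimal 1-D Demons-like registration sketch (simplified toy).
# Each iteration: warp the moving profile by the current displacement,
# compute the demons force from the intensity difference and the fixed
# image gradient, then Gaussian-smooth the updated displacement field.

def gaussian_kernel(sigma, radius):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def demons_1d(fixed, moving, iters=200, sigma=3.0):
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)                  # displacement field
    k = gaussian_kernel(sigma, int(3 * sigma))
    grad = np.gradient(fixed)                 # fixed-image gradient
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)  # moving warped by u
        diff = warped - fixed
        # Thirion's demons force, stabilized by the squared difference
        force = -diff * grad / (grad ** 2 + diff ** 2 + 1e-12)
        u = np.convolve(u + force, k, mode="same")  # regularization
    return u

x = np.arange(200, dtype=float)
fixed = np.exp(-0.5 * ((x - 100) / 10) ** 2)
moving = np.exp(-0.5 * ((x - 106) / 10) ** 2)  # same bump, shifted by +6
u = demons_1d(fixed, moving)
print(u[100])  # near the feature, u should approach +6
```

The Gaussian smoothing of the displacement field is the diffusion-like regularization that keeps the deformation plausible; the landmark and delineation constraints (LCNRR, DBNRR) add further geometric terms on top of this basic scheme.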
Affiliation(s)
- Guillaume Cazoulat
- LTSI, Laboratoire Traitement du Signal et de l'Image
Institut National de la Santé et de la Recherche Médicale - U1099, Université de Rennes 1 - Campus Universitaire de Beaulieu - Bât 22 - 35042 Rennes
| | - Antoine Simon
- LTSI, Laboratoire Traitement du Signal et de l'Image
Institut National de la Santé et de la Recherche Médicale - U1099, Université de Rennes 1 - Campus Universitaire de Beaulieu - Bât 22 - 35042 Rennes
| | - Aurelien Dumenil
- LTSI, Laboratoire Traitement du Signal et de l'Image
Institut National de la Santé et de la Recherche Médicale - U1099, Université de Rennes 1 - Campus Universitaire de Beaulieu - Bât 22 - 35042 Rennes
| | - Khemara Gnep
- Centre Eugène Marquis
CRLCC Eugène Marquis - Avenue Bataille Flandres-Dunkerque 35042 RENNES CEDEX
| | - Renaud De Crevoisier
- LTSI, Laboratoire Traitement du Signal et de l'Image
Institut National de la Santé et de la Recherche Médicale - U1099, Université de Rennes 1 - Campus Universitaire de Beaulieu - Bât 22 - 35042 Rennes
- Centre Eugène Marquis
CRLCC Eugène Marquis - Avenue Bataille Flandres-Dunkerque 35042 RENNES CEDEX
| | - Oscar Acosta-Tamayo
- LTSI, Laboratoire Traitement du Signal et de l'Image
Institut National de la Santé et de la Recherche Médicale - U1099, Université de Rennes 1 - Campus Universitaire de Beaulieu - Bât 22 - 35042 Rennes
| | - Pascal Haigron
- LTSI, Laboratoire Traitement du Signal et de l'Image
Institut National de la Santé et de la Recherche Médicale - U1099, Université de Rennes 1 - Campus Universitaire de Beaulieu - Bât 22 - 35042 Rennes
|
16
|
Reaungamornrat S, Wang AS, Uneri A, Otake Y, Khanna AJ, Siewerdsen JH. Deformable image registration with local rigidity constraints for cone-beam CT-guided spine surgery. Phys Med Biol 2014; 59:3761-87. [PMID: 24937093 DOI: 10.1088/0031-9155/59/14/3761] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Image-guided spine surgery (IGSS) is associated with reduced comorbidity and improved surgical outcome. However, precise localization of target anatomy and adjacent nerves and vessels relative to planning information (e.g., device trajectories) can be challenged by anatomical deformation. Rigid registration alone fails to account for deformation associated with changes in spine curvature, and conventional deformable registration fails to account for the rigidity of the vertebrae, causing unrealistic distortions in the registered image that can confound high-precision surgery. We developed and evaluated a deformable registration method capable of preserving the rigidity of bones while resolving the deformation of surrounding soft tissue. The method aligns preoperative CT to intraoperative cone-beam CT (CBCT) using free-form deformation (FFD) with constraints on rigid body motion imposed on bone voxels identified by a simple intensity threshold. The constraints enforced three properties of a rigid transformation: affinity (AC), orthogonality (OC), and properness (PC). The method also incorporated an injectivity constraint (IC) to preserve topology. Physical experiments involving phantoms, an ovine spine, and a human cadaver, as well as digital simulations, were performed to evaluate the sensitivity to registration parameters, the preservation of rigid body morphology, and the overall registration accuracy of constrained FFD in comparison to conventional unconstrained FFD (uFFD) and Demons registration. FFD with orthogonality and injectivity constraints (denoted FFD+OC+IC) demonstrated improved performance compared to uFFD and Demons. Affinity and properness constraints offered little or no additional improvement. The FFD+OC+IC method preserved rigid body morphology at near-ideal values of zero dilatation (D = 0.05, compared to 0.39 and 0.56 for uFFD and Demons, respectively) and shear (S = 0.08, compared to 0.36 and 0.44 for uFFD and Demons, respectively).
Target registration error (TRE) was similarly improved for FFD+OC+IC (0.7 mm), compared to 1.4 and 1.8 mm for uFFD and Demons. Results were validated in human cadaver studies using CT and CBCT images, with FFD+OC+IC providing excellent preservation of rigid morphology and equivalent or improved TRE. The approach therefore overcomes distortions intrinsic to uFFD and could better facilitate high-precision IGSS.
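The role of an orthogonality constraint can be illustrated with a toy rigidity penalty on the Jacobian of a 2-D displacement field. This is a hypothetical sketch, not the paper's exact functional: for a deformation x -> x + u(x), the local Jacobian J = I + grad(u) of a rigid motion satisfies J^T J = I, so the deviation ||J^T J - I||_F penalizes local stretch and shear while leaving translations and rotations free.

```python
import numpy as np

# Toy orthogonality penalty on a 2-D displacement field u of shape
# (2, H, W), components (u_y, u_x). Rigid motions give zero penalty;
# stretch and shear do not.

def orthogonality_penalty(u):
    duy_dy, duy_dx = np.gradient(u[0])
    dux_dy, dux_dx = np.gradient(u[1])
    # local Jacobians J = I + grad(u), one 2x2 matrix per pixel
    J = np.empty(u.shape[1:] + (2, 2))
    J[..., 0, 0] = 1 + duy_dy; J[..., 0, 1] = duy_dx
    J[..., 1, 0] = dux_dy;     J[..., 1, 1] = 1 + dux_dx
    JtJ = np.einsum('...ki,...kj->...ij', J, J)   # J^T J per pixel
    return np.mean(np.sum((JtJ - np.eye(2)) ** 2, axis=(-2, -1)))

H = W = 32
yy = np.mgrid[0:H, 0:W][0].astype(float)

# pure translation: exactly rigid, zero penalty
u_rigid = np.stack([np.full((H, W), 2.0), np.full((H, W), -1.0)])

# anisotropic stretch y -> 1.2*y: clearly non-rigid, positive penalty
u_stretch = np.stack([0.2 * yy, np.zeros((H, W))])

print(orthogonality_penalty(u_rigid), orthogonality_penalty(u_stretch))
```

Adding a penalty of this kind to an FFD objective, weighted on thresholded bone voxels, is the mechanism by which vertebrae can be kept locally rigid while the surrounding soft tissue deforms freely.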
Affiliation(s)
- S Reaungamornrat
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
|