1
Geng H, Xiao D, Yang S, Fan J, Fu T, Lin Y, Bai Y, Ai D, Song H, Wang Y, Duan F, Yang J. CT2X-IRA: CT to x-ray image registration agent using domain-cross multi-scale-stride deep reinforcement learning. Phys Med Biol 2023; 68:175024. [PMID: 37549676] [DOI: 10.1088/1361-6560/acede5]
Abstract
Objective. In computer-assisted minimally invasive surgery, the intraoperative x-ray image is enhanced by overlaying it with a preoperative CT volume to improve visualization of vital anatomical structures. Accurate and robust 3D/2D registration of the CT volume and the x-ray image is therefore highly desirable in clinical practice. However, previous registration methods are sensitive to initial misalignment and prone to local minima, limiting their accuracy and robustness. Approach. To improve registration performance, we propose a novel CT/x-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework, which contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and a flexible action step size, establishing fast and globally optimal convergence of the registration task. (2) A domain adaptation module reduces the domain gap between the x-ray image and the digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement. (3) A weighted reward function facilitates CT2X-IRA in searching for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignments. Main results. We evaluate the proposed CT2X-IRA on both public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with computation times of 1.5 s and 1.1 s, respectively, demonstrating an accurate and fast workflow for CT/x-ray image rigid registration. Significance. CT2X-IRA achieves accurate and robust 3D/2D registration of CT and x-ray images, suggesting its potential significance in clinical applications.
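The abstract above does not give the exact form of the weighted reward. As a minimal sketch of one plausible variant (not the authors' implementation), the reward could be the weighted decrease in distance to the ground-truth transformation parameters, with larger weights on the out-of-plane components; the parameter ordering and weight values below are hypothetical:

```python
import numpy as np

def weighted_reward(theta_prev, theta_curr, theta_gt, weights):
    """Reward = weighted decrease in distance to the ground-truth
    transformation parameters (hypothetical form; larger weights
    emphasise the out-of-plane components)."""
    theta_prev, theta_curr, theta_gt = map(np.asarray, (theta_prev, theta_curr, theta_gt))
    w = np.asarray(weights, dtype=float)
    d_prev = np.sum(w * np.abs(theta_prev - theta_gt))
    d_curr = np.sum(w * np.abs(theta_curr - theta_gt))
    return d_prev - d_curr  # positive if the agent moved closer

# 6-DoF rigid parameters assumed as (tx, ty, tz, rx, ry, rz);
# tz, rx, ry taken as the out-of-plane components
w = [1.0, 1.0, 2.0, 2.0, 2.0, 1.0]
r = weighted_reward([5, 0, 10, 0, 0, 0], [5, 0, 8, 0, 0, 0], [0, 0, 0, 0, 0, 0], w)
print(r)  # 4.0: the 2 mm improvement in tz is rewarded with weight 2
```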
Affiliation(s)
- Haixiao Geng
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Deqiang Xiao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Shuo Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Tianyu Fu
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yucong Lin
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yanhua Bai
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song
- School of Computer Science, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yongtian Wang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Feng Duan
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
2
Naik RR, Bhat SN, Ampar N, Kundangar R. Realistic C-arm to pCT registration for vertebral localization in spine surgery. Med Biol Eng Comput 2022; 60:2271-2289. [PMID: 35680729] [PMCID: PMC9294032] [DOI: 10.1007/s11517-022-02600-5]
Abstract
Spine surgeries are vulnerable to wrong-level surgery and postoperative complications because of the spine's complex structure. Unavailability of 3D intraoperative imaging devices, low-contrast intraoperative X-ray images, variable clinical and patient conditions, manual analyses, a lack of skilled technicians, and human error increase the chances of wrong-site or wrong-level surgery. State-of-the-art work relies on 3D-2D image registration systems and other medical image processing techniques to address the complications associated with spine surgery. Intensity-based 3D-2D image registration systems have been widely used across various clinical applications. However, these frameworks are limited to specific clinical conditions such as anatomy, dimension of image correspondence, and imaging modalities. Moreover, these frameworks have several prerequisites for clinical application, such as dataset requirements, computation speed, high-end system configurations, limited capture range, and multiple local maxima. A simple and effective registration framework was designed with the objective of vertebral level identification and pose estimation from intraoperative fluoroscopic images by combining intensity-based and iterative closest point (ICP)-based 3D-2D registration. A hierarchical multi-stage registration framework was designed that comprises coarse and fine registration. The coarse registration was performed in two stages: intensity-similarity-based spatial localization and source-to-detector localization based on the intervertebral distance correspondence between vertebral centroids in projected and intraoperative X-ray images. Finally, to speed up target localization in the intraoperative application, a rigid ICP-based fine registration was performed based on 3D-2D vertebral centroid correspondence.
The mean projection distance error (mPDE) measurement, the visual similarity between the projection image at the fine registration point and the intraoperative X-ray image, and surgeons' feedback were used for quality assurance of the designed registration framework. The average mPDE after peak signal-to-noise ratio (PSNR)-based coarse registration was 20.41 mm. After coarse registration in the spatial region and the source-to-detector direction, the average mPDE was reduced to 12.18 mm. After the fine ICP-based registration, the mean mPDE was finally reduced to 0.36 mm. The approximate mean times required for coarse registration, fine registration, and DRR image generation at the final registration point were 10 s, 15 s, and 1.5 min, respectively. The designed registration framework can act as a supporting tool for vertebral level localization and pose estimation in an intraoperative environment. The framework was designed with the future perspective of intraoperative target localization and pose estimation irrespective of the target anatomy.
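The mean projection distance error (mPDE) used in this evaluation can be sketched as follows: project 3D landmarks (e.g. vertebral centroids) with the estimated projection matrix and average the 2D distance to their reference positions. The projection matrix and landmark values below are hypothetical, and the authors' exact implementation may differ:

```python
import numpy as np

def mean_projection_distance_error(P, pts3d, pts2d_ref):
    """mPDE: project 3D landmarks with the estimated 3x4 projection
    matrix P and average the 2D distance to their reference positions
    in the intraoperative X-ray image."""
    pts3d_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous coords
    proj = (P @ pts3d_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]                       # perspective divide
    return float(np.mean(np.linalg.norm(proj - pts2d_ref, axis=1)))

# Identity-like projection (hypothetical numbers): unit focal length, unit depth
P = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
pts3d = np.array([[0., 0., 1.], [2., 0., 1.]])
pts2d = np.array([[0., 0.], [2., 1.]])
print(mean_projection_distance_error(P, pts3d, pts2d))  # 0.5
```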
Affiliation(s)
- Roshan Ramakrishna Naik
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Shyamasunder N Bhat
- Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Nishanth Ampar
- Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Raghuraj Kundangar
- Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
3
Fully Automatic Registration Methods for Chest X-Ray Images. J Med Biol Eng 2021; 41:826-843. [PMID: 34744547] [PMCID: PMC8563362] [DOI: 10.1007/s40846-021-00666-4]
Abstract
Purpose Image registration has become an important part of medical applications as healthcare technology has advanced in recent years. Various studies have been proposed for medical applications, including clinical tracking of events and updating of treatment plans for radiotherapy and surgery. This study presents a fully automatic registration system for chest X-ray images that generates fusion results for difference analysis. Using the accurate alignment of the proposed system, the fusion result indicates the differences in the thoracic area during the treatment process. Methods The proposed method consists of a data normalization method, a hybrid L-SVM model to detect lungs, ribs and clavicles for object recognition, a landmark matching algorithm, two-stage transformation approaches, and a fusion method for difference analysis to highlight differences in the thoracic area. In the evaluation, a preliminary test was performed to compare three transformation models, followed by a full evaluation process to compare the proposed method with two existing elastic registration methods. Results The results show that the proposed method produces significantly better results than the two benchmark methods (P-value ≤ 0.001). The proposed system achieves the lowest mean registration error distance (MRED) (8.99 mm, 23.55 pixels) and the lowest mean registration error ratio (MRER) with respect to the length of the image diagonal (1.61%), compared to the two benchmark approaches with MRED of (15.64 mm, 40.97 pixels) and (180.5 mm, 472.69 pixels) and MRER of (2.81%) and (32.51%), respectively. Conclusions The experimental results show that the proposed method is capable of accurately aligning chest X-ray images acquired at different times, assisting doctors in tracing individual health status, evaluating treatment effectiveness, and monitoring patient recovery for thoracic diseases.
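The relationship between MRED and MRER reported above (MRER expresses the mean registration error as a percentage of the image diagonal) can be illustrated with a small sketch; the landmark coordinates, image size, and pixel spacing below are hypothetical:

```python
import numpy as np

def mred_mrer(moving_pts, fixed_pts, image_shape, mm_per_pixel):
    """Mean registration error distance (MRED) between corresponding
    landmarks after alignment, in mm, and the mean registration error
    ratio (MRER) as a percentage of the image diagonal."""
    d = np.linalg.norm(np.asarray(moving_pts, float) - np.asarray(fixed_pts, float), axis=1)
    mred_px = float(np.mean(d))                  # mean error in pixels
    diag_px = float(np.hypot(*image_shape))      # image diagonal in pixels
    return mred_px * mm_per_pixel, 100.0 * mred_px / diag_px

# hypothetical numbers: two landmarks off by 3 and 5 pixels on a 300x400 image
mred_mm, mrer_pct = mred_mrer([[0, 3], [5, 0]], [[0, 0], [0, 0]], (300, 400), 0.4)
print(mred_mm, mrer_pct)  # 1.6 0.8
```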
4
Andreozzi E, Fratini A, Esposito D, Cesarelli M, Bifulco P. Toward a priori noise characterization for real-time edge-aware denoising in fluoroscopic devices. Biomed Eng Online 2021; 20:36. [PMID: 33827586] [PMCID: PMC8028787] [DOI: 10.1186/s12938-021-00874-8]
Abstract
BACKGROUND Low-dose X-ray images have become increasingly popular in recent decades, due to the need to guarantee the lowest reasonable patient exposure. Dose reduction causes a substantial increase in quantum noise, which needs to be suitably suppressed. In particular, real-time denoising is required to support common interventional fluoroscopy procedures. Knowledge of the noise statistics provides precious information that helps to improve denoising performance, making noise estimation a crucial task for effective denoising strategies. Noise statistics depend on different factors, but are mainly influenced by the X-ray tube settings, which may vary even within the same procedure. This complicates real-time denoising, because noise estimation would have to be repeated after any change in tube settings, which is hardly feasible in practice. This work investigates the feasibility of an a priori characterization of noise for a single fluoroscopic device, which would obviate the need to infer noise statistics before each new image acquisition. The noise estimation algorithm used in this study was tested in silico to assess its accuracy and reliability. Then, real sequences were acquired by imaging two different X-ray phantoms with a commercial fluoroscopic device at various X-ray tube settings. Finally, noise estimation was performed to assess the agreement of the noise statistics inferred from two different sequences acquired independently under the same operating conditions. RESULTS The noise estimation algorithm proved capable of retrieving noise statistics regardless of the particular imaged scene, achieving good results even with only 10 frames (mean percentage error lower than 2%).
The tests performed on the real fluoroscopic sequences confirmed that the estimated noise statistics are independent of the particular informational content of the scene from which they were inferred, as they turned out to be consistent across sequences of the two different phantoms acquired independently with the same X-ray tube settings. CONCLUSIONS These encouraging results suggest that an a priori characterization of noise for a single fluoroscopic device is feasible and could improve the practical implementation of real-time denoising strategies that exploit noise statistics to improve the trade-off between noise reduction and detail preservation.
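A common way to characterize signal-dependent quantum noise of the kind discussed in this abstract is to fit a linear mean-variance model, var ≈ a·mean + b, to a short stack of frames of a static scene. The sketch below illustrates this idea only; it is not the authors' algorithm, and the scene and frame counts are made up:

```python
import numpy as np

def fit_noise_model(frames):
    """Estimate a signal-dependent (Poisson-Gaussian) noise model
    var = a*mean + b from a stack of frames of a static scene,
    in the spirit of a per-device a priori characterization."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0).ravel()        # per-pixel temporal mean
    var = frames.var(axis=0, ddof=1).ravel()  # per-pixel temporal variance
    a, b = np.polyfit(mean, var, 1)           # linear fit: var ~ a*mean + b
    return a, b

rng = np.random.default_rng(0)
truth = rng.uniform(50, 200, size=(64, 64))     # hypothetical static scene
stack = rng.poisson(truth, size=(10, 64, 64))   # 10 noisy frames
a, b = fit_noise_model(stack)
print(a)  # close to 1.0 for pure Poisson noise (variance equals mean)
```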
Affiliation(s)
- Emilio Andreozzi
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Antonio Fratini
- Biomedical Engineering, School of Life and Health Sciences, Aston University, Birmingham, B4 7ET, UK
- Daniele Esposito
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Mario Cesarelli
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Paolo Bifulco
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
5
Postolka B, List R, Thelen B, Schütz P, Taylor WR, Zheng G. Evaluation of an intensity-based algorithm for 2D/3D registration of natural knee videofluoroscopy data. Med Eng Phys 2020; 77:107-113. [PMID: 31980316] [DOI: 10.1016/j.medengphy.2020.01.002]
Abstract
The accurate quantification of in-vivo tibio-femoral kinematics is essential for understanding joint functionality, but determination of the 3D pose of bones from 2D single-plane fluoroscopic images remains challenging. We aimed to evaluate the accuracy, reliability and repeatability of an intensity-based 2D/3D registration algorithm. The accuracy was evaluated using fluoroscopic images of 2 radiopaque bones in 18 different poses, compared against a gold-standard fiducial calibration device. In addition, 3 natural femora and 3 natural tibiae were used to examine registration reliability and repeatability. Both manual fitting and intensity-based registration exhibited a mean absolute error of <1 mm in-plane. Overall, intensity-based registration of the femoral bone model revealed significantly higher translational and rotational errors than manual fitting, while no statistical differences (except for y-axis translation) were found for the tibial bone model. The repeatability of 108 intensity-based registrations showed mean in-plane standard deviations of 0.23-0.56 mm, but out-of-plane position repeatability was lower (mean SD: femur 7.98 mm, tibia 6.96 mm). SDs for rotations averaged 0.77-2.52°. While the algorithm registered some images extremely well, other images clearly required manual intervention. When the algorithm registered the bones repeatably, it was also accurate, suggesting an approach that includes manual intervention could become practical for efficient and accurate registration.
Affiliation(s)
- Barbara Postolka
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Renate List
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Benedikt Thelen
- University of Berne, Institute for Surgical Technology & Biomechanics, Stauffacherstrasse 78, 3014 Bern, Switzerland
- Pascal Schütz
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- William R Taylor
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Guoyan Zheng
- University of Berne, Institute for Surgical Technology & Biomechanics, Stauffacherstrasse 78, 3014 Bern, Switzerland
6
Sarno A, Andreozzi E, De Caro D, Di Meo G, Strollo AGM, Cesarelli M, Bifulco P. Real-time algorithm for Poissonian noise reduction in low-dose fluoroscopy: performance evaluation. Biomed Eng Online 2019; 18:94. [PMID: 31511017] [PMCID: PMC6737613] [DOI: 10.1186/s12938-019-0713-7]
Abstract
BACKGROUND Quantum noise intrinsically limits the quality of fluoroscopic images: the lower the X-ray dose, the higher the noise. Fluoroscopy video processing can enhance image quality and allows further lowering of the patient dose. This study assesses the performance achieved by a Noise Variance Conditioned Average (NVCA) spatio-temporal filter for real-time denoising of fluoroscopic sequences. The filter is specifically designed for quantum noise suppression and edge preservation. It is an average filter that excludes neighbourhood pixel values exceeding noise statistic limits, by means of a threshold that depends on the local noise standard deviation, in order to preserve the image's spatial resolution. Performance was evaluated in terms of contrast-to-noise ratio (CNR) increase, image blurring (full width at half maximum of the line spread function), and computational time. The NVCA filter's performance was compared to that of simple moving average filters and the state-of-the-art video denoising block-matching 4D (VBM4D) algorithm. The influence of the NVCA filter size and threshold on the final image quality was also evaluated. RESULTS For an NVCA filter mask size of 5 × 5 × 5 pixels (the third dimension represents the temporal extent of the filter) and a threshold level equal to 2 times the local noise standard deviation, the NVCA filter achieved a 10% increase in CNR with respect to the unfiltered sequence, while VBM4D achieved a 14% increase. With NVCA, the edge blurring did not depend on the speed of moving objects; with VBM4D, on the other hand, the spatial resolution worsened by a factor of about 2.2 when the object speed was doubled. The NVCA mask size and the local noise threshold level are critical for final image quality. The computational time of the NVCA filter was found to be just a few percent of that required by the VBM4D filter.
CONCLUSIONS The NVCA filter achieved better image quality than simple moving average filters, and lower but comparable quality when compared with the VBM4D filter. The NVCA filter was shown to preserve edge sharpness, particularly for moving objects (performing even better than VBM4D). The simplicity of the NVCA filter and its low computational burden make it suitable for real-time video processing, and its hardware implementation is ready to be included in future fluoroscopy devices, offering a further lowering of the patient's X-ray dose.
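The NVCA principle described above (average over a spatio-temporal window, excluding neighbours that deviate from the centre pixel by more than a noise-dependent threshold) can be sketched as follows. This naive implementation is a sketch only: it uses a single global noise standard deviation instead of a local estimate, and it is far from real-time:

```python
import numpy as np

def nvca_filter(seq, size=5, k=2.0, sigma=None):
    """Noise Variance Conditioned Average (sketch): for each pixel of a
    (frames, H, W) sequence, average only those neighbours in a
    size^3 spatio-temporal window whose values lie within k*sigma of
    the centre pixel, so that edges are not smeared."""
    seq = np.asarray(seq, dtype=float)
    if sigma is None:
        sigma = seq.std()            # crude global noise estimate
    r = size // 2
    pad = np.pad(seq, r, mode="edge")
    out = np.empty_like(seq)
    T, H, W = seq.shape
    for t in range(T):
        for y in range(H):
            for x in range(W):
                win = pad[t:t + size, y:y + size, x:x + size]
                centre = seq[t, y, x]
                keep = np.abs(win - centre) <= k * sigma  # noise-conditioned mask
                out[t, y, x] = win[keep].mean()
    return out

# a sharp step edge survives the filter because cross-edge neighbours
# exceed the k*sigma threshold and are excluded from the average
seq = np.zeros((3, 4, 4)); seq[:, :, 2:] = 100.0
print(np.allclose(nvca_filter(seq, size=3, k=2.0, sigma=1.0), seq))  # True
```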
Affiliation(s)
- A Sarno
- Università di Napoli "Federico II", dip. di Fisica "E. Pancini" & INFN sez. di Napoli, Via Cintia, 80126, Naples, Italy
- E Andreozzi
- Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- Istituti Clinici Scientifici Maugeri S.p.A.-Società Benefit, Via S. Maugeri, 4, 27100, Pavia, Italy
- D De Caro
- Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- G Di Meo
- Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- A G M Strollo
- Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- M Cesarelli
- Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- Istituti Clinici Scientifici Maugeri S.p.A.-Società Benefit, Via S. Maugeri, 4, 27100, Pavia, Italy
- P Bifulco
- Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- Istituti Clinici Scientifici Maugeri S.p.A.-Società Benefit, Via S. Maugeri, 4, 27100, Pavia, Italy
7
List R, Postolka B, Schütz P, Hitz M, Schwilch P, Gerber H, Ferguson SJ, Taylor WR. A moving fluoroscope to capture tibiofemoral kinematics during complete cycles of free level and downhill walking as well as stair descent. PLoS One 2017; 12:e0185952. [PMID: 29016647] [PMCID: PMC5633186] [DOI: 10.1371/journal.pone.0185952]
Abstract
Videofluoroscopy has been shown to provide essential information for evaluating the functionality of total knee arthroplasties. However, due to their limited field of view, most systems can only assess knee kinematics during highly restricted movements. To avoid the limitations of a static image intensifier, a moving fluoroscope has been presented as a standalone system that allows tracking of the knee during multiple complete cycles of level and downhill walking, as well as stair descent, in combination with synchronous assessment of ground reaction forces and whole-body skin marker measurements. Here, we assess the ability of the system to keep the knee in the field of view of the image intensifier. By measuring ten total knee arthroplasty subjects, we demonstrate that it is possible to keep the knee within 1.8 ± 1.4 cm vertically and 4.0 ± 2.6 cm horizontally of the centre of the intensifier throughout full cycles of activities of daily living. Since control of the system is based on real-time feedback from a wire sensor, the system does not depend on repeatable gait patterns, and is instead able to capture pathological motion patterns with low inter-trial repeatability.
Affiliation(s)
- Renate List
- Institute for Biomechanics, ETH Zurich, Zurich, Switzerland
- Pascal Schütz
- Institute for Biomechanics, ETH Zurich, Zurich, Switzerland
- Marco Hitz
- Institute for Biomechanics, ETH Zurich, Zurich, Switzerland
- Peter Schwilch
- Institute for Biomechanics, ETH Zurich, Zurich, Switzerland
- Hans Gerber
- Institute for Biomechanics, ETH Zurich, Zurich, Switzerland
8
Ketcha MD, De Silva T, Uneri A, Jacobson MW, Goerres J, Kleinszig G, Vogt S, Wolinsky JP, Siewerdsen JH. Multi-stage 3D-2D registration for correction of anatomical deformation in image-guided spine surgery. Phys Med Biol 2017; 62:4604-4622. [PMID: 28375139] [PMCID: PMC5755708] [DOI: 10.1088/1361-6560/aa6b3e]
Abstract
A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study, in which a CT image of the spine was acquired followed by a series of 7 mobile radiographs with an increasing degree of deformation applied. Second, the method was validated using a clinical dataset of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using the projection distance error (PDE) and the failure rate (PDE > 20 mm, i.e. a label registered outside the vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach on real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered with the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. compatibility with standard clinical workflow, large capture range, and robustness against mismatch in image content) and extends its capability to cases exhibiting strong changes in spinal curvature.
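The two summary statistics used in this evaluation, maximum PDE and failure rate at a 20 mm threshold, are straightforward to compute; a small sketch with hypothetical per-label errors:

```python
import numpy as np

def pde_stats(pde_mm, failure_threshold=20.0):
    """Summarise projection distance errors: maximum PDE and failure
    rate, where a failure is a label registered more than the
    threshold (20 mm, i.e. outside the vertebra) from its target."""
    pde = np.asarray(pde_mm, dtype=float)
    return float(pde.max()), float(np.mean(pde > failure_threshold))

# hypothetical per-label errors for one case: one of four labels fails
max_pde, fail_rate = pde_stats([1.2, 3.9, 0.8, 22.4])
print(max_pde, fail_rate)  # 22.4 0.25
```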
Affiliation(s)
- M D Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
9
Ghafurian S, Hacihaliloglu I, Metaxas DN, Tan V, Li K. A computationally efficient 3D/2D registration method based on image gradient direction probability density function. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.07.070]
10
Vascular image registration techniques: A living review. Med Image Anal 2017; 35:1-17. [DOI: 10.1016/j.media.2016.05.005]
11
Bouget D, Allan M, Stoyanov D, Jannin P. Vision-based and marker-less surgical tool detection and tracking: a review of the literature. Med Image Anal 2016; 35:633-654. [PMID: 27744253] [DOI: 10.1016/j.media.2016.09.003]
Abstract
In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome the challenges arising from remote eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Having real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature dealing with vision-based, marker-less surgical tool detection. This paper includes three primary contributions: (1) identification and analysis of datasets used for developing and testing detection algorithms, (2) an in-depth comparison of surgical tool detection methods, from the feature extraction process to the model learning strategy, highlighting existing shortcomings, and (3) analysis of the validation techniques employed to obtain detection performance results and to establish comparisons between surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords "surgical tool detection", "surgical tool tracking", "surgical instrument detection" and "surgical instrument tracking", limiting results to the years 2000-2015. Our study shows that, despite significant progress over the years, the lack of established surgical tool datasets and of a reference format for performance assessment and method ranking is preventing faster improvement.
Affiliation(s)
- David Bouget
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France
- Max Allan
- Center for Medical Image Computing, University College London, WC1E 6BT London, United Kingdom
- Danail Stoyanov
- Center for Medical Image Computing, University College London, WC1E 6BT London, United Kingdom
- Pierre Jannin
- Medicis team, INSERM U1099, Université de Rennes 1 LTSI, 35000 Rennes, France
12
Kim J, Li S, Pradhan D, Hammoud R, Chen Q, Yin FF, Zhao Y, Kim JH, Movsas B. Comparison of Similarity Measures for Rigid-body CT/Dual X-ray Image Registrations. Technol Cancer Res Treat 2007; 6:337-46. [PMID: 17668942] [DOI: 10.1177/153303460700600411]
Abstract
A set of experiments was conducted to evaluate six similarity measures for intensity-based rigid-body 3D/2D image registration. A similarity measure is an index that quantifies the similarity between a digitally reconstructed radiograph (DRR) and an x-ray planar image. The registration is accomplished by iteratively maximizing the sum of the similarity measures between biplane x-ray images and the corresponding DRRs. We evaluated the accuracy and attraction ranges of registrations using six different similarity measures in phantom experiments for the head, thorax, and pelvis. The images were acquired using a Varian Medical Systems On-Board Imager. Our results indicated that normalized cross correlation and entropy of difference showed a wide attraction range (62 deg and 83 mm mean attraction range, ωmean), but the worst accuracy (4.2 mm maximum error, emax). The gradient-based similarity measures, gradient correlation and gradient difference, and the pattern intensity showed sub-millimeter accuracy, but narrow attraction ranges (ωmean = 29 deg, 31 mm). Mutual information fell between these two groups (emax = 2.5 mm, ωmean = 48 deg, 52 mm). On data comprising 120 x-ray pairs from eight prostate patients in an IRB-approved study, gradient difference showed the best accuracy. In clinical applications, registrations starting with mutual information followed by gradient difference may provide the best accuracy and robustness.
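Two of the similarity measures compared in this study can be sketched compactly. Normalized cross correlation is standard; the gradient difference formulation shown is one common variant from the literature, not necessarily the exact one used here:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between a DRR and an X-ray image
    (1.0 for identical images, -1.0 for inverted contrast)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def gradient_difference(a, b, scale=1.0):
    """Gradient difference similarity (one common formulation): sum of
    1/(1 + diff^2) over the differences of the image gradients; larger
    when the gradients of DRR and X-ray agree."""
    gay, gax = np.gradient(a)
    gby, gbx = np.gradient(b)
    dx, dy = gax - scale * gbx, gay - scale * gby
    return float(np.sum(1.0 / (1.0 + dx**2)) + np.sum(1.0 / (1.0 + dy**2)))

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # simple 8x8 ramp image
print(ncc(img, img))                  # 1.0 for identical images
print(gradient_difference(img, img))  # 128.0: every term contributes 1
```

In an actual registration loop these scores would be maximized over the rigid transformation parameters used to generate each DRR.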
Affiliation(s)
- Jinkoo Kim
- Department of Radiation Oncology, Henry Ford Health System, Detroit, MI 48202, USA.

13
Fully automated 2D-3D registration and verification. Med Image Anal 2015; 26:108-19. [PMID: 26387052] [DOI: 10.1016/j.media.2015.08.005]
Abstract
Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra-based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image, and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a gradient difference similarity measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value and the other based on the pose agreement between multiple vertebra-based registrations. The proposed methods are employed here for CT to fluoroscopy registration and are trained and tested with data from 31 clinical procedures with 417 low-dose (i.e. low-quality, high-noise) interventional fluoroscopy images. When similarity-value-based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a 'no registration' result is produced for the remaining 4.27% of cases (i.e. the incorrect registration rate is 0%). The system also automatically detects input images outside its operating range.

14
Alhrishy M, Varnavas A, Carrell T, King A, Penney G. Interventional digital tomosynthesis from a standard fluoroscopy system using 2D-3D registration. Med Image Anal 2015; 19:137-48. [DOI: 10.1016/j.media.2014.10.001]

15
Akter M, Lambert AJ, Pickering MR, Scarvell JM, Smith PN. Robust initialisation for single-plane 3D CT to 2D fluoroscopy image registration. Comput Methods Biomech Biomed Eng Imaging Vis 2014. [DOI: 10.1080/21681163.2014.897649]

16
Chuang HC, Huang DY, Tien DC, Wu RH, Hsu CH. A respiratory compensating system: design and performance evaluation. J Appl Clin Med Phys 2014; 15:4710. [PMID: 24892345] [PMCID: PMC5711063] [DOI: 10.1120/jacmp.v15i3.4710]
Abstract
This study proposes a respiratory compensating system mounted on top of the treatment couch that moves in reverse, opposite to the direction of the targets (diaphragm and hemostatic clip), in order to offset organ displacement generated by respiratory motion. Traditionally, in the treatment of cancer patients, doctors must increase the field size for radiation therapy of tumors because organs move with respiration, which causes radiation-induced inflammation of normal tissues (organs at risk, OAR) while killing cancer cells, thereby reducing the patient's quality of life. This study uses a strain gauge as a respiratory signal capture device to obtain abdominal respiratory signals, together with a proposed respiratory simulation system (RSS) and the respiratory compensating system, to examine how the organ displacement caused by respiratory movement can be offset and to evaluate the compensation effect. The effect of the respiratory compensating system in offsetting target displacement was verified using two methods. The first method uses a linac (medical linear accelerator) to irradiate a 300 cGy dose onto EBT film (GAFCHROMIC EBT film). The second method uses a strain gauge to capture the patients' respiratory signals while using fluoroscopy to observe in vivo targets, such as the diaphragm, to enable the respiratory compensating system to offset target displacements in the superior-inferior (SI) direction. Testing results show that the RSS position error is approximately 0.45 ~ 1.42 mm, while the respiratory compensating system position error is approximately 0.48 ~ 1.42 mm. From the EBT film profiles based on different inputs to the RSS, the results suggest that when the input respiratory signals of the RSS are sine waves, the average dose (%) in the target area is improved by 1.4% ~ 24.4%, and the 95% isodose area by 15.3% ~ 76.9%, after compensation. If the input respiratory signals are actual human respiratory signals, the average dose (%) in the target area is improved by 31.8% ~ 67.7%, and the 95% isodose area by 15.3% ~ 86.4%, after compensation (these improvements increase with increasing respiratory motion displacement). The experimental results from the second method suggest that about 67.3% ~ 82.5% of the displacement can be offset. In addition, the gamma passing rate after compensation can be improved to 100% only when the respiratory motion displacement is within 10 ~ 30 mm. This study shows that the proposed system can compensate for organ displacement caused by respiratory motion, enabling physicians to use lower doses and smaller field sizes when treating tumors. PACS numbers: 87.19.Wx; 87.55.Km

17
Uneri A, Otake Y, Wang AS, Kleinszig G, Vogt S, Khanna AJ, Siewerdsen JH. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy. Phys Med Biol 2013; 59:271-87. [PMID: 24351769] [DOI: 10.1088/0031-9155/59/2/271]
Abstract
An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
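The optimizer's job in this framework is generic: perturb the six pose parameters, render a DRR, score it with the similarity metric, and keep improvements. Full CMA-ES maintains an adapted covariance matrix; as a deliberately simplified, hypothetical stand-in, here is a toy (1+1) evolution strategy with 1/5th-rule step-size control (all names are mine; the `similarity` callable stands in for the gradient information metric evaluated on a freshly rendered DRR):

```python
import numpy as np

def register_1plus1_es(similarity, pose0, sigma0=1.0, iters=3000, seed=0):
    """Toy (1+1) evolution strategy over a 6-DOF pose vector: mutate the
    pose, keep the mutant if the similarity improves, and adapt the step
    size with the classic 1/5th success rule."""
    rng = np.random.default_rng(seed)
    pose = np.asarray(pose0, dtype=float)
    best = similarity(pose)
    sigma = sigma0
    for _ in range(iters):
        cand = pose + sigma * rng.standard_normal(pose.size)
        score = similarity(cand)
        if score > best:
            pose, best = cand, score
            sigma *= 1.22   # expand the search on success
        else:
            sigma *= 0.95   # contract it on failure
    return pose, best
```

On a smooth objective this already shows the characteristic behavior: the step size grows while the pose is far from the optimum and shrinks as it converges, which is the flexibility a covariance-adapting strategy provides in a more principled way.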
Affiliation(s)
- A Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA

18
Muhit AA, Pickering MR, Scarvell JM, Ward T, Smith PN. Image-assisted non-invasive and dynamic biomechanical analysis of human joints. Phys Med Biol 2013; 58:4679-702. [DOI: 10.1088/0031-9155/58/13/4679]

19
Varnavas A, Carrell T, Penney G. Increasing the automation of a 2D-3D registration system. IEEE Trans Med Imaging 2013; 32:387-399. [PMID: 23362246] [DOI: 10.1109/tmi.2012.2227337]
Abstract
Routine clinical use of 2D-3D registration algorithms for Image Guided Surgery remains limited. A key aspect for routine clinical use of this technology is its degree of automation, i.e., the amount of necessary knowledgeable interaction between the clinicians and the registration system. Current image-based registration approaches usually require knowledgeable manual interaction during two stages: for initial pose estimation and for verification of produced results. We propose four novel techniques, particularly suited to vertebra-based registration systems, which can significantly automate both of the above stages. Two of these techniques are based upon the intraoperative "insertion" of a virtual fiducial marker into the preoperative data. The remaining two techniques use the final registration similarity value between multiple CT vertebrae and a single fluoroscopy vertebra. The proposed methods were evaluated with data from 31 operations (31 CT scans, 419 fluoroscopy images). Results show these methods can remove the need for manual vertebra identification during initial pose estimation, and were also very effective for result verification, producing a combined true positive rate of 100% and false positive rate equal to zero. This large decrease in required knowledgeable interaction is an important contribution aiming to enable more widespread use of 2D-3D registration technology.
Affiliation(s)
- Andreas Varnavas
- Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King’s College London, King’s Health Partners, St. Thomas’ Hospital, London, UK.

20
Fisher M, Dorgham O, Laycock SD. Fast reconstructed radiographs from octree-compressed volumetric data. Int J Comput Assist Radiol Surg 2012; 8:313-22. [PMID: 22821505] [DOI: 10.1007/s11548-012-0783-5]
Abstract
PURPOSE Simulated 2D X-ray images called digitally reconstructed radiographs (DRRs) have important applications within medical image registration frameworks, where they are compared with reference X-rays or used in implementations of digital tomosynthesis (DTS). However, rendering DRRs from a CT volume is computationally demanding and relatively slow using the conventional ray-casting algorithm. Image-guided radiation therapy systems using DTS to verify target location require a large number of DRRs to be precomputed, since there is insufficient time within the automatic image registration procedure to generate DRRs and search for an optimal pose. METHOD DRRs were rendered from octree-compressed CT data. Previous work showed that octree-compressed volumes rendered by conventional ray casting deliver a registration with acceptable clinical accuracy, but efficiently rendering the irregular grid of an octree data structure is a challenge for conventional ray casting. We address this by using vertex and fragment shaders of modern graphics processing units (GPUs) to directly project internal spaces of the octree, represented by textured particle sprites, onto the view plane. The texture is procedurally generated and depends on the CT pose. RESULTS The performance of this new algorithm was found to be 4 times faster than that of a ray-casting algorithm implemented using NVIDIA Compute Unified Device Architecture (CUDA) on an equivalent GPU (~95% octree compression). Rendering artifacts are apparent (consistent with other splatting algorithms), but image quality tends to improve with compression, and fewer particles are needed. A peak signal-to-noise ratio analysis confirmed that the images rendered from compressed volumes were of marginally better quality than those rendered using Gaussian footprints. CONCLUSIONS Using octree-encoded DRRs within a 2D/3D registration framework indicated the approach may be useful in accelerating automatic image registration.
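For orientation only (this is the naive baseline, not the paper's GPU splatting of octree bricks): for an axis-aligned parallel beam, the line integrals that make up a DRR reduce to summing attenuation values along one axis of the CT volume.

```python
import numpy as np

def drr_parallel(volume, axis=0):
    """Naive parallel-beam DRR: each detector pixel is the line integral
    (here, a plain voxel sum) along one axis of the volume."""
    return np.asarray(volume, dtype=float).sum(axis=axis)
```

Perspective geometry and trilinear sampling make a real ray caster far more expensive per ray, which is what motivates the octree compression and sprite-based projection described in the abstract.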
Affiliation(s)
- Mark Fisher
- School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK.

21
Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal 2012; 16:642-61. [PMID: 20452269] [DOI: 10.1016/j.media.2010.03.005]

22
Otake Y, Armand M, Armiger RS, Kutzer MD, Basafa E, Kazanzides P, Taylor RH. Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU-acceleration. IEEE Trans Med Imaging 2012; 31:948-962. [PMID: 22113773] [PMCID: PMC4451116] [DOI: 10.1109/tmi.2011.2176555]
Abstract
Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.
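The mTRE figure reported here has a simple operational definition: map a set of 3D target points through both the estimated and the gold-standard rigid transform and average the point-to-point distances. A minimal sketch, assuming 4x4 homogeneous matrices (function and variable names are mine):

```python
import numpy as np

def mtre(T_est, T_true, targets):
    """Mean target registration error between two rigid transforms,
    evaluated at an (N, 3) array of 3D target points."""
    pts = np.c_[targets, np.ones(len(targets))]            # (N, 4) homogeneous
    err = (pts @ T_est.T)[:, :3] - (pts @ T_true.T)[:, :3]
    return float(np.linalg.norm(err, axis=1).mean())
```

Because the error is averaged over anatomically relevant targets rather than over transform parameters, mTRE is the standard way to compare 2D/3D registration methods across datasets.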
Affiliation(s)
- Yoshito Otake
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Robert S. Armiger
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Michael D. Kutzer
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Ehsan Basafa
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218 USA
- Peter Kazanzides
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA

23
Hacihaliloglu I, Abugharbieh R, Hodgson AJ, Rohling RN, Guy P. Automatic bone localization and fracture detection from volumetric ultrasound images using 3-D local phase features. Ultrasound Med Biol 2012; 38:128-144. [PMID: 22104523] [DOI: 10.1016/j.ultrasmedbio.2011.10.009]
Abstract
This article presents a novel method for bone segmentation from three-dimensional (3-D) ultrasound images that derives intensity-invariant 3-D local image phase measures that are then employed for extracting ridge-like features similar to those that occur at soft tissue/bone interfaces. The main contributions in this article include: (1) the extension of our previously proposed phase-symmetry-based bone surface extraction from two-dimensional (2-D) to 3-D images using 3-D Log-Gabor filters; (2) the design of a new framework for accuracy evaluation based on using computed tomography as a gold standard that allows the assessment of surface localization accuracy across the entire 3-D surface; (3) the quantitative validation of accuracy of our 3-D phase-processing approach on both intact and fractured bone surfaces using phantoms and ex vivo 3-D ultrasound scans; and (4) the qualitative validation obtained by scanning emergency room patients with distal radius and pelvis fractures. We show a 41% improvement in surface localization error over the previous 2-D phase symmetry method. The results demonstrate clearly visible segmentations of bone surfaces with a localization accuracy of <0.6 mm and mean errors in estimating fracture displacements below 0.6 mm. The results show that the proposed method is successful even for situations when the bone surface response is weak due to shadowing from muscle and fascia interfaces above the bone, which is a situation where the 2-D method fails.
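The Log-Gabor filters at the heart of these phase measures are defined in the frequency domain by a Gaussian on a log frequency axis. Below is a 2D analogue of the radial component (the grid size, center frequency f0, and bandwidth ratio are illustrative choices of mine, not the paper's 3D filter bank):

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function
    G(f) = exp(-log(f/f0)^2 / (2 log(sigma_ratio)^2))
    sampled on an FFT frequency grid; the DC response is zero."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fy, fx)
    f[0, 0] = 1.0                        # placeholder to avoid log(0)
    g = np.exp(-np.log(f / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                        # log-Gabor filters pass no DC
    return g
```

Filtering an image with a bank of such filters over several center frequencies and orientations yields the local phase and amplitude from which measures like phase symmetry are computed; the zero DC response is what makes those measures invariant to overall intensity.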
Affiliation(s)
- Ilker Hacihaliloglu
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada

24
Ackland DC, Keynejad F, Pandy MG. Future trends in the use of X-ray fluoroscopy for the measurement and modelling of joint motion. Proc Inst Mech Eng H 2011; 225:1136-48. [DOI: 10.1177/0954411911422840]
Abstract
Knowledge of three-dimensional skeletal kinematics during functional activities, such as walking, is required for accurate modelling of joint motion and loading, and is important in identifying the effects of injury and disease. For example, accurate measurement of joint kinematics is essential in understanding the pathogenesis of osteoarthritis and its symptoms and for developing strategies to alleviate joint pain. Bi-plane X-ray fluoroscopy has the capacity to accurately and non-invasively measure human joint motion in vivo. Joint kinematics obtained using bi-plane X-ray fluoroscopy will aid in the development of more complex musculoskeletal models, which may be used to assess joint function and disease and plan surgical interventions and post-operative rehabilitation strategies. At present, however, commercial C-arm systems constrain the motion of the subject within the imaging field of view, thus precluding recording of motions such as overground gait. These fluoroscopy systems also operate at low frame rates and therefore cannot accurately capture high-speed joint motion during tasks such as running and throwing. In the future, bi-plane fluoroscopy systems may include computer-controlled tracking for the measurement of joint kinematics over entire cycles of overground gait without constraining motion of the subject. High-speed cameras will facilitate measurement of high-impulse joint motions, and computationally efficient pose-estimation software may provide a fast and fully automated process for quantification of natural joint motion.
Affiliation(s)
- D C Ackland
- Department of Mechanical Engineering, University of Melbourne, Melbourne, Australia
- F Keynejad
- Department of Mechanical Engineering, University of Melbourne, Melbourne, Australia
- M G Pandy
- Department of Mechanical Engineering, University of Melbourne, Melbourne, Australia

25
Ruijters D, Homan R, Mielekamp P, van de Haar P, Babic D. Validation of 3D multimodality roadmapping in interventional neuroradiology. Phys Med Biol 2011; 56:5335-54. [PMID: 21799235] [DOI: 10.1088/0031-9155/56/16/017]
Abstract
Three-dimensional multimodality roadmapping is entering clinical routine utilization for neuro-vascular treatment. Its purpose is to navigate intra-arterial and intra-venous endovascular devices through complex vascular anatomy by fusing pre-operative computed tomography (CT) or magnetic resonance (MR) with the live fluoroscopy image. The fused image presents the real-time position of the intra-vascular devices together with the patient's 3D vascular morphology and its soft-tissue context. This paper investigates the effectiveness, accuracy, robustness and computation times of the described methods in order to assess their suitability for the intended clinical purpose: accurate interventional navigation. The mutual information-based 3D-3D registration proved to be of sub-voxel accuracy and yielded an average registration error of 0.515 mm and the live machine-based 2D-3D registration delivered an average error of less than 0.2 mm. The capture range of the image-based 3D-3D registration was investigated to characterize its robustness, and yielded an extent of 35 mm and 25° for >80% of the datasets for registration of 3D rotational angiography (3DRA) with CT, and 15 mm and 20° for >80% of the datasets for registration of 3DRA with MR data. The image-based 3D-3D registration could be computed within 8 s, while applying the machine-based 2D-3D registration only took 1.5 µs, which makes them very suitable for interventional use.
Affiliation(s)
- Daniel Ruijters
- Interventional X-Ray (iXR), Philips Healthcare, Best, The Netherlands.

26
van der Bom MJ, Bartels LW, Gounis MJ, Homan R, Timmer J, Viergever MA, Pluim JPW. Robust initialization of 2D-3D image registration using the projection-slice theorem and phase correlation. Med Phys 2010; 37:1884-92. [PMID: 20443510] [DOI: 10.1118/1.3366252]
Abstract
PURPOSE The image registration literature comprises many methods for 2D-3D registration for which accuracy has been established in a variety of applications. However, clinical application is limited by a small capture range. Initial offsets outside the capture range of a registration method will not converge to a successful registration. Previously reported capture ranges, defined as the 95% success range, are in the order of 4-11 mm mean target registration error. In this article, a relatively computationally inexpensive and robust estimation method is proposed with the objective to enlarge the capture range. METHODS The method uses the projection-slice theorem in combination with phase correlation in order to estimate the transform parameters, which provides an initialization of the subsequent registration procedure. RESULTS The feasibility of the method was evaluated by experiments using digitally reconstructed radiographs generated from in vivo 3D-RX data. With these experiments it was shown that the projection-slice theorem provides successful estimates of the rotational transform parameters for perspective projections and in case of translational offsets. The method was further tested on ex vivo ovine x-ray data. In 95% of the cases, the method yielded successful estimates for initial mean target registration errors up to 19.5 mm. Finally, the method was evaluated as an initialization method for an intensity-based 2D-3D registration method. The uninitialized and initialized registration experiments had success rates of 28.8% and 68.6%, respectively. CONCLUSIONS The authors have shown that the initialization method based on the projection-slice theorem and phase correlation yields adequate initializations for existing registration methods, thereby substantially enlarging the capture range of these methods.
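Both ingredients of this initialization are easy to demonstrate on a 2D toy (a sketch, not the authors' implementation; in the paper the slices come from the 3D data's Fourier transform and the projections from x-rays):

```python
import numpy as np

def projection_slice_holds(img):
    """Projection-slice theorem: the 1D FFT of a parallel projection equals
    the central (zero-frequency) row of the image's 2D FFT."""
    proj = img.sum(axis=0)
    return bool(np.allclose(np.fft.fft(proj), np.fft.fft2(img)[0, :]))

def phase_correlation(a, b):
    """Translation estimate from the normalized cross-power spectrum:
    returns (dy, dx) such that np.roll(a, (dy, dx), axis=(0, 1)) matches b."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

Because phase correlation uses only the spectral phase, its correlation surface has a sharp peak at the true shift, which is what makes it attractive for coarse, robust initialization before an intensity-based refinement.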
Affiliation(s)
- M J van der Bom
- Image Sciences Institute, University Medical Center Utrecht, QOS.459, P.O. Box 85500, 3508 GA Utrecht, The Netherlands.

27
Three-dimensional motion study of femur, tibia, and patella at the knee joint from bi-plane fluoroscopy and CT images. Radiol Phys Technol 2010; 3:151-8. [PMID: 20821089] [DOI: 10.1007/s12194-010-0090-1]

28
Tsai TY, Lu TW, Chen CM, Kuo MY, Hsu HC. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy. Med Phys 2010; 37:1273-84. [DOI: 10.1118/1.3301596]

29

30
Munbodh R, Tagare HD, Chen Z, Jaffray DA, Moseley DJ, Knisely JPS, Duncan JS. 2D-3D registration for prostate radiation therapy based on a statistical model of transmission images. Med Phys 2009; 36:4555-68. [PMID: 19928087] [DOI: 10.1118/1.3213531]
Affiliation(s)
- Reshma Munbodh
- Department of Radiology, Weill Medical College of Cornell University, New York, New York 10021, USA.

31
Narayanasamy G, LeCarpentier GL, Roubidoux M, Fowlkes JB, Schott AF, Carson PL. Spatial registration of temporally separated whole breast 3D ultrasound images. Med Phys 2009; 36:4288-300. [PMID: 19810503] [PMCID: PMC2749445] [DOI: 10.1118/1.3193678]
Abstract
The purpose of this study was to evaluate the potential for use of image volume based registration (IVBaR) to aid in measurement of changes in the tumor during chemotherapy of breast cancer. Successful IVBaR could aid in the detection of such changes in response to neoadjuvant chemotherapy and potentially be useful for routine breast cancer screening and diagnosis. IVBaR was employed in a new method of automated estimation of tumor volume in studies following the radiologist's identification of the tumor region in the prechemotherapy scan. The authors have also introduced a new semiautomated method for validation of registration based on Doppler ultrasound (US) signals that are independent of the grayscale signals used for registration. This Institutional Review Board approved study was conducted on 10 patients undergoing chemotherapy and 14 patients with a suspicious/unknown mass scheduled to undergo biopsy. Reasonably reproducible mammographic positioning and nearly whole breast US imaging were achieved. The image volume was registered offline with a mutual information cost function and global interpolation based on a thin-plate spline using MIAMI FUSE software developed at the University of Michigan. The success and accuracy of registration of the three dimensional (3D) US image volume were measured by means of mean registration error (MRE). IVBaR was successful with an MRE of 4.3 +/- 1.7 mm in 9 out of 10 reproducibility automated breast ultrasound (ABU) studies and in 12 out of 17 ABU image pairs collected before, during, or after 115 +/- 14 days of chemotherapy. Semiautomated tumor volume estimation was performed on registered image volumes, giving 86 +/- 8% mean accuracy compared to the radiologist's hand-segmented tumor volume in seven cases. Doppler studies yielded the fractional volume of color pixels in the region surrounding the lesion and its change with changing breast compression. The Doppler study of patients with detectable blood flow included five patients with suspicious masses and three undergoing chemotherapy. Spatial alignment of the 3D blood vessel data from the Doppler studies provided independent measures for the validation of registration. In 15 Doppler image volume pairs scanned with differing breast compression, the mean centerline separation value was 1.5 +/- 0.6 mm, while the MRE based on a few identifiable structural points common to the two grayscale image volumes was 1.1 +/- 0.6 mm. Another measure, the overlap ratio of blood vessels, was shown to increase from 0.32 to 0.59 (+84%) with IVBaR for pairs at various compression levels. These results show that successful registration of ABU scans may be accomplished for comparison and integration of information.
Affiliation(s)
- Ganesh Narayanasamy
- Department of Radiology, and Applied Physics Program, University of Michigan, Ann Arbor, Michigan 48109, USA

32
Huang X, Moore J, Guiraudon G, Jones DL, Bainbridge D, Ren J, Peters TM. Dynamic 2D ultrasound and 3D CT image registration of the beating heart. IEEE Trans Med Imaging 2009; 28:1179-1189. [PMID: 19131293] [DOI: 10.1109/tmi.2008.2011557]
Abstract
Two-dimensional ultrasound (US) is widely used in minimally invasive cardiac procedures due to its convenience of use and noninvasive nature. However, the low quality of US images often limits their utility as a means for guiding procedures, since it is often difficult to relate the images to their anatomical context. To improve the interpretability of the US images while maintaining US as a flexible anatomical and functional real-time imaging modality, we describe a multimodality image navigation system that integrates 2D US images with their 3D context by registering them to high quality preoperative models based on magnetic resonance imaging (MRI) or computed tomography (CT) images. The mapping from such a model to the patient is completed using spatial and temporal registrations. Spatial registration is performed by a two-step rapid registration method that first approximately aligns the two images as a starting point to an automatic registration procedure. Temporal alignment is performed with the aid of electrocardiograph (ECG) signals and a latency compensation method. Registration accuracy is measured by calculating the target registration error (TRE). Results show that the error between the US and preoperative images of a beating heart phantom is 1.7 +/- 0.4 mm, with a similar performance being observed in in vivo animal experiments.
Affiliation(s)
- Xishi Huang
- Imaging Research Laboratories, Robarts Research Institute, London, ON N6A5K8, Canada.
33.
Ruijters D, ter Haar Romeny BM, Suetens P. Vesselness-based 2D-3D registration of the coronary arteries. Int J Comput Assist Radiol Surg 2009; 4:391-7. [PMID: 20033586 DOI: 10.1007/s11548-009-0316-z] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2009] [Accepted: 04/15/2009] [Indexed: 11/28/2022]
Abstract
PURPOSE Robust and accurate automated co-registration of the coronary arteries in 3D CTA and 2D X-ray angiography during percutaneous coronary interventions (PCI), in order to present a fused visualization. METHODS A novel vesselness-based similarity measure was developed that avoids an explicit segmentation of the X-ray image. A stochastic optimizer searches for the optimal registration using the similarity measure. RESULTS Both simulated data and clinical data were used to investigate the accuracy and capture range of the proposed method. The experiments show that the proposed method outperforms the iterative closest point method in terms of accuracy (average residual error of 0.42 mm vs. 1.44 mm) and capture range (average 71.1 mm/20.3 degrees vs. 14.1 mm/5.2 degrees). CONCLUSION The proposed method has proven to be accurate, and its capture range is ample for use in PCI. In particular, the absence of an explicit segmentation of the interventionally acquired X-ray images considerably aids the robustness of the method.
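Vesselness measures of this kind are typically built from the eigenvalues of the image Hessian (Frangi-style): tubular structures have one small and one large negative eigenvalue. A minimal 2D sketch with hypothetical sensitivity parameters `beta` and `c`; the paper's actual filter and similarity measure are more elaborate:

```python
import numpy as np

def vesselness2d(image, beta=0.5, c=0.1):
    """Frangi-style 2D vesselness from Hessian eigenvalues.
    Bright tubular structures on a dark background score near 1."""
    img = np.asarray(image, float)
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian [[gxx, gxy], [gxy, gyy]]
    mean = (gxx + gyy) / 2.0
    root = np.sqrt(((gxx - gyy) / 2.0) ** 2 + gxy ** 2)
    l1, l2 = mean + root, mean - root
    swap = np.abs(l1) > np.abs(l2)            # enforce |l1| <= |l2|
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)    # blobness ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)            # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1.0 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                           # bright ridges require l2 < 0
    return v

img = np.zeros((21, 21))
img[10, :] = 1.0            # a bright one-pixel ridge
v = vesselness2d(img)       # high on the ridge, ~0 elsewhere
```

Comparing such vesselness maps of the X-ray and the DRR directly is what lets the method skip an explicit vessel segmentation.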
Affiliation(s)
- Daniel Ruijters
- Philips Healthcare, Cardio/Vascular Innovation, Best, The Netherlands.
34.
Dullin C, Zientkowska M, Napp J, Missbach-Guentner J, Krell HW, Muller F, Grabbe E, Tietze LF, Alves F. Semiautomatic Landmark-Based Two-Dimensional—Three-Dimensional Image Fusion in Living Mice: Correlation of Near-Infrared Fluorescence Imaging of Cy5.5-Labeled Antibodies with Flat-Panel Volume Computed Tomography. Mol Imaging 2009. [DOI: 10.2310/7290.2009.00001] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
35.
Markelj P, Tomazevic D, Pernus F, Likar BT. Robust gradient-based 3-D/2-D registration of CT and MR to X-ray images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2008; 27:1704-1714. [PMID: 19033086 DOI: 10.1109/tmi.2008.923984] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
One of the most important technical challenges in image-guided intervention is to obtain a precise transformation between the intrainterventional patient's anatomy and corresponding preinterventional 3-D image on which the intervention was planned. This goal can be achieved by acquiring intrainterventional 2-D images and matching them to the preinterventional 3-D image via 3-D/2-D image registration. A novel 3-D/2-D registration method is proposed in this paper. The method is based on robustly matching 3-D preinterventional image gradients and coarsely reconstructed 3-D gradients from the intrainterventional 2-D images. To improve the robustness of finding the correspondences between the two sets of gradients, hypothetical correspondences are searched for along normals to anatomical structures in 3-D images, while the final correspondences are established in an iterative process, combining the robust random sample consensus algorithm (RANSAC) and a special gradient matching criterion function. The proposed method was evaluated using the publicly available standardized evaluation methodology for 3-D/2-D registration, consisting of 3-D rotational X-ray, computed tomography, magnetic resonance (MR), and 2-D X-ray images of two spine segments, and standardized evaluation criteria. In this way, the proposed method could be objectively compared to the intensity, gradient, and reconstruction-based registration methods. The obtained results indicate that the proposed method performs favorably both in terms of registration accuracy and robustness. The method is especially superior when just a few X-ray images and when MR preinterventional images are used for registration, which are important advantages for many clinical applications.
Affiliation(s)
- Primo Markelj
- University of Ljubljana, Faculty of Electrical Engineering, 1000 Ljubljana, Slovenia.
36.
Lu TW, Tsai TY, Kuo MY, Hsu HC, Chen HL. In vivo three-dimensional kinematics of the normal knee during active extension under unloaded and loaded conditions using single-plane fluoroscopy. Med Eng Phys 2008; 30:1004-12. [DOI: 10.1016/j.medengphy.2008.03.001] [Citation(s) in RCA: 59] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2007] [Revised: 01/27/2008] [Accepted: 03/03/2008] [Indexed: 01/28/2023]
37.
Chen X, Gilkeson RC, Fei B. Automatic 3D-to-2D registration for CT and dual-energy digital radiography for calcification detection. Med Phys 2008; 34:4934-43. [PMID: 18196818 DOI: 10.1118/1.2805994] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DEDR). CT is an established tool for the detection of cardiac calcification. DEDR could be a cost-effective alternative screening tool. In order to utilize CT as the "gold standard" to evaluate the capability of DEDR images for the detection and localization of calcium, we developed an automatic, intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DEDR images. To generate digitally reconstructed radiography (DRR) from the CT volumes, we developed several projection algorithms using the fast shear-warp method. In particular, we created a Gaussian-weighted projection for this application. We used normalized mutual information (NMI) as the similarity measurement. Simulated projection images from CT values were fused with the corresponding DEDR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with a translation difference of less than 0.8 mm and a rotation difference of less than 0.2 degrees. For physical phantom images, the registration accuracy is 0.43 +/- 0.24 mm. Color overlay and 3D visualization of clinical images show that the two images registered well. The NMI values between the DRR and DEDR images improved from 0.21 +/- 0.03 before registration to 0.25 +/- 0.03 after registration. Registration errors measured from anatomic markers decreased from 27.6 +/- 13.6 mm before registration to 2.5 +/- 0.5 mm after registration. Our results show that the automatic 3D-to-2D registration is accurate and robust. This technique can provide a useful tool for correlating DEDR with CT images for screening coronary artery calcification.
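Normalized mutual information (NMI), the similarity measure used above, can be estimated from a joint intensity histogram; by the common Studholme definition NMI(A,B) = (H(A) + H(B)) / H(A,B), which ranges from 1 for independent images to 2 for identical ones. A minimal sketch (the bin count is an arbitrary choice):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A,B) = (H(A) + H(B)) / H(A,B), estimated from a joint
    histogram of the two images' intensities."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
nmi_self = normalized_mutual_information(img, img)               # identical -> 2
nmi_rand = normalized_mutual_information(img, rng.random((64, 64)))
```

During registration, the transform parameters are adjusted to maximize this score between the DRR and the fixed 2D image.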
Affiliation(s)
- Xiang Chen
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio 44106, USA
38.
Hummel J, Figl M, Bax M, Bergmann H, Birkfellner W. 2D/3D registration of endoscopic ultrasound to CT volume data. Phys Med Biol 2008; 53:4303-16. [PMID: 18653922 DOI: 10.1088/0031-9155/53/16/006] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
This paper describes a computer-aided navigation system using image fusion to support endoscopic interventions such as the accurate collection of biopsy specimens. An endoscope provides the physician with real-time ultrasound (US) and a video image. An image slice corresponding to the current US scan plane is derived from a preoperative computed tomography (CT) or magnetic resonance image volume data set using oblique reformatting and displayed side by side with the US image. The position of the image acquired by the US scan head is determined by a miniaturized electromagnetic tracking system (EMTS) after calibrating the endoscope's scan head. The transformation between the patient coordinate system and the preoperative data set is calculated using a 2D/3D registration. This is achieved by calibrating an intraoperative interventional CT slice with an optical tracking system (OTS) using the same algorithm as for the US calibration. The slice is then used for 2D/3D registration with the coordinate system of the preoperative volume. The fiducial registration error (FRE) for the US calibration was 2.0 mm +/- 0.4 mm; the interventional CT FRE was 0.36 +/- 0.12 mm; and the 2D/3D registration target registration error (TRE) was 1.8 +/- 0.3 mm. The point-to-point registration between the OTS and the EMTS had an FRE of 0.9 +/- 0.4 mm. Finally, we found the overall TRE of the complete system to be 3.9 +/- 0.6 mm.
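The fiducial registration errors (FREs) above come from point-based rigid registration: the least-squares rotation and translation between corresponding fiducial sets, solvable in closed form with an SVD (the classical Kabsch/Umeyama solution). A minimal sketch with synthetic, noise-free fiducials, for which the FRE is essentially zero:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch, no scaling) mapping src
    points onto dst; returns (R, t, FRE), FRE = mean residual distance."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    fre = float(np.mean(np.linalg.norm((src @ R.T + t) - dst, axis=1)))
    return R, t, fre

# Fiducials related by a known 20-degree rotation plus a translation
rng = np.random.default_rng(1)
pts = rng.random((6, 3)) * 100.0
th = np.deg2rad(20.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
dst = pts @ R_true.T + np.array([5.0, -3.0, 12.0])
R_est, t_est, fre = rigid_register(pts, dst)
```

With real fiducial measurements the residual FRE is nonzero and serves as the quality figure quoted in the abstract.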
Affiliation(s)
- Johann Hummel
- Center of Biomedical Engineering and Physics, Medical University of Vienna, Vienna, Austria.
39.
Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Med Image Anal 2008; 12:358-74. [DOI: 10.1016/j.media.2007.12.006] [Citation(s) in RCA: 111] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2006] [Revised: 07/16/2007] [Accepted: 12/21/2007] [Indexed: 11/22/2022]
40.
Fei B, Chen X, Wang H, Sabol JM, DuPont E, Gilkeson RC. Automatic registration of CT volumes and dual-energy digital radiography for detection of cardiac and lung diseases. CONFERENCE PROCEEDINGS : ... ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL CONFERENCE 2008; 2006:1976-9. [PMID: 17945687 DOI: 10.1109/iembs.2006.259888] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We are investigating image processing and analysis techniques to improve the ability of dual-energy digital radiography (DR) to detect cardiac calcification. Computed tomography (CT) is an established tool for the diagnosis of coronary artery diseases. Dual-energy digital radiography could be a cost-effective alternative. In this study, we use three-dimensional (3D) CT images as the "gold standard" to evaluate the DR X-ray images for calcification detection. For this purpose, we developed an automatic registration method for 3D CT volumes and two-dimensional (2D) X-ray images, which we call 3D-to-2D registration. We first use a 3D CT image volume to simulate X-ray projection images and then register them with X-ray images. The registered CT projection images are then used to aid the interpretation of dual-energy X-ray images for the detection of cardiac calcification. We acquired both CT and X-ray images from patients with coronary artery diseases. Experimental results show that the 3D-to-2D registration is accurate and useful for this new application.
Affiliation(s)
- Baowei Fei
- Dept. of Radiol. & Biomed. Eng., Case Western Reserve Univ., Cleveland, OH 44106, USA.
41.
Nakajima Y, Tashiro T, Sugano N, Yonenobu K, Koyama T, Maeda Y, Tamura Y, Saito M, Tamura S, Mitsuishi M, Sugita N, Sakuma I, Ochi T, Matsumoto Y. Fluoroscopic Bone Fragment Tracking for Surgical Navigation in Femur Fracture Reduction by Incorporating Optical Tracking of Hip Joint Rotation Center. IEEE Trans Biomed Eng 2007; 54:1703-6. [PMID: 17867363 DOI: 10.1109/tbme.2007.900822] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
A new method for fluoroscopic tracking of a proximal bone fragment in femoral fracture reduction is presented. The proposed method combines 2-D and 3-D image registration from single-view fluoroscopy with tracking of the head center position of the proximal femoral fragment to improve the accuracy of fluoroscopic registration without the need for repeated manual adjustment of the C-arm as required in stereo-view registrations. Kinematic knowledge of the hip joint, which has a positional correspondence with the femoral head center and the pelvis acetabular center, allows the position of the femoral fragment to be determined from pelvis tracking. The stability of the proposed method with respect to fluoroscopic image noise and the desired continuity of the fracture reduction operation is demonstrated, and the accuracy of tracking is shown to be superior to that achievable by single-view image registration, particularly in depth translation.
42.
Dandekar O, Shekhar R. FPGA-Accelerated Deformable Image Registration for Improved Target-Delineation During CT-Guided Interventions. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2007; 1:116-127. [PMID: 23851666 DOI: 10.1109/tbcas.2007.909023] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Minimally invasive image-guided interventions (IGIs) are time and cost efficient, minimize unintended damage to healthy tissue, and lead to faster patient recovery. With the advent of multislice computed tomography (CT), many IGIs are now being performed under volumetric CT guidance. Registering pre-and intraprocedural images for improved intraprocedural target delineation is a fundamental need in the IGI workflow. Earlier approaches to meet this need primarily employed rigid body approximation, which may not be valid because of nonrigid tissue misalignment between these images. Intensity-based automatic deformable registration is a promising option to correct for this misalignment; however, the long execution times of these algorithms have prevented their use in clinical workflow. This article presents a field-programmable gate array-based architecture for accelerated implementation of mutual information (Ml)-based deformable registration. The reported implementation reduces the execution time of MI-based deformable registration from hours to a few minutes. This work also demonstrates successful registration of abdominal intraprocedural noncontrast CT (iCT) images with preprocedural contrast-enhanced CT (preCT) and positron emission tomography (PET) images using the reported solution. The registration accuracy for this application was evaluated using 5 iCT-preCT and 5 iCT-PET image pairs. The registration accuracy of the hardware implementation is comparable with that achieved using a software implementation and is on the order of a few millimeters. This registration accuracy, coupled with the execution speed and compact implementation of the reported solution, makes it suitable for integration in the IGI-workflow.
43.
Zeng R, Fessler JA, Balter JM. Estimating 3-D respiratory motion from orbiting views by tomographic image registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2007; 26:153-63. [PMID: 17304730 PMCID: PMC2851164 DOI: 10.1109/tmi.2006.889719] [Citation(s) in RCA: 49] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Respiratory motion remains a significant source of errors in treatment planning for the thorax and upper abdomen. Recently, we proposed a method to estimate two-dimensional (2-D) object motion from a sequence of slowly rotating X-ray projection views, which we called deformation from orbiting views (DOVs). In this method, we model the motion as a time varying deformation of a static prior of the anatomy. We then optimize the parameters of the motion model by maximizing the similarity between the modeled and actual projection views. This paper extends the method to full three-dimensional (3-D) motion and cone-beam projection views. We address several practical issues for using a cone-beam computed tomography (CBCT) scanner that is integrated in a radiotherapy system, such as the effects of Compton scatter and the limited gantry rotation for one breathing cycle. We also present simulation and phantom results to illustrate the performance of this method.
Affiliation(s)
- Rongping Zeng
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor 48109, USA.
44.
Ho AK, Fu D, Cotrutz C, Hancock SL, Chang SD, Gibbs IC, Maurer CR, Adler JR. A Study of the Accuracy of CyberKnife Spinal Radiosurgery Using Skeletal Structure Tracking. Oper Neurosurg (Hagerstown) 2007; 60:ONS147-56; discussion ONS156. [PMID: 17297377 DOI: 10.1227/01.neu.0000249248.55923.ec] [Citation(s) in RCA: 71] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Objective:
New technology has enabled the increasing use of radiosurgery to ablate spinal lesions. The first generation of the CyberKnife (Accuray, Inc., Sunnyvale, CA) image-guided radiosurgery system required implanted radiopaque markers (fiducials) to localize spinal targets. A recently developed and now commercially available spine tracking technology called Xsight (Accuray, Inc.) tracks skeletal structures and eliminates the need for implanted fiducials. The Xsight system localizes spinal targets by direct reference to the adjacent vertebral elements. This study sought to measure the accuracy of Xsight spine tracking and provide a qualitative assessment of overall system performance.
Methods:
Total system error, which is defined as the distance between the centroids of the planned and delivered dose distributions and represents all possible treatment planning and delivery errors, was measured using a realistic, anthropomorphic head-and-neck phantom. The Xsight tracking system error component of total system error was also computed by retrospectively analyzing image data obtained from eleven patients with a total of 44 implanted fiducials who underwent CyberKnife spinal radiosurgery.
Results:
The total system error of the Xsight targeting technology was measured to be 0.61 mm. The tracking system error component was found to be 0.49 mm.
Conclusion:
The Xsight spine tracking system is practically important because it is accurate and eliminates the use of implanted fiducials. Experience has shown this technology to be robust under a wide range of clinical circumstances.
Affiliation(s)
- Anthony K Ho
- Department of Radiation Oncology, Stanford University Medical Center, Stanford, California 94305-5304, USA.
45.
Munbodh R, Jaffray DA, Moseley DJ, Chen Z, Knisely JPS, Cathier P, Duncan JS. Automated 2D-3D registration of a radiograph and a cone beam CT using line-segment enhancement. Med Phys 2006; 33:1398-411. [PMID: 16752576 PMCID: PMC2796183 DOI: 10.1118/1.2192621] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
The objective of this study was to develop a fully automated two-dimensional (2D)-three-dimensional (3D) registration framework to quantify setup deviations in prostate radiation therapy from cone beam CT (CBCT) data and a single AP radiograph. A kilovoltage CBCT image and kilovoltage AP radiograph of an anthropomorphic phantom of the pelvis were acquired at 14 accurately known positions. The shifts in the phantom position were subsequently estimated by registering digitally reconstructed radiographs (DRRs) from the 3D CBCT scan to the AP radiographs through the correlation of enhanced linear image features mainly representing bony ridges. Linear features were enhanced by filtering the images with "sticks," short line segments which are varied in orientation to achieve the maximum projection value at every pixel in the image. The mean (and standard deviations) of the absolute errors in estimating translations along the three orthogonal axes in millimeters were 0.134 (0.096) AP (out-of-plane), 0.021 (0.023) ML, and 0.020 (0.020) SI. The corresponding errors for rotations in degrees were 0.011 (0.009) AP, 0.029 (0.016) ML (out-of-plane), and 0.030 (0.028) SI (out-of-plane). Preliminary results with megavoltage patient data have also been reported. The results suggest that it may be possible to enhance anatomic features that are common to DRRs from a CBCT image and a single AP radiograph of the pelvis for use in a completely automated and accurate 2D-3D registration framework for setup verification in prostate radiotherapy. This technique is theoretically applicable to other rigid bony structures such as the cranial vault or skull base and piecewise rigid structures such as the spine.
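The "sticks" enhancement described above takes, at each pixel, the maximum response over short line segments at several orientations, so linear bony ridges keep their full intensity while isotropic noise is averaged down. A minimal pure-NumPy sketch with only four orientations (the paper varies the stick orientation more finely):

```python
import numpy as np

def sticks_filter(image, length=5):
    """At each pixel, the maximum of the mean intensity along short
    line segments at 0, 45, 90, and 135 degrees."""
    img = np.asarray(image, float)
    r = length // 2
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    orientations = [
        [(0, d) for d in range(-r, r + 1)],   # horizontal
        [(d, 0) for d in range(-r, r + 1)],   # vertical
        [(d, d) for d in range(-r, r + 1)],   # diagonal
        [(d, -d) for d in range(-r, r + 1)],  # anti-diagonal
    ]
    responses = []
    for offsets in orientations:
        acc = np.zeros_like(img)
        for dy, dx in offsets:                # sum shifted copies = stick mean
            acc += pad[r + dy : r + dy + h, r + dx : r + dx + w]
        responses.append(acc / length)
    return np.max(responses, axis=0)

# A one-pixel-wide bright horizontal ridge keeps its full intensity
img = np.zeros((21, 21))
img[10, :] = 1.0
out = sticks_filter(img)
```

An isolated noisy pixel, by contrast, is diluted to at most 1/length of its value, which is why ridge-like features dominate the filtered images used for correlation.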
Affiliation(s)
- Reshma Munbodh
- Department of Electrical Engineering, Yale University, New Haven, Connecticut 06520, USA.
46.
Skerl D, Tomazevic D, Likar B, Pernus F. Evaluation of similarity measures for reconstruction-based registration in image-guided radiotherapy and surgery. Int J Radiat Oncol Biol Phys 2006; 65:943-53. [PMID: 16751077 DOI: 10.1016/j.ijrobp.2006.03.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2005] [Revised: 03/02/2006] [Accepted: 03/02/2006] [Indexed: 11/16/2022]
Abstract
PURPOSE A promising patient positioning technique is based on registering computed tomographic (CT) or magnetic resonance (MR) images to cone-beam CT images (CBCT). The extra radiation dose delivered to the patient can be substantially reduced by using fewer projections. This approach results in lower quality CBCT images. The purpose of this study is to evaluate a number of similarity measures (SMs) suitable for registration of CT or MR images to low-quality CBCTs. METHODS AND MATERIALS Using the recently proposed evaluation protocol, we evaluated nine SMs with respect to pretreatment imaging modalities, number of two-dimensional (2D) images used for reconstruction, and number of reconstruction iterations. The image database consisted of 100 X-ray and corresponding CT and MR images of two vertebral columns. RESULTS Using a higher number of 2D projections or reconstruction iterations results in higher accuracy and slightly lower robustness. The similarity measures that behaved the best also yielded the best registration results. The most appropriate similarity measure was the asymmetric multi-feature mutual information (AMMI). CONCLUSIONS The evaluation protocol proved to be a valuable tool for selecting the best similarity measure for the reconstruction-based registration. The results indicate that accurate and robust CT/CBCT or even MR/CBCT registrations are possible if the AMMI similarity measure is used.
Affiliation(s)
- Darko Skerl
- Department of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
47.
Khamene A, Bloch P, Wein W, Svatos M, Sauer F. Automatic registration of portal images and volumetric CT for patient positioning in radiation therapy. Med Image Anal 2006; 10:96-112. [PMID: 16150629 DOI: 10.1016/j.media.2005.06.002] [Citation(s) in RCA: 87] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2004] [Revised: 08/12/2004] [Accepted: 06/10/2005] [Indexed: 11/17/2022]
Abstract
The efficacy of radiation therapy treatment depends on the patient setup accuracy at each daily fraction. A significant problem is reproducing the patient position during treatment planning for every fraction of the treatment process. We propose and evaluate an intensity based automatic registration method using multiple portal images and the pre-treatment CT volume. We perform both geometric and radiometric calibrations to generate high quality digitally reconstructed radiographs (DRRs) that can be compared against portal images acquired right before treatment dose delivery. We use a graphics processing unit (GPU) to generate the DRRs in order to gain computational efficiency. We also perform a comparative study on various similarity measures and optimization procedures. Simple similarity measure such as local normalized correlation (LNC) performs best as long as the radiometric calibration is carefully done. Using the proposed method, we achieved better than 1mm average error in repositioning accuracy for a series of phantom studies using two open field (i.e., 41 cm2) portal images with 90 degrees vergence angle.
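Local normalized correlation (LNC), the best-performing similarity measure above, evaluates normalized cross-correlation in local windows and averages the results, which makes it insensitive to patchwise linear intensity differences between DRR and portal image. A simplified sketch over non-overlapping windows (the window size is an arbitrary choice, and the paper's exact windowing may differ):

```python
import numpy as np

def local_normalized_correlation(a, b, win=8):
    """Mean normalized cross-correlation over non-overlapping
    win x win patches of two equally sized images."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    scores = []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            pa = a[i:i + win, j:j + win].ravel()
            pb = b[i:i + win, j:j + win].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.linalg.norm(pa) * np.linalg.norm(pb)
            if denom > 1e-12:                 # skip constant patches
                scores.append(float(pa @ pb / denom))
    return float(np.mean(scores)) if scores else 0.0

rng = np.random.default_rng(2)
drr = rng.random((32, 32))
portal = 2.0 * drr + 0.5                      # linear intensity change only
score_aligned = local_normalized_correlation(drr, portal)
score_random = local_normalized_correlation(drr, rng.random((32, 32)))
```

Because each patch is mean-subtracted and normalized, the linearly rescaled image still scores 1, which is exactly the robustness to radiometric differences the abstract attributes to LNC after calibration.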
Affiliation(s)
- Ali Khamene
- Imaging and Visualization Department, Siemens Corporate Research, Inc., 755 College Road East, Princeton, NJ 08540, USA.
48.
Tomazevic D, Likar B, Pernus F. 3-D/2-D registration by integrating 2-D information in 3-D. IEEE TRANSACTIONS ON MEDICAL IMAGING 2006; 25:17-27. [PMID: 16398411 DOI: 10.1109/tmi.2005.859715] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
In image-guided therapy, high-quality preoperative images serve for planning and simulation, and intraoperatively as "background", onto which models of surgical instruments or radiation beams are projected. The link between a preoperative image and intraoperative physical space of the patient is established by image-to-patient registration. In this paper, we present a novel 3-D/2-D registration method. First, a 3-D image is reconstructed from a few 2-D X-ray images and next, the preoperative 3-D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure (SM). Because the quality of the reconstructed image is generally low, we introduce a novel SM, which is able to cope with low image quality as well as with different imaging modalities. The novel 3-D/2-D registration method has been evaluated and compared to the gradient-based method (GBM) using standardized evaluation methodology and publicly available 3-D computed tomography (CT), 3-D rotational X-ray (3DRX), and magnetic resonance (MR) and 2-D X-ray images of two spine phantoms, for which gold standard registrations were known. For each of the 3DRX, CT, or MR images and each set of X-ray images, 1600 registrations were performed from starting positions, defined as the mean target registration error (mTRE), randomly generated and uniformly distributed in the interval of 0-20 mm around the gold standard. The capture range was defined as the distance from gold standard for which the final TRE was less than 2 mm in at least 95% of all cases. In terms of success rate, as the function of initial misalignment and capture range the proposed method outperformed the GBM. TREs of the novel method and the GBM were approximately the same. For the registration of 3DRX and CT images to X-ray images as few as 2-3 X-ray views were sufficient to obtain approximately 0.4 mm TREs, 7-9 mm capture range, and 80%-90% of successful registrations. To obtain similar results for MR to X-ray registrations, an image, reconstructed from at least 11 X-ray images was required. Reconstructions from more than 11 images had no effect on the registration results.
Affiliation(s)
- Dejan Tomazevic
- University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia.
49.
Rohlfing T, Denzler J, Grässl C, Russakoff DB, Maurer CR. Markerless real-time 3-D target region tracking by motion backprojection from projection images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2005; 24:1455-68. [PMID: 16279082 DOI: 10.1109/tmi.2005.857651] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate, 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate, but required on average 5.4 s time per frame. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
Affiliation(s)
- Torsten Rohlfing
- Neuroscience Program at SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025-3493, USA.
50.
Russakoff DB, Rohlfing T, Mori K, Rueckert D, Ho A, Adler JR, Maurer CR. Fast generation of digitally reconstructed radiographs using attenuation fields with application to 2D-3D image registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2005; 24:1441-54. [PMID: 16279081 DOI: 10.1109/tmi.2005.856749] [Citation(s) in RCA: 64] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Generation of digitally reconstructed radiographs (DRRs) is computationally expensive and is typically the rate-limiting step in the execution time of intensity-based two-dimensional to three-dimensional (2D-3D) registration algorithms. We address this computational issue by extending the technique of light field rendering from the computer graphics community. The extension of light fields, which we call attenuation fields (AFs), allows most of the DRR computation to be performed in a preprocessing step; after this precomputation step, DRRs can be generated substantially faster than with conventional ray casting. We derive expressions for the physical sizes of the two planes of an AF necessary to generate DRRs for a given X-ray camera geometry and all possible object motion within a specified range. Because an AF is a ray-based data structure, it is substantially more memory efficient than a huge table of precomputed DRRs because it eliminates the redundancy of replicated rays. Nonetheless, an AF can require substantial memory, which we address by compressing it using vector quantization. We compare DRRs generated using AFs (AF-DRRs) to those generated using ray casting (RC-DRRs) for a typical C-arm geometry and computed tomography images of several anatomic regions. They are quantitatively very similar: the median peak signal-to-noise ratio of AF-DRRs versus RC-DRRs is greater than 43 dB in all cases. We perform intensity-based 2D-3D registration using AF-DRRs and RC-DRRs and evaluate registration accuracy using gold-standard clinical spine image data from four patients. The registration accuracy and robustness of the two methods is virtually identical whereas the execution speed using AF-DRRs is an order of magnitude faster.
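A conventional ray-cast DRR is a set of attenuation line integrals through the CT volume, converted to detector intensity via Beer-Lambert; the attenuation-field idea above precomputes and caches those ray values so they need not be recomputed at every registration iteration. A toy parallel-beam sketch of the line-integral step only (real systems trace perspective cone-beam rays, and `mu_scale` is a hypothetical calibration factor mapping voxel values to attenuation):

```python
import numpy as np

def drr_parallel(volume, mu_scale=0.01):
    """Toy parallel-beam DRR: sum attenuation along axis 0 of the
    volume (one ray per detector pixel), then apply Beer-Lambert,
    I = I0 * exp(-integral of mu), with I0 = 1."""
    path = volume.sum(axis=0) * mu_scale   # line integral of attenuation
    return np.exp(-path)                   # detected intensity per pixel

vol = np.zeros((16, 8, 8))
vol[:, 4, 4] = 100.0                       # a dense "rod" along the ray axis
drr = drr_parallel(vol)                    # dark shadow at (4, 4), bright elsewhere
```

The expensive part in practice is resampling the volume along arbitrarily oriented rays for every candidate pose, which is exactly the work the precomputed attenuation field amortizes.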
Affiliation(s)
- Daniel B Russakoff
- Department of Computer Science, Stanford University, Stanford, CA 94305 USA.