1. Dong G, Dai J, Li N, Zhang C, He W, Liu L, Chan Y, Li Y, Xie Y, Liang X. 2D/3D Non-Rigid Image Registration via Two Orthogonal X-ray Projection Images for Lung Tumor Tracking. Bioengineering (Basel) 2023;10(2):144. PMID: 36829638; PMCID: PMC9951849; DOI: 10.3390/bioengineering10020144.
Abstract
Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications, but existing methods suffer from long alignment times and high radiation doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method achieves alignment quickly using only two orthogonal-angle projections. We tested it on lung data (with and without tumors) and phantom data. The results show that the Dice coefficient and normalized cross-correlation are greater than 0.97 and 0.92, respectively, and that registration takes less than 1.2 s. In addition, the proposed model can track lung tumors, highlighting its clinical potential.
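As a quick reference for the two reported metrics, here is a minimal NumPy sketch of the Dice coefficient and normalized cross-correlation; the function names and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def ncc(x, y):
    """Normalized cross-correlation of two images.

    Returns 1.0 when the images are identical up to an affine
    intensity change (y = a*x + b with a > 0)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))
```

Dice is computed on segmentation masks, NCC on the intensity images themselves, which is why the paper reports both.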
Affiliation(s)
- Guoya Dong
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China
- Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China
- Jingjing Dai
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China
- Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Li
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yinping Chan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yunhui Li
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Correspondence:
2. Uneri A, Wu P, Jones CK, Vagdargi P, Han R, Helm PA, Luciano MG, Anderson WS, Siewerdsen JH. Deformable 3D-2D registration for high-precision guidance and verification of neuroelectrode placement. Phys Med Biol 2021;66. PMID: 34644684; DOI: 10.1088/1361-6560/ac2f89.
Abstract
Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgical targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT), in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of 3D-2D registration within 2 mm, which facilitated end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.
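Stage (2), initial 3D localization from detections in two projection views, is classically done by triangulating the two back-projected rays. Below is a minimal sketch of the standard midpoint method; the ray parameterization and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two back-projected rays
    o + t*d (origins o, directions d): a standard initial 3D estimate
    for a point detected in two fluoroscopy views."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = o1 - o2
    # Closest points on each ray come from a 2x2 linear system.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For intersecting rays the midpoint coincides with the intersection; for noisy detections it is the natural least-squares compromise between the two views.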
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- C K Jones
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm
- Medtronic, Littleton, MA 01460, United States of America
- M G Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
3. Cai Y, Wu S, Fan X, Olson J, Evans L, Lollis S, Mirza SK, Paulsen KD, Ji S. A level-wise spine registration framework to account for large pose changes. Int J Comput Assist Radiol Surg 2021;16:943-953. PMID: 33973113; PMCID: PMC8358825; DOI: 10.1007/s11548-021-02395-0.
Abstract
PURPOSE: Accurate and efficient spine registration is crucial to the success of spine image guidance. However, changes in spine pose cause intervertebral motion that can lead to significant registration errors. In this study, we develop a geometrical rectification technique via nonlinear principal component analysis (NLPCA) to achieve level-wise vertebral registration that is robust to large changes in spine pose. METHODS: We used explanted porcine spines and live pigs to develop and test our technique. Each sample was scanned with preoperative CT (pCT) in an initial pose and rescanned with intraoperative stereovision (iSV) in a different surgical posture. Patient registration rectified arbitrary spinal postures in pCT and iSV into a common, neutral pose through a parameterized moving-frame approach. Topologically encoded depth-projection 2D images were then generated to establish invertible point-to-pixel correspondences. Level-wise point correspondences between pCT and iSV vertebral surfaces were generated via 2D image registration. Finally, closed-form vertebral level-wise rigid registration was obtained by directly mapping 3D surface point pairs. Implanted mini-screws were used as fiducial markers to measure registration accuracy. RESULTS: In seven explanted porcine spines and two live animal surgeries (maximum in-spine pose change of 87.5 mm and 32.7 degrees, averaged over all spines), average target registration errors (TRE) of 1.70 ± 0.15 mm and 1.85 ± 0.16 mm were achieved, respectively. The automated spine rectification took 3-5 min, followed by an additional 30 s for depth-image projection and level-wise registration. CONCLUSIONS: The accuracy and efficiency of the proposed level-wise spine registration support its application in human open spine surgeries. The registration framework itself may also be applicable to other intraoperative imaging modalities, such as ultrasound and MRI, which may expand the utility of the approach in spine registration in general.
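The closed-form rigid registration from mapped 3D surface point pairs has a standard SVD-based solution (the Kabsch/Procrustes algorithm). A minimal NumPy sketch, with illustrative names rather than the authors' code:

```python
import numpy as np

def rigid_fit(P, Q):
    """Closed-form least-squares rigid transform (R, t) mapping the rows
    of P onto the corresponding rows of Q (N x 3 point arrays), via SVD
    of the centered covariance matrix (Kabsch algorithm)."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                 # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det R = +1),
    # never a reflection.
    s = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T
    t = qc - R @ pc
    return R, t
```

Because the solution is closed-form, each vertebral level can be registered in a single pass once point correspondences are in hand, which is consistent with the short runtimes reported above.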
Affiliation(s)
- Yunliang Cai
- Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA, 01609, USA
- Shaoju Wu
- Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA, 01609, USA
- Xiaoyao Fan
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Jonathan Olson
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Linton Evans
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Scott Lollis
- University of Vermont Medical Center, Burlington, VT, 05401, USA
- Sohail K Mirza
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Keith D Paulsen
- Dartmouth College Dartmouth-Hitchcock Medical Center, 1 Medical Center Dr, Lebanon, NH, 03766, USA
- Songbai Ji
- Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA, 01609, USA
4. Schaffert R, Wang J, Fischer P, Borsdorf A, Maier A. Learning an Attention Model for Robust 2-D/3-D Registration Using Point-To-Plane Correspondences. IEEE Trans Med Imaging 2020;39:3159-3174. PMID: 32305908; DOI: 10.1109/tmi.2020.2988410.
Abstract
Minimally invasive procedures rely on image guidance for navigation at the operation site to avoid large surgical incisions. X-ray images are often used for guidance, but important structures may not be well visible. These structures can be overlaid from pre-operative 3-D images, and accurate alignment can be established using 2-D/3-D registration. Registration based on the point-to-plane correspondence model was recently proposed and shown to achieve state-of-the-art performance. However, registration may still fail in challenging cases due to a large proportion of outliers. In this paper, we describe a learning-based correspondence weighting scheme to improve registration performance. By learning an attention model, inlier correspondences receive higher attention in the motion estimation while outlier correspondences are suppressed. Instead of using per-correspondence labels, our objective function allows the model to be trained directly by minimizing the registration error. We demonstrate greatly increased robustness, e.g. increasing the success rate from 84.9% to 97.0% for spine registration. In contrast to previously proposed learning-based methods, we also achieve a high accuracy of around 0.5 mm mean re-projection distance. In addition, our method requires a relatively small amount of training data, is able to learn from simulated data, and generalizes to images with additional structures that are not present during training. Furthermore, a single model can be trained for both different views and different anatomical structures.
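The weighted motion estimation over point-to-plane correspondences can be sketched as a single linearized weighted least-squares step (small-angle approximation). The interface below is an illustrative assumption, not the paper's implementation; in the paper the weights `w` would come from the learned attention model.

```python
import numpy as np

def weighted_point_to_plane(p, q, n, w):
    """One linearized weighted point-to-plane step.

    Finds (omega, t) such that p + cross(omega, p) + t best matches q
    along the plane normals n, minimizing
        sum_i w_i * (n_i . (p_i + omega x p_i + t - q_i))**2.
    p, q, n: (N, 3) arrays; w: (N,) non-negative weights."""
    A = np.hstack([np.cross(p, n), n])       # (N, 6) Jacobian rows [p x n, n]
    r = np.einsum('ij,ij->i', n, q - p)      # signed point-to-plane residuals
    sw = np.sqrt(w)[:, None]                 # weight both sides of A x = r
    x, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * r, rcond=None)
    return x[:3], x[3:]                      # rotation vector, translation
```

Down-weighting a correspondence (small `w_i`) shrinks its row of the system toward zero, which is exactly how outlier suppression enters the motion estimate.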
5. Frysch R, Pfeiffer T, Rose G. A novel approach to 2D/3D registration of X-ray images using Grangeat's relation. Med Image Anal 2020;67:101815. PMID: 33065470; DOI: 10.1016/j.media.2020.101815.
Abstract
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that makes use of Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated and real-data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight compared to state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone-beam transmission images.
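A toy sketch of the precompute-then-resample pattern the abstract describes: all expensive work is tabulated once offline, and each registration iteration reduces to cheap interpolation. The tabulated function below is a placeholder, not actual Grangeat-derived intermediate values, and the cost function is purely illustrative.

```python
import numpy as np

# One-time, expensive precomputation: tabulate a costly function on a
# dense parameter grid (stand-in for the paper's pre-calculated
# intermediate values; sin/exp here is just a placeholder).
grid = np.linspace(0.0, np.pi, 10001)
table = np.sin(3 * grid) * np.exp(-grid)

def registration_cost(sample_points):
    """Per-iteration work collapses to resampling the precomputed table:
    look up each (pose-dependent) sample point by linear interpolation
    and accumulate a scalar cost."""
    return float(np.sum(np.interp(sample_points, grid, table) ** 2))
```

Because each iteration is interpolation plus a reduction, the inner loop maps naturally onto GPU texture-sampling hardware, which is the source of the reported speed-up.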
Affiliation(s)
- Robert Frysch
- Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
- Tim Pfeiffer
- Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
- Georg Rose
- Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany