1
Ma YQ, Reynolds T, Ehtiati T, Weiss C, Hong K, Theodore N, Gang GJ, Stayman JW. Fully automatic online geometric calibration for non-circular cone-beam CT orbits using fiducials with unknown placement. Med Phys 2024; 51:3245-3264. PMID: 38573172. DOI: 10.1002/mp.17041.
Abstract
BACKGROUND Cone-beam CT (CBCT) with non-circular scanning orbits can improve image quality for 3D intraoperative image guidance. However, geometric calibration of such scans can be challenging. Existing methods typically require a prior image, specialized phantoms, presumed repeatable orbits, or long computation time. PURPOSE We propose a novel fully automatic online geometric calibration algorithm that does not require prior knowledge of fiducial configuration. The algorithm is fast, accurate, and can accommodate arbitrary scanning orbits and fiducial configurations. METHODS The algorithm uses an automatic initialization process to eliminate human intervention in fiducial localization and an iterative refinement process to ensure robustness and accuracy. We provide a detailed explanation and implementation of the proposed algorithm. Physical experiments on a lab test bench and a clinical robotic C-arm scanner were conducted to evaluate spatial resolution performance and robustness under realistic constraints. RESULTS Qualitative and quantitative results from the physical experiments demonstrate high accuracy, efficiency, and robustness of the proposed method. The spatial resolution performance matched that of our existing benchmark method, which used a 3D-2D registration-based geometric calibration algorithm. CONCLUSIONS We have demonstrated an automatic online geometric calibration method that delivers high spatial resolution and robustness performance. This methodology enables arbitrary scan trajectories and should facilitate translation of such acquisition methods in a clinical setting.
Affiliation(s)
- Yiqun Q Ma
- Johns Hopkins University, Baltimore, Maryland, USA
- Tess Reynolds
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Kelvin Hong
- Johns Hopkins University, Baltimore, Maryland, USA
2
Frisk H, Burström G, Persson O, El-Hajj VG, Coronado L, Hager S, Edström E, Elmi-Terander A. Automatic image registration on intraoperative CBCT compared to Surface Matching registration on preoperative CT for spinal navigation: accuracy and workflow. Int J Comput Assist Radiol Surg 2024. PMID: 38378987. DOI: 10.1007/s11548-024-03076-4.
Abstract
INTRODUCTION Spinal navigation solutions have been slower to develop compared to cranial ones. To facilitate greater adoption and use of spinal navigation, the relatively cumbersome registration processes need to be improved upon. This study aims to validate a new solution for automatic image registration and compare it to a traditional Surface Matching method. METHOD Adult patients undergoing spinal surgery requiring navigation were enrolled after providing consent. A registration matrix, Universal AIR (Automatic Image Registration), was placed in the surgical field and used for automatic registration based on intraoperative 3D imaging. A standard Surface Matching method was used for comparison. Accuracy measurements were obtained by comparing planned and acquired coordinates on the vertebrae. RESULTS Thirty-nine patients with 42 datasets were included. The mean accuracy of Universal AIR registration was 1.20 ± 0.42 mm, while the mean accuracy of Surface Matching registration was 1.94 ± 0.64 mm. Universal AIR registration was non-inferior to Surface Matching registration, and post hoc analysis showed significantly greater accuracy for Universal AIR registration. User-related errors, such as incorrect identification of the vertebral level, were seen in Surface Matching but not in automatic registration. CONCLUSION Automatic image registration for spinal navigation using Universal AIR and intraoperative 3D imaging provided improved accuracy compared to Surface Matching registration. In addition, it minimizes user errors and offers a standardized workflow, making it a reliable registration method for navigated spinal procedures.
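The accuracy figures above come from comparing planned and acquired target coordinates, which is a standard target registration error (TRE) computation. A minimal sketch of that metric, using hypothetical coordinates (the values below are illustrative, not data from the study):

```python
import numpy as np

def target_registration_error(planned, acquired):
    """Per-target Euclidean distance (mm) between planned coordinates and
    the coordinates actually acquired after registration."""
    planned = np.asarray(planned, dtype=float)
    acquired = np.asarray(acquired, dtype=float)
    return np.linalg.norm(planned - acquired, axis=1)

# Hypothetical coordinates (mm) for three vertebral targets.
planned = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
acquired = np.array([[0.5, 0.0, 0.0], [10.0, 1.2, 0.0], [0.0, 10.0, -0.9]])
tre = target_registration_error(planned, acquired)
print(f"mean accuracy: {tre.mean():.2f} ± {tre.std():.2f} mm")
```

Reporting the mean ± standard deviation of these per-target distances yields exactly the form of accuracy figure quoted in the abstract.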
Affiliation(s)
- Henrik Frisk
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77, Stockholm, Sweden
- Gustav Burström
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77, Stockholm, Sweden
- Oscar Persson
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77, Stockholm, Sweden
- Erik Edström
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77, Stockholm, Sweden
- Capio Spine Center Stockholm, Löwenströmska Hospital, Upplands-Väsby, Sweden
- Adrian Elmi-Terander
- Department of Clinical Neuroscience, Karolinska Institutet, 171 77, Stockholm, Sweden
- Capio Spine Center Stockholm, Löwenströmska Hospital, Upplands-Väsby, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
3
Lei L, Tang H, Zhang J, Wu Y, Zhao B, Hu Y, Li B. Automatic registration and precise tumour localization method for robot-assisted puncture procedure under inconsistent breath-holding conditions. Int J Med Robot 2021; 17:e2319. PMID: 34379863. DOI: 10.1002/rcs.2319.
Abstract
BACKGROUND During a percutaneous puncture procedure, breath holding is subjectively controlled by the patient, and it is difficult to ensure a consistent tumour position between the preoperative CT scanning phase and the intraoperative puncture phase. In addition, the manual registration process is time-consuming and has low accuracy. METHODS We propose an automatic registration method using optical markers and a tumour breath-holding position estimation model based on the support vector regression algorithm. A robot system and a tumour respiratory motion simulation platform were built to perform puncture tests under different breath-holding states. RESULTS The experimental results show that automatic registration has higher accuracy than manual registration, and with the tumour breath-holding position estimation model, the targeting accuracy of puncture under inconsistent breath-holding conditions is greatly improved. CONCLUSIONS The proposed automatic registration and tumour breath-holding position estimation model can improve the accuracy and efficiency of puncture under inconsistent breath-holding conditions.
Affiliation(s)
- Long Lei
- Department of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Huajie Tang
- Department of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiawei Zhang
- Department of Urology, Shenzhen University General Hospital, Shenzhen, China
- Yuqi Wu
- Department of Urology, Shenzhen University General Hospital, Shenzhen, China
- Baoliang Zhao
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ying Hu
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Pazhou Lab, Guangzhou, China
- Bing Li
- Department of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
4
Chen XR, Yang FM, Zhang ZQ, Bai BD, Guo L. Robust surface-matching registration based on the structure information for image-guided neurosurgery system. J Mech Med Biol 2021. DOI: 10.1142/s0219519421400091.
Abstract
Image-to-patient space registration accurately aligns the actual operating space with the image space. Although paired-point image-to-patient registration is used in some image-guided neurosurgery systems, the current paired-point registration method has drawbacks and usually cannot achieve the best registration result. Surface-matching registration has therefore been proposed to solve this problem. This paper proposes a surface-matching method that accomplishes image-to-patient space registration automatically. We represent the surface point clouds by a Gaussian Mixture Model (GMM), which can smoothly approximate the probability density distribution of an arbitrary point set. We also use mutual information as the similarity measure between the point clouds and take into account the structure information of the points. To analyze the registration error, we introduce a method for estimating the Target Registration Error (TRE) by generating simulated data. In the experiments, we used point sets of the cranium surface and a model of the human head acquired by CT and laser scanner. The TRE was less than 2 mm, with better accuracy in the frontal and posterior regions. Compared to the Iterative Closest Point algorithm, surface registration based on the GMM and the structure information of the points proved superior in registration robustness and accurate implementation of image-to-patient registration.
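As a toy illustration of the GMM idea (not the paper's mutual-information similarity measure), a surface point cloud can be treated as an isotropic Gaussian mixture and a candidate alignment scored by log-likelihood; all names and values below are illustrative:

```python
import numpy as np

def gmm_loglik(points, centers, sigma=1.0):
    """Log-likelihood of `points` under an isotropic Gaussian mixture with
    equal-weight components centred at `centers` (shared constants dropped)."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    comp = -d2 / (2.0 * sigma ** 2)
    m = comp.max(axis=1, keepdims=True)            # log-sum-exp for stability
    return float((m[:, 0] + np.log(np.exp(comp - m).sum(axis=1))).sum())

rng = np.random.default_rng(0)
surface = rng.normal(size=(50, 3))                        # model point cloud
aligned = surface + rng.normal(0.0, 0.05, surface.shape)  # well-aligned scan
shifted = surface + 2.0                                   # misaligned scan
```

A registration loop would search over rigid transforms for the pose that maximizes such a score; the smooth mixture density is what lets GMM-based methods avoid the hard nearest-neighbour assignments of ICP.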
Affiliation(s)
- Xinrong Chen
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, P. R. China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, P. R. China
- Fuming Yang
- Huashan Hospital, Fudan University, Shanghai 200040, P. R. China
- Ziqun Zhang
- Information Center, Fudan University, Shanghai 200433, P. R. China
- Baodan Bai
- School of Medical Instruments, Shanghai University of Medicine & Health Science, Shanghai 201318, P. R. China
- Lei Guo
- School of Business Administration, Shanghai Lixin University of Accounting and Finance, Shanghai 201620, P. R. China
5
Vagdargi P, Sheth N, Sisniega A, Uneri A, De Silva T, Osgood GM, Siewerdsen JH. Drill-mounted video guidance for orthopaedic trauma surgery. J Med Imaging (Bellingham) 2021; 8:015002. PMID: 33604409. DOI: 10.1117/1.jmi.8.1.015002.
Abstract
Purpose: Percutaneous fracture fixation is a challenging procedure that requires accurate interpretation of fluoroscopic images to insert guidewires through narrow bone corridors. We present a guidance system with a video camera mounted onboard the surgical drill to achieve real-time augmentation of the drill trajectory in fluoroscopy and/or CT. Approach: The camera was mounted on the drill and calibrated with respect to the drill axis. Markers identifiable in both video and fluoroscopy are placed about the surgical field and co-registered by feature correspondences. If available, a preoperative CT can also be co-registered by 3D-2D image registration. Real-time guidance is achieved by virtual overlay of the registered drill axis on fluoroscopy or in CT. Performance was evaluated in terms of target registration error (TRE), conformance within clinically relevant pelvic bone corridors, and runtime. Results: Registration of the drill axis to fluoroscopy demonstrated median TRE of 0.9 mm and 2.0 deg when solved with two views (e.g., anteroposterior and lateral) and five markers visible in both video and fluoroscopy, more than sufficient to provide Kirschner wire (K-wire) conformance within common pelvic bone corridors. Registration accuracy was reduced when solved with a single fluoroscopic view (TRE = 3.4 mm and 2.7 deg) but was also sufficient for K-wire conformance within pelvic bone corridors. Registration was robust with as few as four markers visible within the field of view. Runtime of the initial implementation allowed fluoroscopy overlay and/or 3D CT navigation with freehand manipulation of the drill up to 10 frames/s. Conclusions: A drill-mounted video guidance system was developed to assist with K-wire placement. The overall workflow is compatible with fluoroscopically guided orthopaedic trauma surgery and does not require markers to be placed in preoperative CT. The initial prototype demonstrates accuracy and runtime that could improve K-wire placement, motivating future work toward translation to clinical studies.
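The virtual overlay step described above reduces to projecting 3D drill-axis points through the registered C-arm projection geometry. A minimal sketch with a hypothetical 3x4 pinhole projection matrix (the intrinsics and points below are invented for illustration):

```python
import numpy as np

def project_points(P, pts3d):
    """Project Nx3 world points to Nx2 detector pixels via a 3x4 matrix P."""
    pts_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous coords
    proj = (P @ pts_h.T).T
    return proj[:, :2] / proj[:, 2:3]                     # perspective divide

# Hypothetical intrinsics: focal length 1000 px, principal point (512, 512).
P = np.array([[1000.0, 0.0, 512.0, 0.0],
              [0.0, 1000.0, 512.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
drill_axis = np.array([[10.0, 0.0, 500.0],    # drill tip (mm)
                       [20.0, 5.0, 600.0]])   # point along the drill axis
overlay = project_points(P, drill_axis)       # 2D endpoints of the overlay line
```

Drawing the line segment between the two projected endpoints on the live fluoroscopic frame gives the trajectory overlay; the same projection applies to a planned trajectory defined in a co-registered CT.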
Affiliation(s)
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Niral Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Alejandro Sisniega
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Tharindu De Silva
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Greg M Osgood
- Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
6
Grupp RB, Murphy RJ, Hegeman RA, Alexander CP, Unberath M, Otake Y, McArthur BA, Armand M, Taylor RH. Fast and automatic periacetabular osteotomy fragment pose estimation using intraoperatively implanted fiducials and single-view fluoroscopy. Phys Med Biol 2020; 65:245019. PMID: 32590372. DOI: 10.1088/1361-6560/aba089.
Abstract
Accurate and consistent mental interpretation of fluoroscopy to determine the position and orientation of acetabular bone fragments in 3D space is difficult. We propose a computer assisted approach that uses a single fluoroscopic view and quickly reports the pose of an acetabular fragment without any user input or initialization. Intraoperatively, but prior to any osteotomies, two constellations of metallic ball-bearings (BBs) are injected into the wing of a patient's ilium and lateral superior pubic ramus. One constellation is located on the expected acetabular fragment, and the other is located on the remaining, larger, pelvis fragment. The 3D locations of each BB are reconstructed using three fluoroscopic views and 2D/3D registrations to a preoperative CT scan of the pelvis. The relative pose of the fragment is established by estimating the movement of the two BB constellations using a single fluoroscopic view taken after osteotomy and fragment relocation. BB detection and inter-view correspondences are automatically computed throughout the processing pipeline. The proposed method was evaluated on a multitude of fluoroscopic images collected from six cadaveric surgeries performed bilaterally on three specimens. Mean fragment rotation error was 2.4 ± 1.0 degrees, mean translation error was 2.1 ± 0.6 mm, and mean 3D lateral center edge angle error was 1.0 ± 0.5 degrees. The average runtime of the single-view pose estimation was 0.7 ± 0.2 s. The proposed method demonstrates accuracy similar to other state of the art systems which require optical tracking systems or multiple-view 2D/3D registrations with manual input. The errors reported on fragment poses and lateral center edge angles are within the margins required for accurate intraoperative evaluation of femoral head coverage.
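Estimating the motion of a BB constellation between two time points is, at its core, a rigid point-set alignment problem. A minimal least-squares (Kabsch) sketch, not the paper's single-view pipeline, with a hypothetical constellation and known ground-truth motion:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Hypothetical BB constellation (mm) and a known fragment rotation/translation.
bbs = np.array([[0., 0., 0.], [30., 0., 0.], [0., 30., 0.], [0., 0., 30.]])
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
t_true = np.array([2.0, -1.0, 0.5])
moved = bbs @ R_true.T + t_true                  # constellation after relocation
R_est, t_est = rigid_fit(bbs, moved)
```

In the cited method the post-osteotomy BB positions are not measured directly in 3D but inferred from a single fluoroscopic view; the rigid fit above only illustrates the pose parameterization being recovered.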
Affiliation(s)
- R B Grupp
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
7
Fotouhi J, Fuerst B, Unberath M, Reichenstein S, Lee SC, Johnson AA, Osgood GM, Armand M, Navab N. Automatic intraoperative stitching of nonoverlapping cone-beam CT acquisitions. Med Phys 2018; 45:2463-2475. PMID: 29569728. DOI: 10.1002/mp.12877.
Abstract
PURPOSE Cone-beam computed tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and nonoverlapping CBCT volumes to enable 3D measurements on large anatomical structures. METHODS A CBCT-capable mobile C-arm is augmented with a red-green-blue-depth (RGBD) camera. An offline cocalibration of the two imaging modalities results in coregistered video, infrared, and x-ray views of the surgical scene. Then, automatic stitching of multiple small, nonoverlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. RESULTS On an animal cadaver, we show stitching errors as low as 0.33, 0.91, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. CONCLUSIONS The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures.
Affiliation(s)
- Javad Fotouhi
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Bernhard Fuerst
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Stefan Reichenstein
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Sing Chun Lee
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Alex A Johnson
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Greg M Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Mehran Armand
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD, USA
- Nassir Navab
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
8
Yi T, Ramchandran V, Siewerdsen JH, Uneri A. Robotic drill guide positioning using known-component 3D-2D image registration. J Med Imaging (Bellingham) 2018; 5:021212. PMID: 29430481. DOI: 10.1117/1.jmi.5.2.021212.
Abstract
A method for x-ray image-guided robotic instrument positioning is reported and evaluated in preclinical studies of spinal pedicle screw placement with the aim of improving delivery of transpedicle K-wires and screws. The known-component (KC) registration algorithm was used to register the three-dimensional patient CT and drill guide surface model to intraoperative two-dimensional radiographs. Resulting transformations, combined with offline hand-eye calibration, drive the robotically held drill guide to target trajectories defined in the preoperative CT. The method was assessed in comparison with a more conventional tracker-based approach, and robustness to clinically realistic errors was tested in phantom and cadaver. Deviations from planned trajectories were analyzed in terms of target registration error (TRE) at the tooltip (mm) and approach angle (deg). In phantom studies, the KC approach resulted in [Formula: see text] and [Formula: see text], comparable with accuracy in tracker-based approach. In cadaver studies with realistic anatomical deformation, the KC approach yielded [Formula: see text] and [Formula: see text], with statistically significant improvement versus tracker ([Formula: see text] and [Formula: see text]). Robustness to deformation is attributed to relatively local rigidity of anatomy in radiographic views. X-ray guidance offered accurate robotic positioning and could fit naturally within clinical workflow of fluoroscopically guided procedures.
Affiliation(s)
- Thomas Yi
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Vignesh Ramchandran
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
9
Dang H, Stayman JW, Xu J, Zbijewski W, Sisniega A, Mow M, Wang X, Foos DH, Aygun N, Koliatsos VE, Siewerdsen JH. Task-based statistical image reconstruction for high-quality cone-beam CT. Phys Med Biol 2017; 62:8693-8719. PMID: 28976368. DOI: 10.1088/1361-6560/aa90fd.
Abstract
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize local task-based detectability index ([Formula: see text]). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in [Formula: see text], and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization approach in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
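For reference, the PWLS objective underlying the three compared penalties can be written in a generic form (standard notation, not reproduced from the paper):

```latex
\hat{\mu} = \arg\min_{\mu} \; (y - A\mu)^{T} W (y - A\mu) + \sum_{j} \beta_{j}\, R_{j}(\mu)
```

where $y$ denotes the (log-transformed) projection measurements, $A$ the forward projector, $W$ a diagonal matrix of statistical weights, and $R_j$ a local quadratic roughness penalty at voxel $j$. The distinction among the tested methods lies in $\beta_j$: constant for the conventional penalty, chosen to enforce a uniform PSF for the certainty-based penalty, and spatially optimized to maximize the local detectability index for the task-driven penalty.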
Affiliation(s)
- Hao Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
10
Marinetto E, Uneri A, De Silva T, Reaungamornrat S, Zbijewski W, Sisniega A, Vogt S, Kleinszig G, Pascau J, Siewerdsen JH. Integration of free-hand 3D ultrasound and mobile C-arm cone-beam CT: Feasibility and characterization for real-time guidance of needle insertion. Comput Med Imaging Graph 2017; 58:13-22. PMID: 28414927. DOI: 10.1016/j.compmedimag.2017.03.003.
Abstract
This work presents the development of an integrated ultrasound (US)-cone-beam CT (CBCT) system for image-guided needle interventions, combining a low-cost ultrasound system (Interson VC 7.5 MHz, Pleasanton, CA) with a mobile C-arm for fluoroscopy and CBCT via use of a surgical tracker. Imaging performance of the ultrasound system was characterized in terms of depth-dependent contrast-to-noise ratio (CNR) and spatial resolution. The US-CBCT system was evaluated in phantom studies simulating three needle-based procedures: drug delivery, tumor ablation, and lumbar puncture. The low-cost ultrasound provided flexibility but exhibited modest CNR and spatial resolution, likely limiting it to fairly superficial applications within a ~10 cm depth of view. Needle tip localization demonstrated a target registration error of 2.1-3.0 mm using fiducial-based registration.
Affiliation(s)
- E Marinetto
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- A Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- S Reaungamornrat
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- S Vogt
- Siemens Healthcare XP Division, Erlangen, Germany
- G Kleinszig
- Siemens Healthcare XP Division, Erlangen, Germany
- J Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
11
Dang H, Stayman JW, Sisniega A, Zbijewski W, Xu J, Wang X, Foos DH, Aygun N, Koliatsos VE, Siewerdsen JH. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head. Phys Med Biol 2016; 62:539-559. PMID: 28033118. DOI: 10.1088/1361-6560/aa52b8.
Abstract
A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has recently been developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) nontruncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel sizes, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.
Affiliation(s)
- Hao Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD 21205, USA
12
Zhang W, Wang X, Zhang J, Shen G. Application of preoperative registration and automatic tracking technique for image-guided maxillofacial surgery. Comput Assist Surg (Abingdon) 2016; 21:137-142. PMID: 27973961. DOI: 10.1080/24699322.2016.1187767.
Affiliation(s)
- Wenbin Zhang
- Department of Oral and Cranio-maxillofacial Science, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Huang Pu District, Shanghai, China
- Xudong Wang
- Department of Oral and Cranio-maxillofacial Science, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Huang Pu District, Shanghai, China
- Jianfei Zhang
- Department of Oral and Cranio-maxillofacial Science, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Huang Pu District, Shanghai, China
- Guofang Shen
- Department of Oral and Cranio-maxillofacial Science, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Huang Pu District, Shanghai, China
Collapse
13
Madan H, Pernuš F, Likar B, Špiclin Ž. A framework for automatic creation of gold-standard rigid 3D–2D registration datasets. Int J Comput Assist Radiol Surg 2016; 12:263-275. [DOI: 10.1007/s11548-016-1482-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2016] [Accepted: 08/31/2016] [Indexed: 10/21/2022]
14
Pourmorteza A, Dang H, Siewerdsen JH, Stayman JW. Reconstruction of difference in sequential CT studies using penalized likelihood estimation. Phys Med Biol 2016; 61:1986-2002. [PMID: 26894795 PMCID: PMC4948746 DOI: 10.1088/0031-9155/61/5/1986] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Characterization of anatomical change and other differences is important in sequential computed tomography (CT) imaging, where a high-fidelity patient-specific prior image is typically present, but is not used, in the reconstruction of subsequent anatomical states. Here, we introduce a penalized likelihood (PL) method called reconstruction of difference (RoD) to directly reconstruct a difference image volume using both the current projection data and the (unregistered) prior image integrated into the forward model for the measurement data. The algorithm utilizes an alternating minimization to find both the registration and reconstruction estimates. This formulation allows direct control over the image properties of the difference image, permitting regularization strategies that inhibit noise and structural differences due to inconsistencies between the prior image and the current data. Additionally, if the change is known to be local, RoD allows local acquisition and reconstruction, as opposed to traditional model-based approaches that require a full support field of view (or other modifications). We compared the performance of RoD to a standard PL algorithm, in simulation studies and using test-bench cone-beam CT data. The performances of local and global RoD approaches were similar, with local RoD providing a significant computational speedup. In comparison across a range of data with differing fidelity, the local RoD approach consistently showed lower error (with respect to a truth image) than PL in both noisy data and sparsely sampled projection scenarios. In a study of the prior image registration performance of RoD, a clinically reasonable capture range was demonstrated. Lastly, the registration algorithm had a broad capture range, and the error for reconstruction of CT data was 35% and 20% less than filtered back-projection for RoD and PL, respectively. RoD has potential for delivering high-quality difference images in a range of sequential clinical scenarios including image-guided surgeries and treatments where accurate and quantitative assessments of anatomical change are desired.
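The core RoD idea, reconstructing only the difference image with the prior folded into the forward model, can be illustrated with a toy linear problem. The system matrix, sizes, and quadratic penalty below are hypothetical; the paper uses a penalized-likelihood objective with a joint registration step:

```python
import numpy as np

# Toy linear sketch of reconstruction of difference (RoD): the prior image
# enters the forward model, and only the difference image d is estimated,
# with regularization applied directly to d.

rng = np.random.default_rng(1)
n, m = 32, 48
A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy system matrix
x_prior = rng.standard_normal(n)
d_true = np.zeros(n)
d_true[10:14] = 2.0                            # local anatomical change
y = A @ (x_prior + d_true) + 0.01 * rng.standard_normal(m)

# Penalized least squares on the difference image d:
#   minimize ||A(x_prior + d) - y||^2 + lam * ||d||^2
lam = 0.01
d_hat = np.linalg.solve(A.T @ A + lam * np.eye(n),
                        A.T @ (y - A @ x_prior))
```

Because the penalty acts on `d` rather than on the full image, the regularization directly suppresses spurious differences, which is the property the abstract highlights.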
Affiliation(s)
- A Pourmorteza
- Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20814, USA
- H Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- J W Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
15
Otake Y, Wang AS, Uneri A, Kleinszig G, Vogt S, Aygun N, Lo SFL, Wolinsky JP, Gokaslan ZL, Siewerdsen JH. 3D–2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation. Phys Med Biol 2016; 60:2075-90. [PMID: 25674851 DOI: 10.1088/0031-9155/60/5/2075] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely 'LevelCheck') to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product) in a manner consistent with natural surgical workflow.
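The gradient correlation similarity metric mentioned in this abstract can be sketched compactly: it is the normalized cross-correlation of horizontal and vertical image gradients, averaged. The test image below is synthetic; the DRR generation and 9-DOF optimizer from the paper are not reproduced:

```python
import numpy as np

# Minimal sketch of a gradient correlation (GC) similarity metric.

def ncc(a, b):
    # Normalized cross-correlation of two arrays (zero-mean, unit-scaled).
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(fixed, moving):
    # Average the NCC of the row-wise and column-wise image gradients.
    gx_f, gy_f = np.gradient(fixed)
    gx_m, gy_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

img = np.add.outer(np.sin(np.linspace(0, 3, 64)),
                   np.cos(np.linspace(0, 3, 64)))
s_same = gradient_correlation(img, img)                        # ~1.0
s_shift = gradient_correlation(img, np.roll(img, 16, axis=0))  # lower score
```

Working on gradients rather than raw intensities makes the metric robust to the low-frequency intensity mismatch typical between radiographs and DRRs, which is one reason this family of metrics is used for 3D-2D registration.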
Affiliation(s)
- Yoshito Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
16
Dang H, Siewerdsen JH, Stayman JW. Prospective regularization design in prior-image-based reconstruction. Phys Med Biol 2015; 60:9515-36. [PMID: 26606653 PMCID: PMC4833649 DOI: 10.1088/0031-9155/60/24/9515] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
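The role of a spatially varying prior-strength map can be illustrated with a hypothetical toy: each voxel j carries its own weight beta_j on agreement with the prior. The map here is hand-picked; the paper's contribution is an analytical predictor for choosing it prospectively:

```python
import numpy as np

# Toy PIBR with a per-voxel prior-strength map beta:
#   minimize ||A x - y||^2 + sum_j beta_j * (x_j - x_prior_j)^2

rng = np.random.default_rng(2)
n = 24
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy system matrix
x_prior = np.ones(n)
x_true = x_prior.copy()
x_true[8:12] += 1.5                                  # anatomical change
y = A @ x_true + 0.05 * rng.standard_normal(n)

beta = np.full(n, 5.0)     # strong prior where no change is expected
beta[8:12] = 0.1           # weak prior where change should be admitted

# Closed-form quadratic solution: (A'A + B) x = A'y + B x_prior
B = np.diag(beta)
x_hat = np.linalg.solve(A.T @ A + B, A.T @ y + B @ x_prior)
```

Where beta is large, the estimate tracks the prior and suppresses noise; where beta is small, the measured change is admitted. This is the behavior the spatially varying map is designed to balance.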
Affiliation(s)
- Hao Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
17
Dang H, Wang AS, Sussman MS, Siewerdsen JH, Stayman JW. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images. Phys Med Biol 2014; 59:4799-826. [PMID: 25097144 PMCID: PMC4142353 DOI: 10.1088/0031-9155/59/17/4799] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
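The alternating registration/reconstruction strategy described in this abstract can be sketched schematically. A 1-D integer shift stands in for the paper's B-spline free-form deformation, and an identity forward model for the CT system; both are hypothetical simplifications:

```python
import numpy as np

# Schematic alternating scheme in the spirit of dPIRPLE: alternate between
# (1) registering the prior image to the current estimate and
# (2) reconstructing with a quadratic penalty toward the registered prior.

rng = np.random.default_rng(3)
n = 40
x_true = np.zeros(n)
x_true[15:25] = 1.0
prior = np.roll(x_true, 5)                     # misregistered prior image
y = x_true + 0.05 * rng.standard_normal(n)     # noisy data (identity model)

lam = 2.0
x_hat = y.copy()
shift = 0
for _ in range(5):
    # Registration update: best integer shift of the prior onto x_hat.
    shift = min(range(-8, 9),
                key=lambda s: np.sum((np.roll(prior, s) - x_hat) ** 2))
    p = np.roll(prior, shift)
    # Reconstruction update: penalized LS toward the registered prior.
    x_hat = (y + lam * p) / (1.0 + lam)
```

Each pass improves both estimates: a better-registered prior yields a cleaner reconstruction, and a cleaner reconstruction gives the registration step a better target, mirroring the alternating updates in the paper's objective.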
Affiliation(s)
- H Dang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD 21205, USA