1. Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024; 114:102365. [PMID: 38471330] [DOI: 10.1016/j.compmedimag.2024.102365]
Abstract
PURPOSE Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery. METHODS This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging. RESULTS The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation. CONCLUSIONS This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. 
The proposed framework provides access to the preoperative annotations and planning information during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
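The two accuracy metrics quoted above (2D projection distance error and 3D target registration error) can be sketched in a few lines. This is a toy illustration with hypothetical marker coordinates and a simplified pinhole projection matrix standing in for the calibrated LF system geometry, not the authors' implementation:

```python
import numpy as np

def projection_distance_error(pts_mm, ref_mm, P):
    """2D projection distance error: project registered and reference
    marker positions with a 3x4 projection matrix P and measure the
    in-plane distance between the projections."""
    def project(pts):
        h = np.c_[pts, np.ones(len(pts))] @ P.T  # homogeneous projection
        return h[:, :2] / h[:, 2:3]              # perspective divide
    return np.linalg.norm(project(pts_mm) - project(ref_mm), axis=1)

def target_registration_error(pts_mm, ref_mm):
    """3D TRE: Euclidean distance between registered and true targets."""
    return np.linalg.norm(pts_mm - ref_mm, axis=1)

# Hypothetical markers and a toy pinhole geometry (P = [I | 0]):
P = np.hstack([np.eye(3), np.zeros((3, 1))])
ref = np.array([[10.0, 5.0, 500.0], [12.0, -4.0, 510.0]])
reg = ref + np.array([[1.0, 0.5, 2.0], [0.5, 1.0, 1.0]])  # residual error
print(np.median(target_registration_error(reg, ref)))      # median TRE, mm
```

The cadaver results above correspond to medians (and IQRs) of exactly such per-marker distance distributions.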
Affiliation(s)
- Yixuan Huang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu: Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones: Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham: Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
2. Johnston A, Mahesh M, Uneri A, Rypinski TA, Boone JM, Siewerdsen JH. Objective image quality assurance in cone-beam CT: Test methods, analysis, and workflow in longitudinal studies. Med Phys 2024; 51:2424-2443. [PMID: 38354310] [DOI: 10.1002/mp.16983]
Abstract
BACKGROUND Standards for image quality evaluation in multi-detector CT (MDCT) and cone-beam CT (CBCT) are evolving to keep pace with technological advances. A clear need is emerging for methods that facilitate rigorous quality assurance (QA) with up-to-date metrology and streamlined workflow suitable to a range of MDCT and CBCT systems. PURPOSE To evaluate the feasibility and workflow associated with image quality (IQ) assessment in longitudinal studies for MDCT and CBCT with a single test phantom and semiautomated analysis of objective, quantitative IQ metrology. METHODS A test phantom (Corgi™ Phantom, The Phantom Lab, Greenwich, New York, USA) was used in monthly IQ testing over the course of 1 year for three MDCT scanners (one of which presented helical and volumetric scan modes) and four CBCT scanners. Semiautomated software analyzed image uniformity, linearity, contrast, noise, contrast-to-noise ratio (CNR), 3D noise-power spectrum (NPS), modulation transfer function (MTF) in axial and oblique directions, and cone-beam artifact magnitude. The workflow was evaluated using methods adapted from systems/industrial engineering, including value stream process modeling (VSPM), standard work layout (SWL), and standard work control charts (SWCT) to quantify and optimize test methodology in routine practice. The completeness and consistency of DICOM data from each system were also evaluated. RESULTS Quantitative IQ metrology proved valuable in longitudinal QA, with metrics such as the NPS and MTF providing insight into the root causes of various forms of system failure, for example, detector calibration and geometric calibration. Monthly constancy testing showed variations in IQ test metrics owing to system performance as well as phantom setup and provided initial estimates of upper and lower control limits appropriate to QA action levels. 
Rigorous evaluation of QA workflow identified methods to reduce total cycle time to ∼10 min per system, viz., use of a single phantom configuration appropriate to all scanners and Head or Body scan protocols. Numerous gaps in the completeness and consistency of DICOM data were observed for CBCT systems. CONCLUSION An IQ phantom and test methodology were found to be suitable for QA of MDCT and CBCT systems with streamlined workflow appropriate to busy clinical settings.
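Of the metrics listed above, the NPS is among the least standardized in routine practice; one common form is the ensemble DFT estimate from repeated, mean-subtracted noise ROIs. The sketch below uses synthetic white noise and the textbook normalization (so the NPS integrates to the pixel variance); it is an illustration, not the phantom analysis software's implementation:

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm):
    """Ensemble 2D noise-power spectrum estimate from a stack of
    noise-only ROIs: mean-subtract each ROI, average the squared
    DFT magnitudes, and apply the (dx*dy)/(Nx*Ny) normalization."""
    rois = np.array(noise_rois, dtype=float)       # copy; don't mutate input
    rois -= rois.mean(axis=(1, 2), keepdims=True)  # remove DC per ROI
    ny, nx = rois.shape[1:]
    mag2 = np.abs(np.fft.fft2(rois)) ** 2
    return (pixel_mm * pixel_mm / (nx * ny)) * mag2.mean(axis=0)

rng = np.random.default_rng(1)
rois = rng.normal(0.0, 10.0, size=(50, 64, 64))    # white noise, sigma = 10 HU
nps = nps_2d(rois, pixel_mm=0.5)
du = dv = 1.0 / (64 * 0.5)                         # frequency bin width, 1/mm
print(nps.sum() * du * dv)                         # ≈ 100 HU^2 (pixel variance)
```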
Affiliation(s)
- Ashley Johnston: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mahadevappa Mahesh: Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA
- Ali Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Tatiana A Rypinski: Department of Imaging Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas, USA
- John M Boone: Department of Radiology, University of California - Davis, Davis, California, USA
- Jeffrey H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA; Department of Imaging Physics, The University of Texas M. D. Anderson Cancer Center, Houston, Texas, USA
3. Salehjahromi M, Karpinets TV, Sujit SJ, Qayati M, Chen P, Aminu M, Saad MB, Bandyopadhyay R, Hong L, Sheshadri A, Lin J, Antonoff MB, Sepesi B, Ostrin EJ, Toumazis I, Huang P, Cheng C, Cascone T, Vokes NI, Behrens C, Siewerdsen JH, Hazle JD, Chang JY, Zhang J, Lu Y, Godoy MCB, Chung C, Jaffray D, Wistuba I, Lee JJ, Vaporciyan AA, Gibbons DL, Gladish G, Heymach JV, Wu CC, Zhang J, Wu J. Synthetic PET from CT improves diagnosis and prognosis for lung cancer: Proof of concept. Cell Rep Med 2024; 5:101463. [PMID: 38471502] [PMCID: PMC10983039] [DOI: 10.1016/j.xcrm.2024.101463]
Abstract
[18F]Fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) are indispensable components in modern medicine. Although PET can provide additional diagnostic value, it is costly and not universally accessible, particularly in low-income countries. To bridge this gap, we have developed a conditional generative adversarial network pipeline that can produce FDG-PET from diagnostic CT scans based on multi-center multi-modal lung cancer datasets (n = 1,478). Synthetic PET images are validated across imaging, biological, and clinical aspects. Radiologists confirm comparable imaging quality and tumor contrast between synthetic and actual PET scans. Radiogenomics analysis further proves that the dysregulated cancer hallmark pathways of synthetic PET are consistent with actual PET. We also demonstrate the clinical values of synthetic PET in improving lung cancer diagnosis, staging, risk prediction, and prognosis. Taken together, this proof-of-concept study testifies to the feasibility of applying deep learning to obtain high-fidelity PET translated from CT.
Affiliation(s)
- Sheeba J Sujit: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Mohamed Qayati: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Pingjun Chen: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Muhammad Aminu: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Maliazurina B Saad: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Lingzhi Hong: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Ajay Sheshadri: Department of Pulmonary Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Julie Lin: Department of Pulmonary Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Mara B Antonoff: Department of Thoracic and Cardiovascular Surgery, MD Anderson Cancer Center, Houston, TX, USA
- Boris Sepesi: Department of Thoracic and Cardiovascular Surgery, MD Anderson Cancer Center, Houston, TX, USA
- Edwin J Ostrin: Department of General Internal Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Iakovos Toumazis: Department of Health Services Research, MD Anderson Cancer Center, Houston, TX, USA
- Peng Huang: Department of Oncology, The Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins, Baltimore, MD, USA
- Chao Cheng: Institute for Clinical and Translational Research, Baylor College of Medicine, Houston, TX, USA
- Tina Cascone: Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Natalie I Vokes: Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Carmen Behrens: Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Jeffrey H Siewerdsen: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA
- John D Hazle: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA
- Joe Y Chang: Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Jianhua Zhang: Department of Genomic Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Yang Lu: Department of Nuclear Medicine, MD Anderson Cancer Center, Houston, TX, USA
- Myrna C B Godoy: Department of Thoracic Imaging, MD Anderson Cancer Center, Houston, TX, USA
- Caroline Chung: Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA
- David Jaffray: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Ignacio Wistuba: Department of Translational Molecular Pathology, MD Anderson Cancer Center, Houston, TX, USA
- J Jack Lee: Department of Biostatistics, MD Anderson Cancer Center, Houston, TX, USA
- Ara A Vaporciyan: Department of Thoracic and Cardiovascular Surgery, MD Anderson Cancer Center, Houston, TX, USA
- Don L Gibbons: Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Gregory Gladish: Department of Thoracic Imaging, MD Anderson Cancer Center, Houston, TX, USA
- John V Heymach: Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Carol C Wu: Department of Thoracic Imaging, MD Anderson Cancer Center, Houston, TX, USA
- Jianjun Zhang: Department of Genomic Medicine, MD Anderson Cancer Center, Houston, TX, USA; Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA; Lung Cancer Genomics Program, MD Anderson Cancer Center, Houston, TX, USA; Lung Cancer Interception Program, MD Anderson Cancer Center, Houston, TX, USA
- Jia Wu: Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, USA; Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX, USA; Institute for Data Science in Oncology, MD Anderson Cancer Center, Houston, TX, USA
4. Liu SZ, Herbst M, Schaefer J, Weber T, Vogt S, Ritschl L, Kappler S, Kawcak CE, Stewart HL, Siewerdsen JH, Zbijewski W. Feasibility of bone marrow edema detection using dual-energy cone-beam computed tomography. Med Phys 2024; 51:1653-1673. [PMID: 38323878] [DOI: 10.1002/mp.16962]
Abstract
BACKGROUND Dual-energy (DE) detection of bone marrow edema (BME) would be a valuable new diagnostic capability for the emerging orthopedic cone-beam computed tomography (CBCT) systems. However, this imaging task is inherently challenging because of the narrow energy separation between water (edematous fluid) and fat (healthy yellow marrow), requiring precise artifact correction and dedicated material decomposition approaches. PURPOSE We investigate the feasibility of BME assessment using kV-switching DE CBCT with a comprehensive CBCT artifact correction framework and a two-stage projection- and image-domain three-material decomposition algorithm. METHODS DE CBCT projections of quantitative BME phantoms (water containers 100-165 mm in size with inserts presenting various degrees of edema) and an animal cadaver model of BME were acquired on a CBCT test bench emulating the standard wrist imaging configuration of a Multitom Rax twin robotic x-ray system. The slow kV-switching scan protocol involved a 60 kV low energy (LE) beam and a 120 kV high energy (HE) beam switched every 0.5° over a 200° angular span. The DE CBCT data preprocessing and artifact correction framework consisted of (i) projection interpolation onto matched LE and HE projection views, (ii) lag and glare deconvolutions, and (iii) efficient Monte Carlo (MC)-based scatter correction. Virtual non-calcium (VNCa) images for BME detection were then generated by projection-domain decomposition into an aluminum (Al) and polyethylene basis set (to remove beam hardening) followed by three-material image-domain decomposition into water, Ca, and fat. Feasibility of BME detection was quantified in terms of VNCa image contrast and receiver operating characteristic (ROC) curves. Robustness to object size, position in the field of view (FOV), and beam collimation (varied 20-160 mm) was investigated. 
RESULTS The MC-based scatter correction delivered > 69% reduction of cupping artifacts for moderate to wide collimations (> 80 mm beam width), which was essential to achieve accurate DE material decomposition. In a forearm-sized object, a 20% increase in water concentration (edema) of a trabecular bone-mimicking mixture presented as ∼15 HU VNCa contrast using 80-160 mm beam collimations. The variability with respect to object position in the FOV was modest (< 15% coefficient of variation). The areas under the ROC curve were > 0.9. A femur-sized object presented a somewhat more challenging task, resulting in increased sensitivity to object positioning at 160 mm collimation. In animal cadaver specimens, areas of VNCa enhancement consistent with BME were observed in DE CBCT images in regions of MRI-confirmed edema. CONCLUSION Our results indicate that the proposed artifact correction and material decomposition pipeline can overcome the challenges of scatter and limited spectral separation to achieve relatively accurate and sensitive BME detection in DE CBCT. This study provides an important baseline for clinical translation of musculoskeletal DE CBCT to quantitative, point-of-care bone health assessment.
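Per voxel, the image-domain stage of a three-material decomposition like the one described above reduces to a 3x3 linear system: two spectral attenuation measurements plus a volume-conservation constraint determine the water/Ca/fat fractions. The attenuation coefficients below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

# Rows: low-energy attenuation, high-energy attenuation, volume constraint.
# Columns: water, Ca, fat. Coefficients (1/cm) are illustrative only.
MU = np.array([
    [0.227, 1.50, 0.205],   # low-energy channel
    [0.171, 0.60, 0.162],   # high-energy channel
    [1.000, 1.00, 1.000],   # fractions sum to 1
])

def decompose(mu_le, mu_he):
    """Solve for (water, Ca, fat) volume fractions in one voxel from
    the measured low- and high-energy attenuation values."""
    return np.linalg.solve(MU, np.array([mu_le, mu_he, 1.0]))

# A voxel constructed as 30% water, 10% Ca, 60% fat:
f_true = np.array([0.3, 0.1, 0.6])
mu_le, mu_he, _ = MU @ f_true
f = decompose(mu_le, mu_he)
print(f)   # recovers approximately [0.3, 0.1, 0.6] by construction
```

A virtual non-calcium image is then formed by suppressing the Ca fraction and reporting the remaining water/fat mixture, which is what makes edematous fluid visible against marrow fat.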
Affiliation(s)
- Stephen Z Liu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Christopher E Kawcak: Department of Clinical Sciences, Colorado State University College of Veterinary Medicine and Biomedical Sciences, Fort Collins, Colorado, USA
- Holly L Stewart: Department of Clinical Sciences, Colorado State University College of Veterinary Medicine and Biomedical Sciences, Fort Collins, Colorado, USA
- Jeffrey H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Imaging Physics, MD Anderson Cancer Center, Houston, Texas, USA
- Wojciech Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
5. Butz I, Fernandez M, Uneri A, Theodore N, Anderson WS, Siewerdsen JH. Performance assessment of surgical tracking systems based on statistical process control and longitudinal QA. Comput Assist Surg (Abingdon) 2023; 28:2275522. [PMID: 37942523] [DOI: 10.1080/24699322.2023.2275522]
Abstract
A system for performance assessment and quality assurance (QA) of surgical trackers is reported based on principles of geometric accuracy and statistical process control (SPC) for routine longitudinal testing. A simple QA test phantom was designed, where the number and distribution of registration fiducials was determined drawing from analytical models for target registration error (TRE). A tracker testbed was configured with open-source software for measurement of a TRE-based accuracy metric (ε) and jitter (J). Six trackers were tested: 2 electromagnetic (EM, Aurora) and 4 infrared (IR: 1 Spectra, 1 Vega, and 2 Vicra), all from NDI (Waterloo, ON). Phase I SPC analysis of the Shewhart mean (x̄) and standard deviation (s) determined system control limits. Phase II involved weekly QA of each system for up to 32 weeks and identified Pass, Note, Alert, and Failure action rules. The process permitted QA in <1 min. Phase I control limits were established for all trackers: EM trackers exhibited higher upper control limits than IR trackers in ε (EM: x̄_ε ∼ 2.8-3.3 mm; IR: x̄_ε ∼ 1.6-2.0 mm) and jitter (EM: x̄_J ∼ 0.30-0.33 mm; IR: x̄_J ∼ 0.08-0.10 mm), and older trackers showed evidence of degradation, e.g., higher jitter for the older Vicra (p < .05). Phase II longitudinal tests yielded 676 outcomes in which a total of 4 Failures were noted: 3 resolved by intervention (metal interference for EM trackers) and 1 owing to restrictive control limits for a new system (Vega). Weekly tests also yielded 40 Notes and 16 Alerts, each spontaneously resolved in subsequent monitoring.
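The Phase I/Phase II procedure above can be sketched as follows. The 3σ Shewhart limits and the single out-of-limits rule are simplifying assumptions (the study's Pass/Note/Alert/Failure rules are richer), and the weekly ε values are synthetic:

```python
import numpy as np

def shewhart_limits(baseline, n_sigma=3.0):
    """Phase I: center line and control limits estimated from an
    in-control baseline sample of a QA metric (e.g., epsilon, mm)."""
    center = float(np.mean(baseline))
    sigma = float(np.std(baseline, ddof=1))
    return center - n_sigma * sigma, center, center + n_sigma * sigma

def classify(value, lcl, ucl):
    """Phase II: flag a new weekly measurement against the limits."""
    return "Pass" if lcl <= value <= ucl else "Failure"

rng = np.random.default_rng(0)
baseline = 1.8 + 0.1 * rng.standard_normal(20)   # synthetic weekly epsilon, mm
lcl, center, ucl = shewhart_limits(baseline)
print(classify(1.85, lcl, ucl), classify(3.5, lcl, ucl))
```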
Affiliation(s)
- I Butz: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Fernandez: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- N Theodore: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- W S Anderson: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Neurology and Neurosurgery, Johns Hopkins University, Baltimore, MD, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
6. Mayo CS, Feng MU, Brock KK, Kudner R, Balter P, Buchsbaum JC, Caissie A, Covington E, Daugherty EC, Dekker AL, Fuller CD, Hallstrom AL, Hong DS, Hong JC, Kamran SC, Katsoulakis E, Kildea J, Krauze AV, Kruse JJ, McNutt T, Mierzwa M, Moreno A, Palta JR, Popple R, Purdie TG, Richardson S, Sharp GC, Satomi S, Tarbox LR, Venkatesan AM, Witztum A, Woods KE, Yao Y, Farahani K, Aneja S, Gabriel PE, Hadjiiski L, Ruan D, Siewerdsen JH, Bratt S, Casagni M, Chen S, Christodouleas JC, DiDonato A, Hayman J, Kapoor R, Kravitz S, Sebastian S, Von Siebenthal M, Bosch W, Hurkmans C, Yom SS, Xiao Y. Operational Ontology for Oncology (O3): A Professional Society-Based, Multistakeholder, Consensus-Driven Informatics Standard Supporting Clinical and Research Use of Real-World Data From Patients Treated for Cancer. Int J Radiat Oncol Biol Phys 2023; 117:533-550. [PMID: 37244628] [PMCID: PMC10741247] [DOI: 10.1016/j.ijrobp.2023.05.033]
Abstract
PURPOSE The ongoing lack of data standardization severely undermines the potential for automated learning from the vast amount of information routinely archived in electronic health records (EHRs), radiation oncology information systems, treatment planning systems, and other cancer care and outcomes databases. We sought to create a standardized ontology for clinical data, social determinants of health, and other radiation oncology concepts and interrelationships. METHODS AND MATERIALS The American Association of Physicists in Medicine's Big Data Science Committee was initiated in July 2019 to explore common ground from the stakeholders' collective experience of issues that typically compromise the formation of large inter- and intra-institutional databases from EHRs. The Big Data Science Committee adopted an iterative, cyclical approach to engaging stakeholders beyond its membership to optimize the integration of diverse perspectives from the community. RESULTS We developed the Operational Ontology for Oncology (O3), which identified 42 key elements, 359 attributes, 144 value sets, and 155 relationships ranked in relative importance of clinical significance, likelihood of availability in EHRs, and the ability to modify routine clinical processes to permit aggregation. Recommendations are provided for best use and development of the O3 to 4 constituencies: device manufacturers, centers of clinical care, researchers, and professional societies. CONCLUSIONS O3 is designed to extend and interoperate with existing global infrastructure and data science standards. The implementation of these recommendations will lower the barriers for aggregation of information that could be used to create large, representative, findable, accessible, interoperable, and reusable data sets to support the scientific objectives of grant programs. 
The construction of comprehensive "real-world" data sets and application of advanced analytical techniques, including artificial intelligence, holds the potential to revolutionize patient management and improve outcomes by leveraging increased access to information derived from larger, more representative data sets.
Affiliation(s)
- Dan Ruan: University of California, Los Angeles
- Sue S Yom: University of California, San Francisco
7. Stewart HL, Siewerdsen JH, Selberg KT, Bills KW, Kawcak CE. Cone-beam computed tomography produces images of numerically comparable diagnostic quality for bone and inferior quality for soft tissues compared with fan-beam computed tomography in cadaveric equine metacarpophalangeal joints. Vet Radiol Ultrasound 2023; 64:1033-1036. [PMID: 37947254] [DOI: 10.1111/vru.13309]
Abstract
Cone-beam computed tomography (CBCT) is an emerging modality for imaging of the equine patient. The objective of this prospective, descriptive, exploratory study was to assess visualization tasks using CBCT compared with conventional fan-beam CT (FBCT) for imaging of the metacarpophalangeal joint in equine cadavers. Satisfaction scores were numerically excellent with both CBCT and FBCT for bone evaluation, and FBCT was numerically superior for soft tissue evaluation. Preference tests indicated FBCT was numerically superior for soft tissue evaluation, while preference test scoring for bone was observer-dependent. Findings from this study can be used as background for future studies evaluating CBCT image quality in live horses.
Affiliation(s)
- Holly L Stewart: Department of Clinical Studies, New Bolton Center, University of Pennsylvania, Kennett Square, Pennsylvania, USA
- Jeffrey H Siewerdsen: Department of Imaging Physics, Neurosurgery, and Radiation Physics, The University of Texas M.D. Anderson Cancer Center, Houston, Texas, USA
- Kurt T Selberg: Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins, Colorado, USA
- Kathryn W Bills: Department of Clinical Studies, New Bolton Center, University of Pennsylvania, Kennett Square, Pennsylvania, USA
- Christopher E Kawcak: Department of Clinical Sciences, Colorado State University, Fort Collins, Colorado, USA
8. Mekki L, Sheth NM, Vijayan RC, Rohleder M, Sisniega A, Kleinszig G, Vogt S, Kunze H, Osgood GM, Siewerdsen JH, Uneri A. Surgical navigation for guidewire placement from intraoperative fluoroscopy in orthopaedic surgery. Phys Med Biol 2023; 68:215001. [PMID: 37774711] [DOI: 10.1088/1361-6560/acfec4]
Abstract
Objective. Surgical guidewires are commonly used in placing fixation implants to stabilize fractures. Accurate positioning of these instruments is challenged by difficulties in 3D reckoning from 2D fluoroscopy. This work aims to enhance accuracy and reduce exposure times by providing 3D navigation for guidewire placement from as little as two fluoroscopic images. Approach. Our approach combines machine learning-based segmentation with the geometric model of the imager to determine the 3D poses of guidewires. Instrument tips are encoded as individual keypoints, and the segmentation masks are processed to estimate the trajectory. Correspondence between detections in multiple views is established using the pre-calibrated system geometry, and the corresponding features are backprojected to obtain the 3D pose. Guidewire 3D directions were computed using both an analytical and an optimization-based method. The complete approach was evaluated in cadaveric specimens with respect to potential confounding effects from the imaging geometry and radiographic scene clutter due to other instruments. Main results. The detection network identified guidewire tips within 2.2 mm and guidewire directions within 1.1°, in 2D detector coordinates. Feature correspondence rejected false detections, particularly in images with other instruments, to achieve 83% precision and 90% recall. Estimating the 3D direction via numerical optimization showed added robustness for guidewires aligned with the gantry rotation plane. Guidewire tips and directions were localized in 3D world coordinates with a median accuracy of 1.8 mm and 2.7°, respectively. Significance. The paper reports a new method for automatic 2D detection and 3D localization of guidewires from pairs of fluoroscopic images. Localized guidewires can be virtually overlaid on the patient's preoperative 3D scan during the intervention. 
Accurate pose determination for multiple guidewires from two images can reduce radiation dose by minimizing the need for repeated imaging and provides quantitative feedback prior to implant placement.
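The multi-view correspondence and backprojection step can be illustrated with a toy two-view triangulation: each 2D tip detection defines a ray from the x-ray source, and the 3D tip is estimated at the midpoint of the common perpendicular where the two rays come closest. Source positions and geometry below are hypothetical, and the paper's optimization-based direction estimate is not reproduced:

```python
import numpy as np

def backproject_ray(source, detector_pt_3d):
    """Ray from the x-ray source through a detected feature lifted to
    its 3D detector position (pre-calibrated geometry assumed)."""
    d = detector_pt_3d - source
    return source, d / np.linalg.norm(d)

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two rays:
    minimize ||(p1 + t1*d1) - (p2 + t2*d2)|| over t1, t2."""
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Hypothetical guidewire tip seen from two source positions:
tip = np.array([5.0, 10.0, 20.0])
s1, s2 = np.array([-500.0, 0.0, 0.0]), np.array([0.0, -500.0, 0.0])
p1, d1 = backproject_ray(s1, tip)   # detector point chosen on the true ray
p2, d2 = backproject_ray(s2, tip)
print(triangulate(p1, d1, p2, d2))  # recovers the tip position
```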
Collapse
Affiliation(s)
- L Mekki
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- N M Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- R C Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- M Rohleder
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- H Kunze
- Siemens Healthineers, Erlangen, Germany
- G M Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston TX, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
9
Ding AS, Lu A, Li Z, Sahu M, Galaiya D, Siewerdsen JH, Unberath M, Taylor RH, Creighton FX. A Self-Configuring Deep Learning Network for Segmentation of Temporal Bone Anatomy in Cone-Beam CT Imaging. Otolaryngol Head Neck Surg 2023; 169:988-998. [PMID: 36883992 DOI: 10.1002/ohn.317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 01/19/2023] [Accepted: 02/19/2023] [Indexed: 03/09/2023]
Abstract
OBJECTIVE Preoperative planning for otologic or neurotologic procedures often requires manual segmentation of relevant structures, which can be tedious and time-consuming. Automated methods for segmenting multiple geometrically complex structures can not only streamline preoperative planning but also augment minimally invasive and/or robot-assisted procedures in this space. This study evaluates a state-of-the-art deep learning pipeline for semantic segmentation of temporal bone anatomy. STUDY DESIGN A descriptive study of a segmentation network. SETTING Academic institution. METHODS A total of 15 high-resolution cone-beam temporal bone computed tomography (CT) data sets were included in this study. All images were co-registered, with relevant anatomical structures (eg, ossicles, inner ear, facial nerve, chorda tympani, bony labyrinth) manually segmented. Predicted segmentations from no new U-Net (nnU-Net), an open-source 3-dimensional semantic segmentation neural network, were compared against ground-truth segmentations using modified Hausdorff distances (mHD) and Dice scores. RESULTS Fivefold cross-validation results comparing nnU-Net predictions with ground-truth labels were as follows: malleus (mHD: 0.044 ± 0.024 mm, Dice: 0.914 ± 0.035), incus (mHD: 0.051 ± 0.027 mm, Dice: 0.916 ± 0.034), stapes (mHD: 0.147 ± 0.113 mm, Dice: 0.560 ± 0.106), bony labyrinth (mHD: 0.038 ± 0.031 mm, Dice: 0.952 ± 0.017), and facial nerve (mHD: 0.139 ± 0.072 mm, Dice: 0.862 ± 0.039). Comparison against atlas-based segmentation propagation showed significantly higher Dice scores for all structures (p < .05). CONCLUSION Using an open-source deep learning pipeline, we demonstrate consistently submillimeter accuracy for semantic CT segmentation of temporal bone anatomy compared to hand-segmented labels.
This pipeline has the potential to greatly improve preoperative planning workflows for a variety of otologic and neurotologic procedures and augment existing image guidance and robot-assisted systems for the temporal bone.
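The two evaluation metrics used in this abstract have simple concrete forms. A minimal sketch (not the study's code) of Dice overlap on binary masks and the modified Hausdorff distance between point sets:

```python
import numpy as np

def dice_score(a, b):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between point sets A, B (N x d arrays):
    the max of the two mean directed point-to-set distances."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

In practice the mHD would be computed on surface voxel coordinates extracted from the predicted and ground-truth labels.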
Affiliation(s)
- Andy S Ding
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Manish Sahu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X Creighton
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
10
Abstract
Since its inception in the early 20th century, interventional radiology (IR) has evolved tremendously and is now a distinct clinical discipline with its own training pathway. The arsenal of modalities at work in IR includes x-ray radiography and fluoroscopy, CT, MRI, US, and molecular and multimodality imaging within hybrid interventional environments. This article briefly reviews the major developments in imaging technology in IR over the past century, summarizes technologies now representative of the standard of care, and reflects on emerging advances in imaging technology that could shape the field in the century ahead. The role of emergent imaging technologies in enabling high-precision interventions is also briefly reviewed, including image-guided ablative therapies.
Affiliation(s)
- Kristy K Brock
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
- Stephen R Chen
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
- Rahul A Sheth
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
- Jeffrey H Siewerdsen
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
11
Wu P, Tersol A, Clackdoyle R, Boone JM, Siewerdsen JH. Cone-beam CT sampling incompleteness: analytical and empirical studies of emerging systems and source-detector orbits. J Med Imaging (Bellingham) 2023; 10:033503. [PMID: 37292190 PMCID: PMC10246836 DOI: 10.1117/1.jmi.10.3.033503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 05/06/2023] [Accepted: 05/19/2023] [Indexed: 06/10/2023] Open
Abstract
Purpose Motivated by emerging cone-beam computed tomography (CBCT) systems and scan orbits, we aim to quantitatively assess the completeness of data for 3D image reconstruction, which in turn relates to "cone-beam artifacts." Fundamental principles of cone-beam sampling incompleteness are considered with respect to an analytical figure-of-merit [FOM, denoted tan(ψmin)] and related to an empirical FOM (denoted zmod) for measurement of cone-beam artifact magnitude in a test phantom. Approach A previously proposed analytical FOM [tan(ψmin), defined as the minimum angle between a point in the 3D image reconstruction and the x-ray source over the scan orbit] was analyzed for a variety of CBCT geometries. A physical test phantom was configured with parallel disk pairs (perpendicular to the z-axis) at various locations throughout the field of view, quantifying cone-beam artifact magnitude in terms of zmod (the relative signal modulation between the disks). Two CBCT systems were considered: an interventional C-arm (Cios Spin 3D; Siemens Healthineers, Forchheim, Germany) and a musculoskeletal extremity scanner (OnSight3D; Carestream Health, Rochester, United States). Simulations and physical experiments were conducted for various source-detector orbits: (a) a conventional 360 deg circular orbit, (b) tilted and untilted semi-circular (196 deg) orbits, (c) multi-source (three x-ray sources distributed along the z axis) semi-circular orbits, and (d) a non-circular (sine-on-sphere, SoS) orbit. The incompleteness of sampling [tan(ψmin)] and magnitude of cone-beam artifacts (zmod) were evaluated for each system and orbit. Results The results show visually and quantitatively the effect of system geometry and scan orbit on cone-beam sampling effects, demonstrating the relationship between analytical tan(ψmin) and empirical zmod.
Advanced source-detector orbits (e.g., three-source and SoS orbits) exhibited superior sampling completeness as quantified by both the analytical and the empirical FOMs. The test phantom and zmod metric were sensitive to variations in CBCT system geometry and scan orbit and provided a surrogate measure of underlying sampling completeness. Conclusion For a given system geometry and source-detector orbit, cone-beam sampling completeness can be quantified analytically (in terms arising from Tuy's condition) and/or empirically (using a test phantom for quantification of cone-beam artifacts). Such analysis provides theoretical and practical insight on sampling effects and the completeness of data for emerging CBCT systems and scan trajectories.
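Using the definition stated in the abstract — the minimum angle, over the scan orbit, between a reconstruction point and the x-ray source, measured from the axial plane — the analytical FOM can be evaluated numerically for any sampled source orbit. The sketch below follows that stated definition and is an illustration, not the authors' code:

```python
import numpy as np

def tan_psi_min(point, orbit_pts, plane_normal=(0.0, 0.0, 1.0)):
    """Sampling-incompleteness FOM tan(psi_min) at a reconstruction point.

    For each sampled source position on the orbit, psi is the angle between
    the point-to-source ray and the orbit (axial) plane; the FOM is the
    minimum of tan(psi) over the orbit (0 for fully sampled points).
    """
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    rays = np.asarray(orbit_pts, float) - np.asarray(point, float)
    out_of_plane = np.abs(rays @ n)                       # component along the normal
    in_plane = np.linalg.norm(rays - np.outer(rays @ n, n), axis=1)
    return float(np.min(out_of_plane / in_plane))
```

For a single circular orbit this reproduces the familiar result that tan(ψmin) grows with distance from the orbit plane, while multi-source or non-circular orbits reduce it.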
Affiliation(s)
- Pengwei Wu
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Aina Tersol
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Rolf Clackdoyle
- Université Grenoble Alpes, CNRS, Grenoble INP, TIMC Laboratory, Grenoble, France
- John M. Boone
- University of California – Davis, Department of Radiology, Sacramento, California, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- The University of Texas M. D. Anderson Cancer Center, Department of Imaging Physics, Houston, Texas, United States
12
Ghinda CD, Stewart R, Totis F, Siewerdsen JH, Anderson WS. Customized External Cranioplasty for Management of Syndrome of Trephined in Nonsurgical Candidates. Oper Neurosurg (Hagerstown) 2023:01787389-990000000-00673. [PMID: 37039593 DOI: 10.1227/ons.0000000000000700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Accepted: 01/26/2023] [Indexed: 04/12/2023] Open
Abstract
BACKGROUND Craniectomies represent a lifesaving neurosurgical procedure for many severe neurological conditions, such as traumatic brain injury. Syndrome of trephined (SoT) is an important complication of decompressive craniectomy, and cranial reconstruction is the definitive treatment. However, many patients cannot undergo surgical intervention because of neurological status, healing of the primary surgical wound, or the presence of concurrent infection, which may prevent cranioplasty. OBJECTIVE To offer a customized external cranioplasty option for managing skull deformities for patients who could not undergo surgical intervention for definitive cranioplasty. METHODS We describe the design and clinical application of an external cranioplasty for a patient with a medical history of intractable epilepsy, for which she underwent multiple right cerebral resections with a large resultant skull defect and SoT. RESULTS The patient had resolution of symptoms and restoration of a symmetrical skull contour with no complication at 17 months. CONCLUSION Customized external cranioplasty can improve symptoms associated with SoT for patients who cannot undergo a definitive cranioplasty. In addition, inset monitoring options, such as electroencephalography or telemetric intracranial pressure sensors, could be incorporated in the future for comprehensive monitoring of the patient's neurological condition.
Affiliation(s)
- Cristina D Ghinda
- Department of Neurosurgery, OhioHealth Mansfield Hospital, Mansfield, Ohio, USA
- Functional Neurosurgery Laboratory, Department of Neurosurgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Ryan Stewart
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Francesca Totis
- Functional Neurosurgery Laboratory, Department of Neurosurgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Faculty of Medicine, Humanitas University, Milan, Italy
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- William S Anderson
- Functional Neurosurgery Laboratory, Department of Neurosurgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
13
Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023; 50:2607-2624. [PMID: 36906915 PMCID: PMC10175241 DOI: 10.1002/mp.16351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 02/03/2023] [Accepted: 02/27/2023] [Indexed: 03/13/2023] Open
Abstract
BACKGROUND Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention. PURPOSE To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality. METHODS The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon in clinical data. 
RESULTS CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining improvement in image quality, yielding 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images. CONCLUSIONS DL-Recon leveraged uncertainty estimation to combine the strengths of DL and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
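The core DL-Recon combination step described above — blending the synthetic CT with the artifact-corrected FBP reconstruction using spatially varying weights from epistemic uncertainty — can be illustrated with a toy weighting. The exact uncertainty-to-weight mapping is not given in this abstract, so the monotone mapping below is an assumption for illustration only:

```python
import numpy as np

def combine_dl_recon(fbp, synthetic, epistemic_var, tau=1.0):
    """Blend synthetic-CT and FBP voxels with spatially varying weights.

    Voxels with high epistemic uncertainty (unseen features, e.g. lesions)
    draw more from the physics-based FBP reconstruction; low-uncertainty
    voxels draw more from the DL synthesis. The weight mapping
    w = var / (var + tau) is illustrative, not the published formulation.
    """
    w = epistemic_var / (epistemic_var + tau)   # -> 0 when certain, -> 1 when uncertain
    return w * fbp + (1.0 - w) * synthetic
```

The design choice is that in regions the synthesis network has never seen, the output falls back gracefully to the physics-based reconstruction rather than hallucinated content.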
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech B Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Junghoon Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, Maryland, USA
- Craig K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Mark Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, Maryland, USA
- William S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
14
Lu A, Huang H, Hu Y, Zbijewski W, Unberath M, Siewerdsen JH, Weiss CR, Sisniega A. Deformable Motion Compensation for Intraprocedural Vascular Cone-beam CT with Sequential Projection Domain Targeting and Vessel-Enhancing Autofocus. Proc SPIE Int Soc Opt Eng 2023; 12466:124660P. [PMID: 37937266 PMCID: PMC10629230 DOI: 10.1117/12.2652137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2023]
Abstract
Purpose Cone-beam CT (CBCT) is used in interventional radiology (IR) for identification of complex vascular anatomy, difficult to visualize in 2D fluoroscopy. However, long acquisition time makes CBCT susceptible to soft-tissue deformable motion that degrades visibility of fine vessels. We propose a targeted framework to compensate for deformable intra-scan motion via learned full-sequence models for identification of vascular anatomy coupled to an autofocus function specifically tailored to vascular imaging. Methods The vessel-targeted autofocus acts in two stages: (i) identification of vascular and catheter targets in the projection domain; and, (ii) autofocus optimization for a 4D vector field through an objective function that quantifies vascular visibility. Target identification is based on a deep learning model that operates on the complete sequence of projections, via a transformer encoder-decoder architecture that uses spatial-temporal self-attention modules to infer long-range feature correlations, enabling identification of vascular anatomy with highly variable conspicuity. The vascular autofocus function is derived through eigenvalues of the local image Hessian, which quantify the local image structure for identification of bright tubular structures. Motion compensation was achieved via spatial transformer operators that impart time dependent deformations to NPAR = 90 partial angle reconstructions, allowing for efficient minimization via gradient backpropagation. The framework was trained and evaluated in synthetic abdominal CBCTs obtained from liver MDCT volumes and including realistic models of contrast-enhanced vascularity with 15 to 30 end branches, 1 - 3.5 mm vessel diameter, and 1400 HU contrast. Results The targeted autofocus resulted in qualitative and quantitative improvement in vascular visibility in both simulated and clinical intra-procedural CBCT. 
The transformer-based target identification module resulted in superior detection of target vascularity and a lower number of false positives, compared to a baseline U-Net model acting on individual projection views, reflected as a 1.97x improvement in intersection-over-union values. Motion compensation in simulated data yielded improved conspicuity of vascular anatomy, and reduced streak artifacts and blurring around vessels, as well as recovery of shape distortion. These improvements amounted to an average 147% improvement in cross correlation computed against the motion-free ground truth, relative to the un-compensated reconstruction. Conclusion Targeted autofocus yielded improved visibility of vascular anatomy in abdominal CBCT, providing better potential for intra-procedural tracking of fine vascular anatomy in 3D images. The proposed method poses an efficient solution to motion compensation in task-specific imaging, with future application to a wider range of imaging scenarios.
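The vessel-enhancing autofocus function described above rests on eigenvalues of the local image Hessian, which respond strongly to bright tubular structures. A minimal 2D sketch of that idea follows (a bright ridge produces a strongly negative eigenvalue across the ridge); the specific response function is illustrative and not the paper's exact objective:

```python
import numpy as np

def ridge_response(img):
    """Bright-tubular-structure response from local Hessian eigenvalues (2D).

    For a bright ridge, the eigenvalue across the ridge is strongly negative;
    the response is the magnitude of the most negative eigenvalue, clipped at 0.
    """
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)          # np.gradient returns (d/dy, d/dx)
    gxy, gxx = np.gradient(gx)
    # Closed-form eigenvalues of the per-pixel 2x2 Hessian [[gxx, gxy], [gxy, gyy]]
    half_tr = 0.5 * (gxx + gyy)
    root = np.sqrt(0.25 * (gxx - gyy) ** 2 + gxy ** 2)
    lam_min = half_tr - root
    return np.maximum(0.0, -lam_min)

def autofocus_value(img):
    """Scalar autofocus objective: total vessel-like structure in the image."""
    return float(ridge_response(img).sum())
```

In the motion-compensation loop, such an objective would be evaluated on the deformed reconstruction and maximized over the motion parameters.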
Affiliation(s)
- Alexander Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Heyuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Wojtek Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Departments of Imaging Physics, Neurosurgery, and Radiation Physics, The University of Texas M.D. Anderson Cancer Center, TX, USA
- Clifford R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
15
Huang H, Siewerdsen JH, Lu A, Hu Y, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Multi-Stage Adaptive Spline Autofocus (MASA) with a Learned Metric for Deformable Motion Compensation in Interventional Cone-Beam CT. Proc SPIE Int Soc Opt Eng 2023; 12463:1246314. [PMID: 37937146 PMCID: PMC10629227 DOI: 10.1117/12.2654361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2023]
Abstract
Purpose Cone-beam CT (CBCT) is widespread in abdominal interventional imaging, but its long acquisition time makes it susceptible to patient motion. Image-based autofocus has shown success in CBCT deformable motion compensation, via deep autofocus metrics and multi-region optimization, but it is challenged by the large parameter dimensionality required to capture intricate motion trajectories. This work leverages the differentiable nature of deep autofocus metrics to build a novel optimization strategy, Multi-Stage Adaptive Spline Autofocus (MASA), for compensation of complex deformable motion in abdominal CBCT. Methods MASA poses the autofocus problem as a multi-stage adaptive sampling strategy of the motion trajectory, sampled with Hermite spline basis with variable amplitude and knot temporal positioning. The adaptive method permits simultaneous optimization of the sampling phase, local temporal sampling density, and time-dependent amplitude of the motion trajectory. The optimization is performed in a multi-stage schedule with increasing number of knots that progressively accommodates complex trajectories in late stages, preconditioned by coarser components from early stages, and with minimal increase in dimensionality. MASA was evaluated in controlled simulation experiments with two types of motion trajectories: i) combinations of slow drifts with sudden jerk (sigmoid) motion; and ii) combinations of periodic motion sources of varying frequency into multi-frequency trajectories. Further validation was obtained in clinical data from liver CBCT featuring motion of contrast-enhanced vessels, and soft-tissue structures. Results The adaptive sampling strategy provided successful motion compensation in sigmoid trajectories, compared to fixed sampling strategies (mean SSIM increase of 0.026 compared to 0.011).
Inspection of the estimated motion showed the capability of MASA to automatically allocate larger sampling density to parts of the scan timeline featuring sudden motion, effectively accommodating complex motion without increasing the problem dimension. Experiments on multi-frequency trajectories with 3-stage MASA (5, 10, and 15 knots) yielded a twofold SSIM increase compared to single-stage autofocus with 15 knots (0.076 vs 0.040, respectively). Application of MASA to clinical datasets resulted in simultaneous improvement on the delineation of both contrast-enhanced vessels and soft-tissue structures in the liver. Conclusion A new autofocus framework, MASA, was developed including a novel multi-stage technique for adaptive temporal sampling of the motion trajectory in combination with fully differentiable deep autofocus metrics. This novel adaptive sampling approach is a crucial step for application of deformable motion compensation to complex temporal motion trajectories.
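The trajectory parameterization described above — a cubic Hermite spline whose knot times and amplitudes are both free parameters — can be sketched directly. The evaluator below is a generic NumPy-only illustration (not the MASA implementation); denser knot spacing near sudden motion is the adaptive idea:

```python
import numpy as np

def hermite_eval(t, knots_t, knots_p, knots_m):
    """Evaluate a cubic Hermite spline at times t.

    knots_t : knot times (sorted); knots_p : amplitudes; knots_m : slopes.
    Knot times are free parameters, so sampling density can be concentrated
    around portions of the scan with sudden motion.
    """
    t = np.atleast_1d(np.asarray(t, float))
    knots_t = np.asarray(knots_t, float)
    knots_p = np.asarray(knots_p, float)
    knots_m = np.asarray(knots_m, float)
    # Locate the spline segment containing each query time
    idx = np.clip(np.searchsorted(knots_t, t, side="right") - 1, 0, len(knots_t) - 2)
    t0, t1 = knots_t[idx], knots_t[idx + 1]
    h = t1 - t0
    s = (t - t0) / h
    # Standard cubic Hermite basis functions
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return (h00 * knots_p[idx] + h10 * h * knots_m[idx]
            + h01 * knots_p[idx + 1] + h11 * h * knots_m[idx + 1])
```

In a multi-stage schedule, a coarse few-knot fit would precondition a later fit with more knots evaluated through the same routine.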
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston TX USA
- A Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- Y Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD USA
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
16
Shi G, Quevedo Gonzalez FJ, Breighner RE, Carrino JA, Siewerdsen JH, Zbijewski W. Effects of non-stationary blur on texture biomarkers of bone using Ultra-High Resolution CT. Proc SPIE Int Soc Opt Eng 2023; 12468:1246813. [PMID: 38226358 PMCID: PMC10788132 DOI: 10.1117/12.2654304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/17/2024]
Abstract
Purpose To advance the development of radiomic models of bone quality using the recently introduced Ultra-High Resolution CT (UHR CT), we investigate the inter-scan reproducibility of trabecular bone texture features with respect to the spatially-variant azimuthal and radial blurs associated with focal spot elongation and gantry rotation. Methods The UHR CT system features 250×250 μm detector pixels and an x-ray source with a 0.4×0.5 mm focal spot. Visualization of details down to ~150 μm has been reported for this device. A cadaveric femur was imaged on UHR CT at three radial locations within the field-of-view: 0 cm (isocenter), 9 cm from the isocenter, and 18 cm from the isocenter; we expect the non-stationary blurs to worsen with increasing radial displacement. Gray-level co-occurrence matrix (GLCM) and gray-level run-length matrix (GLRLM) texture features were extracted from 237 trabecular regions of interest (ROIs, 5 cm diameter) placed at corresponding locations in the femoral head in scans obtained at the different shifts. We evaluated the concordance correlation coefficient (CCC) between texture features at 0 cm (reference) and at 9 cm and 18 cm. We also investigated whether the spatially-variant blurs affect K-means clustering of trabecular bone ROIs based on their texture features. Results The average CCCs (against the 0 cm reference) for GLCM and GLRLM features were ~0.7 at 9 cm. At 18 cm, the average CCCs were reduced to ~0.17 for GLCM and ~0.26 for GLRLM. The non-stationary blurs are incorporated in radiomic features of cancellous bone, leading to inconsistencies in clustering of trabecular ROIs between different radial locations: an intersection-over-union overlap of corresponding (most similar) clusters between 0 cm and 9 cm shift was >70%, but dropped to <60% for the majority of corresponding clusters between 0 cm and 18 cm shift.
Conclusion Non-stationary CT system blurs reduce inter-scan reproducibility of texture features of trabecular bone in UHR CT, especially for locations >15 cm from the isocenter. Radiomic models of bone quality derived from UHR CT measurements at isocenter might need to be revised before application in peripheral body sites such as the hips.
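The reproducibility measure used in this study, the concordance correlation coefficient, has a simple closed form (Lin's CCC). A minimal sketch, not the study's code:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.

    Unlike Pearson correlation, CCC penalizes both location (mean) and scale
    (variance) shifts, so it drops below 1 even for perfectly correlated but
    biased measurements.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Applied per texture feature across the 237 ROIs, values near 1 indicate that the feature is reproducible between radial positions.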
Affiliation(s)
- G Shi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
- F J Quevedo Gonzalez
- Department of Biomechanics, Hospital for Special Surgery, New York, NY USA 10021
- R E Breighner
- Department of Biomechanics, Hospital for Special Surgery, New York, NY USA 10021
- J A Carrino
- Hospital for Special Surgery, Radiology & Imaging, New York, NY USA 10021
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA 21205
17
Ibad HA, de Cesar Netto C, Shakoor D, Sisniega A, Liu S, Siewerdsen JH, Carrino JA, Zbijewski W, Demehri S. Computed Tomography: State-of-the-Art Advancements in Musculoskeletal Imaging. Invest Radiol 2023; 58:99-110. [PMID: 35976763 PMCID: PMC9742155 DOI: 10.1097/rli.0000000000000908] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Although musculoskeletal magnetic resonance imaging (MRI) plays a dominant role in characterizing abnormalities, novel computed tomography (CT) techniques have found an emerging niche in several scenarios such as trauma, gout, and the characterization of pathologic biomechanical states during motion and weight-bearing. Recent developments and advancements in the field of musculoskeletal CT include 4-dimensional, cone-beam (CB), and dual-energy (DE) CT. Four-dimensional CT has the potential to quantify biomechanical derangements of peripheral joints in different joint positions to diagnose and characterize patellofemoral instability, scapholunate ligamentous injuries, and syndesmotic injuries. Cone-beam CT provides an opportunity to image peripheral joints during weight-bearing, augmenting the diagnosis and characterization of disease processes. Emerging CBCT technologies improved spatial resolution for osseous microstructures in the quantitative analysis of osteoarthritis-related subchondral bone changes, trauma, and fracture healing. Dual-energy CT-based material decomposition visualizes and quantifies monosodium urate crystals in gout, bone marrow edema in traumatic and nontraumatic fractures, and neoplastic disease. Recently, DE techniques have been applied to CBCT, contributing to increased image quality in contrast-enhanced arthrography, bone densitometry, and bone marrow imaging. This review describes 4-dimensional CT, CBCT, and DECT advances, current logistical limitations, and prospects for each technique.
Affiliation(s)
- Hamza Ahmed Ibad
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Cesar de Cesar Netto
- Department of Orthopaedics and Rehabilitation, Carver College of Medicine, University of Iowa, Iowa City, IA, USA
- Delaram Shakoor
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Stephen Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- John A. Carrino
- Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Shadpour Demehri
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA

18
Vijayan R, Sheth N, Mekki L, Lu A, Uneri A, Sisniega A, Magaraggia J, Kleinszig G, Vogt S, Thiboutot J, Lee H, Yarmus L, Siewerdsen JH. 3D-2D image registration in the presence of soft-tissue deformation in image-guided transbronchial interventions. Phys Med Biol 2022; 68. [PMID: 36317269 DOI: 10.1088/1361-6560/ac9e3c] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 10/27/2022] [Indexed: 11/06/2022]
Abstract
Purpose. Target localization in pulmonary interventions (e.g. transbronchial biopsy of a lung nodule) is challenged by deformable motion and may benefit from fluoroscopic overlay of the target to provide accurate guidance. We present and evaluate a 3D-2D image registration method for fluoroscopic overlay in the presence of tissue deformation using a multi-resolution/multi-scale (MRMS) framework with an objective function that drives registration primarily by soft-tissue image gradients. Methods. The MRMS method registers 3D cone-beam CT to 2D fluoroscopy without gating of respiratory phase by coarse-to-fine resampling and global-to-local rescaling about target regions-of-interest. A variation of the gradient orientation (GO) similarity metric (denoted GO') was developed to downweight bone gradients and drive registration via soft-tissue gradients. Performance was evaluated in terms of projection distance error at isocenter (PDEiso). Phantom studies determined nominal algorithm parameters and capture range. Preclinical studies used a freshly deceased, ventilated porcine specimen to evaluate performance in the presence of real tissue deformation and a broad range of 3D-2D image mismatch. Results. Nominal algorithm parameters were identified that provided robust performance over a broad range of motion (0-20 mm), including an adaptive parameter selection technique to accommodate unknown mismatch in respiratory phase. The GO' metric yielded median PDEiso = 1.2 mm, compared to 6.2 mm for conventional GO. Preclinical studies with real lung deformation demonstrated median PDEiso = 1.3 mm with MRMS + GO' registration, compared to 2.2 mm with a conventional transform. Runtime was 26 s and can be reduced to 2.5 s given a prior registration within ∼5 mm as initialization. Conclusions. MRMS registration via soft-tissue gradients achieved accurate fluoroscopic overlay in the presence of deformable lung motion.
By driving registration via soft-tissue image gradients, the method avoided false local minima presented by bones and was robust to a wide range of motion magnitude.
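The projection distance error (PDE) figure of merit used above can be illustrated with a minimal sketch: project a 3D target through a projection matrix under the estimated and true registration transforms and measure the 2D discrepancy. The matrix and poses below are hypothetical, for illustration only, and this is not the authors' implementation.

```python
import numpy as np

def project(P, x):
    """Project a 3D point x (length-3) to 2D via a 3x4 projection matrix P."""
    xh = P @ np.append(x, 1.0)
    return xh[:2] / xh[2]

def pde(P, targets, T_est, T_true):
    """Mean 2D projection distance error between estimated and true poses.

    P:       3x4 projection matrix
    targets: list of 3D target points
    T_est, T_true: 4x4 homogeneous transforms (estimated and ground truth)
    """
    def apply(T, x):
        return (T @ np.append(x, 1.0))[:3]
    d = [np.linalg.norm(project(P, apply(T_est, t)) - project(P, apply(T_true, t)))
         for t in targets]
    return float(np.mean(d))
```

With an identity estimate the error is zero; any pose mismatch registers as a 2D distance in the detector plane, which can then be scaled to isocenter by the system magnification.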
Affiliation(s)
- R Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- N Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- L Mekki
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- S Vogt
- Siemens Healthineers, Erlangen, Germany
- J Thiboutot
- Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, United States of America
- H Lee
- Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, United States of America
- L Yarmus
- Division of Pulmonary and Critical Care Medicine, Johns Hopkins Medical Institution, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America

19
Siewerdsen JH. Image quality models for 2D and 3D x-ray imaging systems: A perspective vignette. Med Phys 2022. [PMID: 36542332 DOI: 10.1002/mp.16051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 10/12/2022] [Accepted: 10/12/2022] [Indexed: 12/24/2022] Open
Abstract
Image quality models based on cascaded systems analysis and task-based imaging performance were an important aspect of the emergence of 2D and 3D digital x-ray systems over the last 25 years. This perspective vignette offers a cursory review of such developments and personal insights that may not be obvious within previously published scientific literature. The vignette traces such models to the mid-1990s, when flat-panel x-ray detectors were emerging as a new base technology for digital radiography and benefited from the rigorous, objective characterization of imaging performance gained from such models. The connection of models for spatial resolution and noise to spatial-frequency-dependent descriptors of imaging task provided a useful framework for system optimization that helped to accelerate the development of new technologies to first clinical use. Extension of the models to new technologies and applications is also described, including dual-energy imaging, photon-counting detectors, phase contrast imaging, tomosynthesis, cone-beam CT, 3D image reconstruction, and image registration.
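As a concrete anchor for the task-based models described, two standard quantities from the cascaded-systems literature (generic textbook forms, not equations quoted from this paper) are the noise-equivalent quanta and the ideal-observer detectability index:

```latex
\mathrm{NEQ}(f) = \frac{\mathrm{MTF}^2(f)}{\mathrm{NNPS}(f)},
\qquad
{d'}^{2} = \int \left| W_{\mathrm{task}}(f) \right|^{2} \, \mathrm{NEQ}(f) \, df
```

where MTF is the modulation transfer function, NNPS the normalized noise-power spectrum, and W_task the spatial-frequency task function connecting system performance to the imaging task.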
Affiliation(s)
- Jeffrey H Siewerdsen
- Departments of Imaging Physics, Neurosurgery, and Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA; Director of Surgical Data Science, Institute for Data Science in Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA

20
Huang Y, Jones CK, Zhang X, Johnston A, Waktola S, Aygun N, Witham TF, Bydon A, Theodore N, Helm PA, Siewerdsen JH, Uneri A. Multi-perspective region-based CNNs for vertebrae labeling in intraoperative long-length images. Comput Methods Programs Biomed 2022; 227:107222. [PMID: 36370597 DOI: 10.1016/j.cmpb.2022.107222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 10/31/2022] [Accepted: 11/02/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE Effective aggregation of intraoperative x-ray images that capture the patient anatomy from multiple view-angles has the potential to enable and improve automated image analysis that can be readily performed during surgery. We present multi-perspective region-based neural networks that leverage knowledge of the imaging geometry for automatic vertebrae labeling in Long-Film images - a novel tomographic imaging modality with an extended field-of-view for spine imaging. METHODS A multi-perspective network architecture was designed to exploit small view-angle disparities produced by a multi-slot collimator and consolidate information from overlapping image regions. A second network incorporates large view-angle disparities to jointly perform labeling on images from multiple views (viz., AP and lateral). A recurrent module incorporates contextual information and enforces anatomical order for the detected vertebrae. The three modules are combined to form the multi-view multi-slot (MVMS) network for labeling vertebrae using images from all available perspectives. The network was trained on images synthesized from 297 CT images and tested on 50 AP and 50 lateral Long-Film images acquired from 13 cadaveric specimens. Labeling performance of the multi-perspective networks was evaluated with respect to the number of vertebrae appearances and presence of surgical instrumentation. RESULTS The MVMS network achieved an F1 score of >96% and an average vertebral localization error of 3.3 mm, with 88.3% labeling accuracy on both AP and lateral images (15.5% and 35.0% higher than conventional Faster R-CNN on AP and lateral views, respectively). Aggregation of multiple appearances of the same vertebra using the multi-slot network significantly improved the labeling accuracy (p < 0.05). Using the multi-view network, labeling accuracy on the more challenging lateral views was improved to the same level as that of the AP views.
The approach demonstrated robustness to the presence of surgical instrumentation, commonly encountered in intraoperative images, and achieved comparable performance in images with and without instrumentation (88.9% vs. 91.2% labeling accuracy). CONCLUSION The MVMS network demonstrated effective multi-perspective aggregation, providing means for accurate, automated vertebrae labeling during spine surgery. The algorithms may be generalized to other imaging tasks and modalities that involve multiple views with view-angle disparities (e.g., bi-plane radiography). Predicted labels can help avoid adverse events during surgery (e.g., wrong-level surgery), establish correspondence with labels in preoperative modalities to facilitate image registration, and enable automated measurement of spinal alignment metrics for intraoperative assessment of spinal curvature.
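The F1 score reported above is the standard harmonic mean of precision and recall over detected vertebrae; a minimal sketch of the generic definition (not the authors' evaluation code):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score from detection counts: true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)
```

For example, 96 correctly labeled vertebrae with 2 spurious detections and 2 misses yields F1 ≈ 0.98, consistent with the >96% level reported.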
Affiliation(s)
- Y Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- C K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore MD, United States
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- A Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- S Waktola
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States
- N Aygun
- Department of Radiology, Johns Hopkins Medicine, Baltimore MD, United States
- T F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
- A Bydon
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
- N Theodore
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States
- P A Helm
- Medtronic, Littleton MA, United States
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States; Department of Computer Science, Johns Hopkins University, Baltimore MD, United States; Department of Radiology, Johns Hopkins Medicine, Baltimore MD, United States; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston TX, United States
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States.

21
Liu Y, Ota M, Han R, Siewerdsen JH, Liu TYA, Jones CK. Active shape model registration of ocular structures in computed tomography images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac9a98] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Accepted: 10/14/2022] [Indexed: 11/11/2022]
Abstract
Purpose. The goal of this work is to create an active shape model segmentation method based on a statistical shape model of five regions of the globe on computed tomography (CT) scans and to use the method to distinguish normal globes from globes with injury. Methods. A set of 78 normal globes imaged with CT scans were manually segmented (vitreous cavity, lens, sclera, anterior chamber, and cornea) by two graders. A statistical shape model was created from the regions. An active shape model was trained using the manual segmentations and the statistical shape model and was assessed using leave-one-out cross validation. The active shape model was then applied to a set of globes with open globe injuries (OGI), and the segmentations were compared to those of normal globes in terms of the standard deviations away from normal. Results. The active shape model (ASM) segmentation compared well to ground truth, based on Dice similarity coefficient score in a leave-one-out experiment: 90.2% ± 2.1% for the cornea, 92.5% ± 3.5% for the sclera, 87.4% ± 3.7% for the vitreous cavity, 83.5% ± 2.3% for the anterior chamber, and 91.2% ± 2.4% for the lens. A preliminary set of CT scans of patients with open globe injury were segmented using the ASM and the shape of each region was quantified. The sclera and vitreous cavity were statistically different in shape from normal. The Zone 1 and Zone 2 globes were statistically different from normal in the cornea and anterior chamber. Both results are consistent with the definition of the zonal injuries in OGI. Conclusion. The ASM results were found to be reproducible and accurately correlated with manual segmentations. The quantitative metrics derived from ASM of globes with OGI are consistent with existing medical knowledge in terms of structural deformation.
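The Dice similarity coefficient used for validation above has a simple closed form; a minimal sketch over binary masks (the generic definition, not the study's code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

A value of 1.0 indicates perfect overlap between automated and manual segmentations, 0.0 no overlap; the ~0.83-0.93 range reported above corresponds to substantial overlap for small ocular structures.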
22
Hrinivich WT, Chernavsky NE, Morcos M, Li T, Wu P, Wong J, Siewerdsen JH. Effect of subject motion and gantry rotation speed on image quality and dose delivery in CT-guided radiotherapy. Med Phys 2022; 49:6840-6855. [PMID: 35880711 DOI: 10.1002/mp.15877] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 06/22/2022] [Accepted: 07/03/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE To investigate the effects of subject motion and gantry rotation speed on computed tomography (CT) image quality over a range of image acquisition speeds for fan-beam (FB) and cone-beam (CB) CT scanners, and quantify the geometric and dosimetric errors introduced by FB and CB sampling in the context of adaptive radiotherapy. METHODS Images of motion phantoms were acquired using four CT scanners with gantry rotation speeds of 0.5 s/rotation (denoted FB-0.5), 1.9 s/rotation (FB-1.9), 16.6 s/rotation (CB-16.6), and 60.0 s/rotation (CB-60.0). A phantom presenting various tissue densities undergoing motion with 4-s period and ranging in amplitude from ±0.5 to ±10.0 mm was used to characterize motion artifacts (streaks), motion blur (edge-spread function, ESF), and geometric inaccuracy (excursion of insert centroids and distortion of known shape). An anthropomorphic abdomen phantom undergoing ±2.5-mm motion with 4-s period was used to simulate an adaptive radiotherapy workflow, and relative geometric and dosimetric errors were compared between scanners. RESULTS At ±2.5-mm motion, phantom measurements demonstrated mean ± SD ESF widths of 0.6 ± 0.0, 1.3 ± 0.4, 2.0 ± 1.1, and 2.9 ± 2.0 mm and geometric inaccuracy (excursion) of 2.7 ± 0.4, 4.1 ± 1.2, 2.6 ± 0.7, and 2.0 ± 0.5 mm for the FB-0.5, FB-1.9, CB-16.6, and CB-60.0 scanners, respectively. The results demonstrated nonmonotonic trends with scanner speed for FB and CB geometries. Geometric and dosimetric errors in adaptive radiotherapy plans were largest for the slowest (CB-60.0) scanner and similar for the three faster systems (CB-16.6, FB-1.9, and FB-0.5). CONCLUSIONS Clinically standard CB-60.0 demonstrates strong image quality degradation in the presence of subject motion, which is mitigated through faster CBCT or FBCT. Although motion blur is minimized for FB-0.5 and FB-1.9, such systems suffer from increased geometric distortion compared to CB-16.6. 
Each system reflects tradeoffs in image artifacts and geometric inaccuracies that affect treatment delivery/dosimetric error and should be considered in the design of next-generation CT-guided radiotherapy systems.
Affiliation(s)
- William T Hrinivich
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
- Nicole E Chernavsky
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Marc Morcos
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
- Taoran Li
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- John Wong
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA

23
Beisemann N, Tilk AM, Gierse J, Grützner PA, Franke J, Siewerdsen JH, Vetter SY. Detection of fibular rotational changes in cone beam CT: experimental study in a specimen model. BMC Med Imaging 2022; 22:181. [PMID: 36261814 PMCID: PMC9583469 DOI: 10.1186/s12880-022-00913-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Accepted: 10/10/2022] [Indexed: 11/21/2022] Open
Abstract
Background In syndesmotic injuries, incorrect reduction leads to early arthrosis of the ankle joint. Being able to analyze the reduction result is therefore crucial for obtaining an anatomical reduction. Several studies that assess fibular rotation in the incisura have already been published. The aim of the study was to validate measurement methods that use cone beam computed tomography imaging to detect rotational malpositions of the fibula in a standardized specimen model. Methods An artificial Maisonneuve injury was created on 16 pairs of fresh-frozen lower legs. Using a stable instrument, rotational malpositions of 5, 10, and 15° internal and external rotation were generated. For each malposition of the fibula, a cone beam computed tomography scan was performed. Subsequently, the malpositions were measured and statistically evaluated with t-tests using two measuring methods: angle (γ) at 10 mm proximal to the tibial joint line and the angle (δ) at 6 mm distal to the talar joint line. Results Rotational malpositions of ≥ 10° could be reliably displayed in the 3D images using the measuring method with angle δ. For angle γ significant results could only be displayed for an external rotation malposition of 15°. Conclusions Clinically relevant rotational malpositions of the fibula in comparison with an uninjured contralateral side can be reliably detected using intraoperative 3D imaging with a C-arm cone beam computed tomography. This may allow surgeons to achieve better reduction of fibular malpositions in the incisura tibiofibularis.
Affiliation(s)
- Nils Beisemann
- MINTOS-Medical Imaging and Navigation in Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen at Heidelberg University Hospital, Ludwig-Guttmann-Str. 13, 67071, Ludwigshafen, Germany
- Antonella M Tilk
- MINTOS-Medical Imaging and Navigation in Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen at Heidelberg University Hospital, Ludwig-Guttmann-Str. 13, 67071, Ludwigshafen, Germany
- Jula Gierse
- MINTOS-Medical Imaging and Navigation in Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen at Heidelberg University Hospital, Ludwig-Guttmann-Str. 13, 67071, Ludwigshafen, Germany
- Paul A Grützner
- MINTOS-Medical Imaging and Navigation in Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen at Heidelberg University Hospital, Ludwig-Guttmann-Str. 13, 67071, Ludwigshafen, Germany
- Jochen Franke
- MINTOS-Medical Imaging and Navigation in Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen at Heidelberg University Hospital, Ludwig-Guttmann-Str. 13, 67071, Ludwigshafen, Germany
- Sven Y Vetter
- MINTOS-Medical Imaging and Navigation in Trauma and Orthopaedic Surgery, BG Trauma Center Ludwigshafen at Heidelberg University Hospital, Ludwig-Guttmann-Str. 13, 67071, Ludwigshafen, Germany.

24
Ding AS, Lu A, Li Z, Galaiya D, Ishii M, Siewerdsen JH, Taylor RH, Creighton FX. Automated Extraction of Anatomical Measurements From Temporal Bone CT Imaging. Otolaryngol Head Neck Surg 2022; 167:731-738. [PMID: 35133916 PMCID: PMC9357851 DOI: 10.1177/01945998221076801] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 01/10/2022] [Indexed: 11/17/2022]
Abstract
OBJECTIVE Proposed methods of minimally invasive and robot-assisted procedures within the temporal bone require measurements of surgically relevant distances and angles, which often require time-consuming manual segmentation of preoperative imaging. This study aims to describe an automatic segmentation and measurement extraction pipeline of temporal bone cone-beam computed tomography (CT) scans. STUDY DESIGN Descriptive study of temporal bone measurements. SETTING Academic institution. METHODS A propagation template composed of 16 temporal bone CT scans was formed with relevant anatomical structures and landmarks manually segmented. Next, 52 temporal bone CT scans were autonomously segmented using deformable registration techniques from the Advanced Normalization Tools Python package. Anatomical measurements were extracted via in-house Python algorithms. Extracted measurements were compared to ground truth values from manual segmentations. RESULTS Paired t test analyses showed no statistical difference between measurements using this pipeline and ground truth measurements from manually segmented images. Mean (SD) malleus manubrium length was 4.39 (0.34) mm. Mean (SD) incus short and long processes were 2.91 (0.18) mm and 3.53 (0.38) mm, respectively. The mean (SD) maximal diameter of the incus long process was 0.74 (0.17) mm. The first and second facial nerve genus had mean (SD) angles of 68.6 (6.7) degrees and 111.1 (5.3) degrees, respectively. The facial recess had a mean (SD) span of 3.21 (0.46) mm. Mean (SD) minimum distance between the external auditory canal and tegmen was 3.79 (1.05) mm. CONCLUSIONS This is the first study to automatically extract relevant temporal bone anatomical measurements from CT scans using segmentation propagation. Measurements from these models can streamline preoperative planning, improve future segmentation techniques, and help develop future image-guided or robot-assisted systems for temporal bone procedures.
Affiliation(s)
- Andy S. Ding
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Masaru Ishii
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA

25
Hatamikia S, Biguri A, Herl G, Kronreif G, Reynolds T, Kettenbach J, Russ T, Tersol A, Maier A, Figl M, Siewerdsen JH, Birkfellner W. Source-detector trajectory optimization in cone-beam computed tomography: a comprehensive review on today’s state-of-the-art. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8590] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2022] [Accepted: 07/29/2022] [Indexed: 11/12/2022]
Abstract
Cone-beam computed tomography (CBCT) imaging is becoming increasingly important for a wide range of applications such as image-guided surgery, image-guided radiation therapy as well as diagnostic imaging such as breast and orthopaedic imaging. The potential benefits of non-circular source-detector trajectories were recognized in early work to improve the completeness of CBCT sampling and extend the field of view (FOV). Another important feature of interventional imaging is that prior knowledge of patient anatomy such as a preoperative CBCT or prior CT is commonly available. This provides the opportunity to integrate such prior information into the image acquisition process through customized CBCT source-detector trajectories. Such customized trajectories can be designed in order to optimize task-specific imaging performance, providing intervention- or patient-specific imaging settings. The recently developed robotic CBCT C-arms as well as novel multi-source CBCT imaging systems with additional degrees of freedom provide the possibility to largely expand the scanning geometries beyond the conventional circular source-detector trajectory. This recent development has inspired the research community to pursue enhanced image quality by modifying the acquisition geometry, as opposed to hardware or algorithms. The recently proposed techniques in this field facilitate image quality improvement, FOV extension, radiation dose reduction, metal artifact reduction as well as 3D imaging under kinematic constraints. Because of the great practical value and the increasing importance of CBCT imaging in image-guided therapy for clinical and preclinical applications as well as in industry, this paper focuses on the review and discussion of the available literature in the CBCT trajectory optimization field. To the best of our knowledge, this paper is the first study to provide an exhaustive literature review of customized CBCT trajectory methods and to update the community with in-depth information on current progress and future trends.
26
Liu SZ, Tivnan M, Osgood GM, Siewerdsen JH, Stayman JW, Zbijewski W. Model-based three-material decomposition in dual-energy CT using the volume conservation constraint. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7a8b] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 06/20/2022] [Indexed: 01/13/2023]
Abstract
Objective. We develop a model-based optimization algorithm for ‘one-step’ dual-energy (DE) CT decomposition of three materials directly from projection measurements. Approach. Since the three-material problem is inherently underdetermined, we incorporate the volume conservation principle (VCP) as a pair of equality and nonnegativity constraints into the objective function of the recently reported model-based material decomposition (MBMD). An optimization algorithm (constrained MBMD, CMBMD) is derived that utilizes voxel-wise separability to partition the volume into a VCP-constrained region solved using interior-point iterations, and an unconstrained region (air surrounding the object, where VCP is violated) solved with conventional two-material MBMD. CMBMD is validated in simulations and experiments in application to bone composition measurements in the presence of metal hardware using DE cone-beam CT (CBCT). A kV-switching protocol with non-coinciding low- and high-energy (LE and HE) projections was assumed. CMBMD with decomposed base materials of cortical bone, fat, and metal (titanium, Ti) is compared to MBMD with (i) fat-bone and (ii) fat-Ti bases. Main results. Three-material CMBMD exhibits a substantial reduction in metal artifacts relative to the two-material MBMD implementations. The accuracies of cortical bone volume fraction estimates are markedly improved using CMBMD, with ∼5–10× lower normalized root mean squared error in simulations with anthropomorphic knee phantoms (depending on the complexity of the metal component) and ∼2–2.5× lower in an experimental test-bench study. Significance. In conclusion, we demonstrated one-step three-material decomposition of DE CT using volume conservation as an optimization constraint.
The proposed method might be applicable to DE applications such as bone marrow edema imaging (fat-bone-water decomposition) or multi-contrast imaging, especially on CT/CBCT systems that do not provide coinciding LE and HE ray paths required for conventional projection-domain DE decomposition.
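The volume conservation constraint described above can be written explicitly. For base-material volume fractions $v_m(\mathbf{x})$ at voxel $\mathbf{x}$ (notation assumed here for illustration, not taken from the paper):

```latex
\sum_{m \in \{\text{bone},\,\text{fat},\,\text{Ti}\}} v_m(\mathbf{x}) = 1,
\qquad v_m(\mathbf{x}) \ge 0 \quad \forall m
```

This is the equality/nonnegativity pair stated in the abstract, enforced only within the VCP-constrained region of the object; voxels of surrounding air, where the sum-to-one equality does not hold, are handled by the unconstrained two-material decomposition.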
27
Abstract
HYPOTHESIS Automated image registration techniques can successfully determine anatomical variation in human temporal bones with statistical shape modeling. BACKGROUND There is a lack of knowledge about inter-patient anatomical variation in the temporal bone. Statistical shape models (SSMs) provide a powerful method for quantifying variation of anatomical structures in medical images but are time-intensive to manually develop. This study presents SSMs of temporal bone anatomy using automated image-registration techniques. METHODS Fifty-three cone-beam temporal bone CTs were included for SSM generation. The malleus, incus, stapes, bony labyrinth, and facial nerve were automatically segmented using 3D Slicer and a template-based segmentation propagation technique. Segmentations were then used to construct SSMs using MATLAB. The first three principal components of each SSM were analyzed to describe shape variation. RESULTS Principal component analysis of middle and inner ear structures revealed novel modes of anatomical variation. The first three principal components for the malleus represented variability in manubrium length (mean: 4.47 mm; ±2-SDs: 4.03-5.03 mm) and rotation about its long axis (±2-SDs: -1.6° to 1.8° posteriorly). The facial nerve exhibits variability in first and second genu angles. The bony labyrinth varies in the angle between the posterior and superior canals (mean: 88.9°; ±2-SDs: 83.7°-95.7°) and cochlear orientation (±2-SDs: -4.0° to 3.0° anterolaterally). CONCLUSIONS SSMs of temporal bone anatomy can inform surgeons on clinically relevant inter-patient variability. Anatomical variation elucidated by these models can provide novel insight into function and pathophysiology. These models also allow further investigation of anatomical variation based on age, BMI, sex, and geographical location.
Affiliation(s)
- Andy S. Ding
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Alexander Lu
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Deepa Galaiya
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Masaru Ishii
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Francis X. Creighton
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
28
Sheth N, Vagdargi P, Sisniega A, Uneri A, Osgood G, Siewerdsen JH. Preclinical evaluation of a prototype freehand drill video guidance system for orthopedic surgery. J Med Imaging (Bellingham) 2022; 9:045004. [PMID: 36046335 PMCID: PMC9411797 DOI: 10.1117/1.jmi.9.4.045004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/09/2022] [Indexed: 08/28/2023] Open
Abstract
Purpose: Internal fixation of pelvic fractures is a challenging task requiring the placement of instrumentation within complex three-dimensional bone corridors, typically guided by fluoroscopy. We report a system for two- and three-dimensional guidance using a drill-mounted video camera and fiducial markers with evaluation in first preclinical studies. Approach: The system uses a camera affixed to a surgical drill and multimodality (optical and radio-opaque) markers for real-time trajectory visualization in fluoroscopy and/or CT. Improvements to a previously reported prototype include hardware components (mount, camera, and fiducials) and software (including a system for detecting marker perturbation) to address practical requirements necessary for translation to clinical studies. Phantom and cadaver experiments were performed to quantify the accuracy of video-fluoroscopy and video-CT registration, the ability to detect marker perturbation, and the conformance in placing guidewires along realistic pelvic trajectories. The performance was evaluated in terms of geometric accuracy and conformance within bone corridors. Results: The studies demonstrated successful guidewire delivery in a cadaver, with a median entry point error of 1.00 mm (1.56 mm IQR) and median angular error of 1.94 deg (1.23 deg IQR). Such accuracy was sufficient to guide K-wire placement through five of the six trajectories investigated with a strong level of conformance within bone corridors. The sixth case demonstrated a cortical breach due to extrema in the registration error. The system was able to detect marker perturbations and alert the user to potential registration issues. Feasible workflows were identified for orthopedic-trauma scenarios involving emergent cases (with no preoperative imaging) or cases with preoperative CT. Conclusions: A prototype system for guidewire placement was developed providing guidance that is potentially compatible with orthopedic-trauma workflow. 
First preclinical (cadaver) studies demonstrated accurate guidance of K-wire placement in pelvic bone corridors and the ability to automatically detect perturbations that degrade registration accuracy. The preclinical prototype demonstrated performance and utility supporting translation to clinical studies.
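Aligning optical/radio-opaque fiducial markers between the camera and CT/fluoroscopy coordinate frames is typically posed as least-squares point-based rigid registration; below is a minimal sketch using the standard SVD (Kabsch) solution with a fiducial registration error helper. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def kabsch(fixed, moving):
    """Least-squares rigid transform (R, t) mapping moving -> fixed points."""
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)        # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

def fre(fixed, moving, R, t):
    """Fiducial registration error: RMS residual after alignment."""
    resid = fixed - (moving @ R.T + t)
    return float(np.sqrt((resid**2).sum(axis=1).mean()))
```

A low FRE does not by itself guarantee low target registration error at the surgical target, which is why the system's perturbation detection matters.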
Affiliation(s)
- Niral Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Alejandro Sisniega
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Gregory Osgood
- Johns Hopkins Medicine, Department of Orthopedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
29
Ding AS, Lu A, Li Z, Galaiya D, Siewerdsen JH, Taylor RH, Creighton FX. Automated Registration-Based Temporal Bone Computed Tomography Segmentation for Applications in Neurotologic Surgery. Otolaryngol Head Neck Surg 2022; 167:133-140. [PMID: 34491849 PMCID: PMC10072909 DOI: 10.1177/01945998211044982] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
OBJECTIVE This study investigates the accuracy of an automated method to rapidly segment relevant temporal bone anatomy from cone beam computed tomography (CT) images. Implementation of this segmentation pipeline has potential to improve surgical safety and decrease operative time by augmenting preoperative planning and interfacing with image-guided robotic surgical systems. STUDY DESIGN Descriptive study of predicted segmentations. SETTING Academic institution. METHODS We have developed a computational pipeline based on the symmetric normalization registration method that predicts segmentations of anatomic structures in temporal bone CT scans using a labeled atlas. To evaluate accuracy, we created a data set by manually labeling relevant anatomic structures (eg, ossicles, labyrinth, facial nerve, external auditory canal, dura) for 16 deidentified high-resolution cone beam temporal bone CT images. Automated segmentations from this pipeline were compared against ground-truth manual segmentations by using modified Hausdorff distances and Dice scores. Runtimes were documented to determine the computational requirements of this method. RESULTS Modified Hausdorff distances and Dice scores between predicted and ground-truth labels were as follows: malleus (0.100 ± 0.054 mm; Dice, 0.827 ± 0.068), incus (0.100 ± 0.033 mm; Dice, 0.837 ± 0.068), stapes (0.157 ± 0.048 mm; Dice, 0.358 ± 0.100), labyrinth (0.169 ± 0.100 mm; Dice, 0.838 ± 0.060), and facial nerve (0.522 ± 0.278 mm; Dice, 0.567 ± 0.130). A quad-core 16GB RAM workstation completed this segmentation pipeline in 10 minutes. CONCLUSIONS We demonstrated submillimeter accuracy for automated segmentation of temporal bone anatomy when compared against hand-segmented ground truth using our template registration pipeline. This method is not dependent on the training data volume that plagues many complex deep learning models. 
Favorable runtime and low computational requirements underscore this method's translational potential.
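The evaluation metrics named above, Dice score and modified Hausdorff distance, can be computed as follows. This is a generic sketch: the modified Hausdorff is taken here as the maximum of the two mean directed distances (one common definition; the paper may use a variant):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def modified_hausdorff(pts_a, pts_b):
    """Modified Hausdorff distance between two point sets (n, d) and (m, d):
    the larger of the two mean directed nearest-neighbor distances."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

Averaging the directed distances (rather than taking the worst-case maximum, as in the classical Hausdorff distance) makes the metric far less sensitive to single outlier surface points.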
Affiliation(s)
- Andy S Ding
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X Creighton
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
30
Zhang X, Uneri A, Huang Y, Jones CK, Witham TF, Helm PA, Siewerdsen JH. Deformable 3D-2D image registration and analysis of global spinal alignment in long-length intraoperative spine imaging. Med Phys 2022; 49:5715-5727. [PMID: 35762028 DOI: 10.1002/mp.15819] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 06/03/2022] [Accepted: 06/13/2022] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Spinal deformation during surgical intervention (caused by patient positioning and/or correction of malalignment) confounds conventional navigation due to assumptions of rigid transformation. Moreover, the ability to accurately quantify spinal alignment in the operating room would provide assessment of the surgical product via metrics that correlate with clinical outcome. PURPOSE A method for deformable 3D-2D registration of preoperative CT to intraoperative long-length tomosynthesis images is reported for accurate 3D evaluation of device placement in the presence of spinal deformation and automated evaluation of global spinal alignment (GSA). METHODS Long-length tomosynthesis ("Long Film", LF) images were acquired using an O-arm™ imaging system (Medtronic, Minneapolis USA). A deformable 3D-2D patient registration was developed using multi-scale masking (proceeding from the full-length image to local subvolumes about each vertebra) to transform vertebral labels and planning information from preoperative CT to the LF images. Automatic measurement of GSA [Main Thoracic Kyphosis (MThK) and Lumbar Lordosis (LL)] was obtained using a spline fit to registered labels. The "Known-Component Registration" (KC-Reg) method for device registration was adapted to the multi-scale process for 3D device localization from orthogonal LF images. The multi-scale framework was evaluated using a deformable spine phantom in which pedicle screws were inserted, and deformations were induced over a range in LL ∼25-80°. Further validation was carried out in a cadaver study with implanted pedicle screws and a similar range of spinal deformation. The accuracy of patient and device registration was evaluated in terms of 3D translational error and target registration error (TRE), respectively, and the accuracy of automatic GSA measurements were compared to manual annotation. 
RESULTS Phantom studies demonstrated accurate registration via the multi-scale framework for all vertebral levels in both the neutral and deformed spine: median (interquartile range, IQR) patient registration error was 1.1 mm (0.7-1.9 mm IQR). Automatic measures of MThK and LL agreed with manual delineation within -1.1° ± 2.2° and 0.7° ± 2.0° (mean and standard deviation), respectively. Device registration error was 0.7 mm (0.4-1.0 mm IQR) at the screw tip and 0.9° (1.0°-1.5°) about the screw trajectory. Deformable 3D-2D registration significantly outperformed conventional rigid registration (p < 0.05), which exhibited device registration error of 2.1 mm (0.8-4.1 mm) and 4.1° (1.2°-9.5°). Cadaver studies verified performance under realistic conditions, demonstrating patient registration error of 1.6 mm (0.9-2.1 mm); MThK within -4.2° ± 6.8° and LL within 1.7° ± 3.5°; and device registration error of 0.8 mm (0.5-1.9 mm) and 0.7° (0.4°-1.2°) for the multi-scale deformable method, compared to 2.5 mm (1.0-7.9 mm) and 2.3° (1.6°-8.1°) for rigid registration (p < 0.05). CONCLUSION The deformable 3D-2D registration framework leverages long-length intraoperative imaging to achieve accurate patient and device registration over extended lengths of the spine (up to 64 cm) even with strong anatomical deformation. The method offers a new means for quantitative validation of spinal correction (intraoperative GSA measurement) and 3D verification of device placement in comparison to preoperative images and planning data.
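As a simplified illustration of spline-based alignment measurement, a smooth curve fit through registered vertebral centroids yields angles between tangents at chosen levels. Note that clinical Cobb-style angles are conventionally measured from vertebral endplates, so this centroid-tangent sketch (with illustrative names and a polynomial in place of the paper's spline) is only a rough analogue of the GSA computation described:

```python
import numpy as np

def alignment_angle(centroids, level_a, level_b, deg=4):
    """Angle (degrees) between curve tangents at two vertebral levels.

    centroids: (n, 2) array of (superior-inferior, anterior-posterior)
    vertebral centroid coordinates ordered along the spine.
    """
    s, ap = centroids[:, 0], centroids[:, 1]
    coeffs = np.polyfit(s, ap, deg)             # smooth curve through centroids
    slopes = np.polyval(np.polyder(coeffs), s)  # tangent slope at each level
    ta, tb = np.arctan(slopes[level_a]), np.arctan(slopes[level_b])
    return float(np.degrees(abs(ta - tb)))
```

With the registered vertebral labels in place, lordosis/kyphosis-like measures reduce to evaluating this angle between the appropriate end levels of the lumbar or thoracic segment.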
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Craig K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD
31
Huang H, Siewerdsen JH, Zbijewski W, Weiss CR, Unberath M, Ehtiati T, Sisniega A. Reference-free learning-based similarity metric for motion compensation in cone-beam CT. Phys Med Biol 2022; 67. [PMID: 35636391 DOI: 10.1088/1361-6560/ac749a] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 05/30/2022] [Indexed: 11/12/2022]
Abstract
Purpose. Patient motion artifacts present a prevalent challenge to image quality in interventional cone-beam CT (CBCT). We propose a novel reference-free similarity metric (DL-VIF) that leverages the capability of deep convolutional neural networks (CNNs) to learn features associated with motion artifacts within realistic anatomical features. DL-VIF aims to address shortcomings of conventional metrics of motion-induced image quality degradation, which favor characteristics associated with motion-free images, such as sharpness or piecewise constancy, but lack any awareness of the underlying anatomy, potentially promoting images depicting unrealistic content. DL-VIF was integrated in an autofocus motion compensation framework to test its performance for motion estimation in interventional CBCT. Methods. DL-VIF is a reference-free surrogate for the previously reported visual information fidelity (VIF) metric, which is computed against a motion-free reference; the surrogate is generated using a CNN trained on simulated motion-corrupted and motion-free CBCT data. Relatively shallow (2-ResBlock) and deep (3-ResBlock) CNN architectures were trained and tested to assess sensitivity to motion artifacts and generalizability to unseen anatomy and motion patterns. DL-VIF was integrated into an autofocus framework for rigid motion compensation in head/brain CBCT and assessed in simulation and cadaver studies in comparison to a conventional gradient entropy metric. Results. The 2-ResBlock architecture better reflected motion severity and extrapolated to unseen data, whereas the 3-ResBlock was found more susceptible to overfitting, limiting its generalizability to unseen scenarios. DL-VIF outperformed gradient entropy in simulation studies, yielding average multi-resolution structural similarity index (SSIM) improvement over the uncompensated image of 0.068 and 0.034, respectively, referenced to motion-free images.
DL-VIF was also more robust in motion compensation, evidenced by reduced variance in SSIM across motion patterns (σ_DL-VIF = 0.008 versus σ_gradient entropy = 0.019). Similarly, in cadaver studies, DL-VIF demonstrated superior motion compensation compared to gradient entropy (an average SSIM improvement of 0.043 (5%) versus little improvement and even degradation in SSIM, respectively) and visually improved image quality even in severely motion-corrupted images. Conclusion. The studies demonstrated the feasibility of building reference-free similarity metrics for quantification of motion-induced image quality degradation and distortion of anatomical structures in CBCT. DL-VIF provides a reliable surrogate for motion severity, penalizes unrealistic distortions, and presents a valuable new objective function for autofocus motion compensation in CBCT.
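The conventional baseline named above, gradient entropy, is straightforward to compute. One common formulation (a sketch; not necessarily the exact variant used in the study) treats normalized gradient magnitudes as a probability distribution, so that sharp images with sparse gradients score lower:

```python
import numpy as np

def gradient_entropy(img, eps=1e-12):
    """Gradient entropy autofocus metric (lower = sharper, sparser gradients)."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy).ravel()          # gradient magnitude per pixel
    p = g / (g.sum() + eps)               # normalize to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())  # Shannon entropy of the distribution
```

An autofocus loop then searches over candidate motion trajectories to minimize such a metric on the reconstructed image; the abstract's point is that purely sharpness-driven metrics of this kind can reward unrealistic image content, which DL-VIF is designed to penalize.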
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- T Ehtiati
- Siemens Medical Solutions USA, Inc., Imaging & Therapy Systems, Hoffman Estates, IL, United States of America
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
32
Han R, Jones CK, Lee J, Zhang X, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Joint synthesis and registration network for deformable MR-CBCT image registration for neurosurgical guidance. Phys Med Biol 2022; 67:10.1088/1361-6560/ac72ef. [PMID: 35609586 PMCID: PMC9801422 DOI: 10.1088/1361-6560/ac72ef] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 05/24/2022] [Indexed: 01/03/2023]
Abstract
Objective. The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. Approach. The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images to the CT domain and perform CT-domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods: symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods. Main results. JSR achieved a median Dice coefficient (DSC) of 0.69 in deep brain structures and a median target registration error (TRE) of 1.94 mm in the simulation dataset, improving on the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods, e.g., SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm), and provided a registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved median DSC = 0.72 and median TRE = 2.05 mm. Significance. The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.
Collapse
Affiliation(s)
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- J Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, United States of America
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P A Helm
- Medtronic Inc., Littleton, MA, United States of America
- M Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
33
Hu Y, Huang H, Siewerdsen JH, Zbijewski W, Unberath M, Weiss CR, Sisniega A. Simulation of Random Deformable Motion in Soft-Tissue Cone-Beam CT with Learned Models. Proc SPIE Int Soc Opt Eng 2022; 12304:1230413. [PMID: 36381251 PMCID: PMC9654724 DOI: 10.1117/12.2646720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Cone-beam CT (CBCT) is widely used for guidance in interventional radiology but is susceptible to motion artifacts. Motion in interventional CBCT features a complex combination of diverse sources, including quasi-periodic, consistent motion patterns such as respiratory motion and aperiodic, quasi-random motion such as peristalsis. Recent developments in image-based motion compensation methods include approaches that combine autofocus techniques with deep learning models for extraction of image features pertinent to CBCT motion. Training of such deep autofocus models requires the generation of large amounts of realistic, motion-corrupted CBCT. Previous works on motion simulation were mostly focused on quasi-periodic motion patterns, and reliable simulation of complex combined motion with quasi-random components remains an unaddressed challenge. This work presents a framework aimed at synthesis of realistic motion trajectories for simulation of deformable motion in soft-tissue CBCT. The approach leveraged the capability of conditional generative adversarial network (GAN) models to learn the complex underlying motion present in unlabeled, motion-corrupted CBCT volumes. The approach is designed for training with unpaired clinical CBCT in an unsupervised fashion. This work presents a first feasibility study, in which the model was trained with simulated data featuring known motion, providing a controlled scenario for validation of the proposed approach prior to extension to clinical data. Our proof-of-concept study illustrated the potential of the model to generate realistic, variable simulation of CBCT deformable motion fields, consistent with three trends underlying the designed training data: (i) the synthetic motion induced only diffeomorphic deformations, with Jacobian determinant larger than zero; (ii) the synthetic motion showed a median displacement of 0.5 mm in regions predominantly static in the training data (e.g., the posterior aspect of a patient lying supine), compared to a median displacement of 3.8 mm in regions more prone to motion; and (iii) the synthetic motion exhibited predominant directionality consistent with the training set, resulting in larger motion in the superior-inferior direction (median and maximum amplitude of 4.58 mm and 20 mm, more than 2× larger than in the two remaining directions). Together, these results show the feasibility of the proposed framework for realistic motion simulation and synthesis of variable CBCT data.
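The diffeomorphism check in trend (i) amounts to verifying that the Jacobian determinant of the deformation x → x + u(x) is positive everywhere; a NumPy sketch using finite differences in voxel units (illustrative only, not the authors' validation code):

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 3D displacement field.

    disp: (3, nx, ny, nz) displacement in voxel units for the transform
    x -> x + disp(x). J = I + grad(disp); det(J) > 0 everywhere indicates
    a locally invertible (folding-free) deformation.
    """
    # grads[i, j] = d(disp_i)/d(x_j), via central finite differences
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)], axis=0)
    J = grads + np.eye(3)[:, :, None, None, None]   # add identity
    J = np.moveaxis(J, (0, 1), (-2, -1))            # 3x3 axes last for det
    return np.linalg.det(J)
```

A fraction of voxels with `det <= 0` is a common summary statistic for folding in learned deformation fields.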
Affiliation(s)
- Y Hu
- Dept. of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- H Huang
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- J H Siewerdsen
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- W Zbijewski
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- M Unberath
- Dept. of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- A Sisniega
- Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
34
Liu SZ, Herbst M, Weber T, Vogt S, Ritschl L, Kappler S, Siewerdsen JH, Zbijewski W. Dual-Energy Cone-Beam CT with Three-Material Decomposition for Bone Marrow Edema Imaging. Proc SPIE Int Soc Opt Eng 2022; 12304:123040Z. [PMID: 38223466 PMCID: PMC10788133 DOI: 10.1117/12.2646391] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/16/2024]
Abstract
We investigate the feasibility of bone marrow edema (BME) detection using a kV-switching dual-energy (DE) cone-beam CT (CBCT) protocol. This task is challenging due to unmatched x-ray paths in the low-energy (LE) and high-energy (HE) spectral channels, CBCT non-idealities such as x-ray scatter, and the narrow spectral separation between fat (bone marrow) and water (BME). We propose a comprehensive DE decomposition framework consisting of projection interpolation onto matching LE and HE view angles, fast Monte Carlo scatter correction with a low number of tracked photons and Gaussian denoising, and a two-stage three-material decomposition involving two-material (fat-aluminum) projection-domain decomposition (PDD) followed by an image-domain three-material (fat-water-bone) base change. Performance in BME detection was evaluated in simulations and experiments emulating a kV-switching CBCT wrist imaging protocol on a robotic x-ray system with a 60 kV LE beam, a 120 kV HE beam, and a 0.5° angular shift between the LE and HE views. Cubic B-spline interpolation was found to be adequate to resample HE and LE projections of a wrist onto the common view angles required by PDD. The DE decomposition maintained acceptable BME detection specificity (<0.2 mL erroneously detected BME volume compared to 0.85 mL true BME volume) over a ±10% range of scatter magnitude errors, as long as the scatter shape was estimated without major distortions. Physical test bench experiments demonstrated successful discrimination of a ~20% change in fat concentration in trabecular bone-mimicking solutions of varying water and fat content.
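In a simplified monoenergetic setting, the two-material projection-domain step reduces to a 2×2 linear solve per ray. The sketch below uses illustrative attenuation coefficients (assumed values, not measured data) and ignores the polyenergetic beam hardening that the actual PDD handles:

```python
import numpy as np

# Assumed mass attenuation coefficients (cm^2/g) -- illustrative values only,
# not calibrated spectra for the 60/120 kV beams described in the study.
MU = {"fat": {"LE": 0.21, "HE": 0.18},
      "al":  {"LE": 0.56, "HE": 0.25}}

def decompose_de(log_atten_le, log_atten_he):
    """Solve A @ [rho_fat, rho_al] = [log attenuations] for two basis materials.

    Linearized (monoenergetic) version of projection-domain decomposition:
    each ray's LE/HE log attenuation is modeled as a weighted sum of the
    basis-material area densities (g/cm^2).
    """
    A = np.array([[MU["fat"]["LE"], MU["al"]["LE"]],
                  [MU["fat"]["HE"], MU["al"]["HE"]]])
    return np.linalg.solve(A, np.array([log_atten_le, log_atten_he]))
```

The conditioning of `A` reflects the narrow spectral separation noted in the abstract: the closer the LE and HE attenuation ratios of the two materials, the more noise is amplified in the solve.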
Affiliation(s)
- Stephen Z. Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
35
Huang H, Siewerdsen JH, Zbijewski W, Weiss CR, Unberath M, Sisniega A. Context-Aware, Reference-Free Local Motion Metric for CBCT Deformable Motion Compensation. Proc SPIE Int Soc Opt Eng 2022; 12304:1230412. [PMID: 36381250 PMCID: PMC9665334 DOI: 10.1117/12.2646857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied for deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics encourage sharp image appearance but do not guarantee the preservation of anatomical structures. Our previous work (DL-VIF) showed that deep convolutional neural networks (CNNs) can reproduce metrics of structural similarity (visual information fidelity, VIF), removing the need for a matched motion-free reference and providing quantification of motion degradation and structural integrity. Application of DL-VIF within local neighborhoods is challenged by the large variability of local image content across a CBCT volume and requires global context information for successful evaluation of motion effects. In this work, we propose a novel deep autofocus metric based on a context-aware, multi-resolution, deep CNN design. In addition to the inclusion of contextual information, the resulting metric generates a voxel-wise distribution of reference-free VIF values. The new metric, denoted CADL-VIF, was trained on simulated CBCT abdomen scans with deformable motion at random locations and with amplitude up to 30 mm. CADL-VIF achieved good correlation with the ground-truth VIF map across all test cases, with R² = 0.843 and slope = 0.941. When integrated into a multi-ROI deformable motion compensation method, CADL-VIF consistently reduced motion artifacts, yielding an average increase in SSIM of 0.129 in regions with severe motion and 0.113 in regions with mild motion.
This work demonstrated the capability of CADL-VIF to recognize anatomical structures and penalize unrealistic images, which is a key step in developing reliable autofocus for complex deformable motion compensation in CBCT.
Affiliation(s)
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Department of Radiology, Johns Hopkins University, Baltimore, MD
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- C R Weiss
- Department of Radiology, Johns Hopkins University, Baltimore, MD
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
36
Siewerdsen JH, Linte CA. SPIE Medical Imaging 50th anniversary: historical review of the Image-Guided Procedures, Robotic Interventions, and Modeling conference. J Med Imaging (Bellingham) 2022; 9:012206. [PMID: 36225968 PMCID: PMC9535146 DOI: 10.1117/1.jmi.9.s1.012206] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 03/22/2022] [Indexed: 11/20/2022] Open
Abstract
Purpose Among the conferences comprising the Medical Imaging Symposium is the MI104 conference currently titled Image-Guided Procedures, Robotic Interventions, and Modeling, although its name has evolved through at least nine iterations over the last 30 years. Here, we discuss the important role that this forum has played for researchers in the field during this time. Approach The origins of the conference are traced from its roots in Image Capture and Display in the late 1980s, and some of the major themes for which the conference and its proceedings have provided a valuable forum are highlighted. Results These major themes include image display/visualization, surgical tracking/navigation, surgical robotics, interventional imaging, image registration, and modeling. Exceptional work from the conference is highlighted by summarizing keynote lectures, the top 50 most downloaded proceedings papers over the last 30 years, the most downloaded paper each year, and the papers earning student paper and young scientist awards. Conclusions Looking forward and considering the burgeoning technologies, algorithms, and markets related to image-guided and robot-assisted interventions, we anticipate growth and ever-increasing quality of the conference as well as increased interaction with sister conferences within the symposium.
37
Sheth NM, Uneri A, Helm PA, Zbijewski W, Siewerdsen JH. Technical assessment of 2D and 3D imaging performance of an IGZO-based flat-panel X-ray detector. Med Phys 2022; 49:3053-3066. [PMID: 35363391 PMCID: PMC10153656 DOI: 10.1002/mp.15605] [Received: 11/22/2021] [Revised: 03/09/2022] [Accepted: 03/09/2022] [Indexed: 11/08/2022]
Abstract
BACKGROUND Indirect detection flat-panel detectors (FPDs) consisting of hydrogenated amorphous silicon (a-Si:H) thin-film transistors (TFTs) are a prevalent technology for digital x-ray imaging. However, their performance is challenged in applications requiring low exposure levels, high spatial resolution, and high frame rate. Emerging FPD designs using metal oxide TFTs may offer potential performance improvements compared to FPDs based on a-Si:H TFTs. PURPOSE This work investigates the imaging performance of a new indium gallium zinc oxide (IGZO) TFT-based detector in 2D fluoroscopy and 3D cone-beam CT (CBCT). METHODS The new FPD consists of a sensor array combining IGZO TFTs with a-Si:H photodiodes and a 0.7-mm thick CsI:Tl scintillator. The FPD was implemented on an x-ray imaging bench with system geometry emulating intraoperative CBCT. A conventional FPD with a-Si:H TFTs and a 0.6-mm thick CsI:Tl scintillator was similarly implemented as a basis of comparison. 2D imaging performance was characterized in terms of electronic noise, sensitivity, linearity, lag, spatial resolution (modulation transfer function, MTF), image noise (noise-power spectrum, NPS), and detective quantum efficiency (DQE) with entrance air kerma (EAK) ranging from 0.3 to 1.2 μGy. 3D imaging performance was evaluated in terms of the 3D MTF and noise-equivalent quanta (NEQ), soft-tissue contrast-to-noise ratio (CNR), and image quality evident in anthropomorphic phantoms for a range of anatomical sites and dose, with weighted air kerma, K_w, ranging from 0.8 to 4.9 mGy. RESULTS The 2D imaging performance of the IGZO-based FPD exhibited up to ∼1.7× lower electronic noise than the a-Si:H FPD at matched pixel pitch. Furthermore, the IGZO FPD exhibited ∼27% increase in mid-frequency DQE (1 mm⁻¹) at matched pixel size and dose (EAK ≈ 1.0 μGy) and ∼11% increase after adjusting for differences in scintillator thickness. 2D spatial resolution was limited by the scintillator for each FPD. The IGZO-based FPD demonstrated improved 3D NEQ at all spatial frequencies in both head (≥25% increase for all dose levels) and body (≥10% increase for K_w ≤ 2 mGy) imaging scenarios. These characteristics translated to improved low-contrast visualization in anthropomorphic phantoms, demonstrating ≥10% improvement in CNR and extension of the low-dose range for which the detector is input-quantum limited. CONCLUSION The IGZO-based FPD demonstrated improvements in electronic noise, image lag, and NEQ that translated to measurable improvements in 2D and 3D imaging performance compared to a conventional FPD based on a-Si:H TFTs. The improvements are most beneficial for 2D or 3D imaging scenarios involving low dose and/or high frame rates.
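The 2D metrics named above tie together through the standard frequency-dependent definition of DQE, DQE(f) = S² MTF²(f) / (q̄ NPS(f)), with S the mean detector signal and q̄ the incident photon fluence. A minimal sketch under that definition, using an idealized quantum-limited detector (the gain `g` and fluence `q` are illustrative assumptions, not measured values from the paper):

```python
import numpy as np

def dqe(mtf, nps, mean_signal, q):
    """Frequency-dependent detective quantum efficiency:
    DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f))."""
    mtf = np.asarray(mtf, float)
    nps = np.asarray(nps, float)
    return mean_signal**2 * mtf**2 / (q * nps)

# Ideal quantum-limited check: signal S = q*g, NPS = q*g^2, MTF = 1 -> DQE = 1.
q, g = 1e4, 2.5
f = np.linspace(0, 2.5, 6)        # spatial frequencies (mm^-1), illustrative
ideal = dqe(np.ones_like(f), np.full_like(f, q * g**2), q * g, q)
```

An ideal detector with unity MTF and purely quantum noise returns DQE = 1 at all frequencies; real detectors fall below this, which is what the measured DQE curves in the study quantify.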
Affiliation(s)
- Niral Milan Sheth
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
38
Wang A, Cunningham I, Danielsson M, Fahrig R, Flohr T, Hoeschen C, Noo F, Sabol JM, Siewerdsen JH, Tingberg A, Yorkston J, Zhao W, Samei E. Science and practice of imaging physics through 50 years of SPIE Medical Imaging conferences. J Med Imaging (Bellingham) 2022; 9:012205. [PMID: 35309720 DOI: 10.1117/1.jmi.9.s1.012205] [Received: 11/01/2021] [Accepted: 03/01/2022] [Indexed: 11/14/2022]
Abstract
Purpose: For 50 years now, SPIE Medical Imaging (MI) conferences have been the premier forum for disseminating and sharing new ideas, technologies, and concepts on the physics of MI. Approach: Our overarching objective is to demonstrate and highlight the major trajectories of imaging physics and how they are informed by the community and science present and presented at SPIE MI conferences from its inception to now. Results: These contributions range from the development of image science, image quality metrology, and image reconstruction to digital x-ray detectors that have revolutionized MI modalities including radiography, mammography, fluoroscopy, tomosynthesis, and computed tomography (CT). Recent advances in detector technology such as photon-counting detectors continue to enable new capabilities in MI. Conclusion: As we celebrate the past 50 years, we are also excited about what the next 50 years of SPIE MI will bring to the physics of MI.
Affiliation(s)
- Adam Wang
- Stanford University, Department of Radiology, Stanford, California, United States
- Ian Cunningham
- Western University, Robarts Research Institute, London, Ontario, Canada
- Mats Danielsson
- KTH Royal Institute of Technology, Department of Physics, Stockholm, Sweden
- Rebecca Fahrig
- Siemens Healthineers, Forchheim, Germany; Friedrich-Alexander Universität, Department of Computer Science, Erlangen, Germany
- Christoph Hoeschen
- Otto-von-Guericke University, Institute of Medical Engineering, Magdeburg, Germany
- Frederic Noo
- University of Utah, Department of Radiology and Imaging Sciences, Salt Lake City, Utah, United States
- John M Sabol
- Konica Minolta Healthcare Americas, Wayne, New Jersey, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Anders Tingberg
- Lund University, Skåne University Hospital, Department of Translational Medicine, Medical Radiation Physics, Malmö, Sweden
- John Yorkston
- Carestream Health, Rochester, New York, United States
- Wei Zhao
- Stony Brook University, Department of Radiology, Stony Brook, New York, United States
- Ehsan Samei
- Duke University, Department of Radiology, Durham, North Carolina, United States
39
Liu SZ, Zhao C, Herbst M, Weber T, Vogt S, Ritschl L, Kappler S, Siewerdsen JH, Zbijewski W. Feasibility of Dual-Energy Cone-Beam CT of Bone Marrow Edema Using Dual-Layer Flat Panel Detectors. Proc SPIE Int Soc Opt Eng 2022; 12031:120311J. [PMID: 38223908 PMCID: PMC10788135 DOI: 10.1117/12.2613211] [Indexed: 01/16/2024]
Abstract
Purpose We investigated the feasibility of detection and quantification of bone marrow edema (BME) using dual-energy (DE) cone-beam CT (CBCT) with a dual-layer flat panel detector (FPD) and three-material decomposition. Methods A realistic CBCT system simulator was applied to study the impact of detector quantization, scatter, and spectral calibration errors on the accuracy of fat-water-bone decompositions of dual-layer projections. The CBCT system featured 975 mm source-axis distance, 1,362 mm source-detector distance, and a 430 × 430 mm² dual-layer FPD (top layer: 0.20 mm CsI:Tl; bottom layer: 0.55 mm CsI:Tl; a 1 mm Cu filter between the layers to improve spectral separation). Tube settings were 120 kV (+2 mm Al, +0.2 mm Cu) and 10 mAs per exposure. The digital phantom consisted of a 160 mm water cylinder with inserts containing mixtures of water (volume fraction ranging 0.18 to 0.46), fat (0.5 to 0.7), and Ca (0.04 to 0.12); decreasing fractions of fat indicated increasing degrees of BME. A two-stage three-material DE decomposition was applied to DE CBCT projections: first, projection-domain decomposition (PDD) into a fat-aluminum basis, followed by CBCT reconstruction of intermediate basis images, followed by an image-domain change of basis into fat, water, and bone. Sensitivity to scatter was evaluated by i) adjusting source collimation (12 to 400 mm width) and ii) subtracting various fractions of the true scatter from the projections at 400 mm collimation. The impact of spectral calibration was studied by shifting the effective beam energy (±2 keV) when creating the PDD lookup table. We further simulated a realistic BME imaging framework, in which the scatter was estimated using a fast Monte Carlo (MC) simulation from a preliminary decomposition of the object; the object was a realistic wrist phantom with an 0.85 mL BME stimulus in the radius. Results The decomposition is sensitive to scatter: approx. <20 mm collimation width or <10% error of scatter correction in a full field-of-view setting is needed to resolve BME. A mismatch in PDD decomposition calibration of ±1 keV results in ~25% error in fat fraction estimates. In the wrist phantom study with MC scatter corrections, we were able to achieve ~0.79 mL true positive and ~0.06 mL false positive BME detection (compared to 0.85 mL true BME volume). Conclusions Detection of BME using DE CBCT with a dual-layer FPD is feasible, but requires scatter mitigation, accurate scatter estimation, and robust spectral calibration.
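The projection-domain decomposition (PDD) step described above inverts a two-material attenuation model for each dual-layer measurement; in practice this is done with a calibrated lookup table to handle the polyenergetic spectra. A linearized sketch of the idea (the mass-attenuation matrix below is an illustrative placeholder, not calibrated values):

```python
import numpy as np

# Hypothetical effective attenuation coefficients for the two basis
# materials at the top/bottom detector layers; illustrative only.
MU = np.array([[0.30, 0.50],   # low-energy layer:  [fat, aluminum]
               [0.22, 0.33]])  # high-energy layer: [fat, aluminum]

def decompose(line_integrals):
    """Projection-domain two-material decomposition: invert the
    linearized model L = MU @ a for basis area densities a (g/cm^2)."""
    return np.linalg.solve(MU, np.asarray(line_integrals, float))

# Forward-project a known composition, then recover it.
a_true = np.array([8.0, 1.5])   # fat, aluminum area densities
L = MU @ a_true                 # noiseless dual-layer log measurements
a_est = decompose(L)
```

The paper's PDD replaces this fixed matrix with an energy-calibrated lookup table, which is why the ±1 keV spectral-calibration mismatch cited above propagates into fat-fraction error.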
Affiliation(s)
- Stephen Z. Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- Chumin Zhao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205
40
Vagdargi P, Uneri A, Jones CK, Wu P, Han R, Luciano MG, Anderson WS, Helm PA, Hager GD, Siewerdsen JH. Pre-Clinical Development of Robot-Assisted Ventriculoscopy for 3D Image Reconstruction and Guidance of Deep Brain Neurosurgery. IEEE Trans Med Robot Bionics 2022; 4:28-37. [PMID: 35368731 PMCID: PMC8967072 DOI: 10.1109/tmrb.2021.3125322] [Indexed: 02/03/2023]
Abstract
Conventional neuro-navigation can be challenged in targeting deep brain structures via transventricular neuroendoscopy due to unresolved geometric error following soft-tissue deformation. Current robot-assisted endoscopy techniques are fairly limited, primarily serving to execute planned trajectories and provide a stable scope holder. We report the implementation of a robot-assisted ventriculoscopy (RAV) system for 3D reconstruction, registration, and augmentation of the neuroendoscopic scene with intraoperative imaging, enabling guidance even in the presence of tissue deformation and providing visualization of structures beyond the endoscopic field-of-view. Phantom studies were performed to quantitatively evaluate image sampling requirements, registration accuracy, and computational runtime for two reconstruction methods and a variety of clinically relevant ventriculoscope trajectories. A median target registration error of 1.2 mm was achieved with an update rate of 2.34 frames per second, validating the RAV concept and motivating translation to future clinical studies.
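Target registration error (TRE), the headline metric above, is the residual distance at target points after applying the estimated transform. A minimal sketch using a least-squares rigid (Kabsch) registration on synthetic points (the geometry is illustrative; the RAV system estimates its registration from reconstructed 3D scene points):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (Kabsch) alignment mapping paired 3D points src -> dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((dst - cd).T @ (src - cs))   # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))                    # guard against reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = cd - R @ cs
    return R, t

def target_registration_error(R, t, targets, targets_true):
    """TRE: distance between mapped targets and their true positions."""
    mapped = np.asarray(targets) @ R.T + t
    return np.linalg.norm(mapped - np.asarray(targets_true), axis=1)

# Synthetic check: recover a known rotation + translation.
rng = np.random.default_rng(1)
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([5., -2., 3.])
pts = rng.uniform(-50, 50, (8, 3))
R, t = rigid_register(pts, pts @ R_true.T + t_true)
tre = target_registration_error(R, t, pts, pts @ R_true.T + t_true)
```

With noiseless correspondences the TRE is at machine precision; reported clinical TRE reflects detection, reconstruction, and deformation error sources.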
Affiliation(s)
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Craig K. Jones
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Mark G. Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, USA
- Gregory D. Hager
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering and Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
41
Sisniega A, Lu A, Huang H, Zbijewski W, Unberath M, Siewerdsen JH, Weiss CR. Targeted Deformable Motion Compensation for Vascular Interventional Cone-Beam CT Imaging. Proc SPIE Int Soc Opt Eng 2022; 12031:120311H. [PMID: 36381563 PMCID: PMC9654751 DOI: 10.1117/12.2613232] [Indexed: 06/16/2023]
Abstract
Purpose Cone-beam CT has become commonplace for 3D guidance in interventional radiology (IR), especially for vascular procedures in which identification of small vascular structures is crucial. However, its long image acquisition time poses a limit to image quality due to soft-tissue deformable motion that hampers visibility of small vessels. Autofocus motion compensation has shown promising potential for soft-tissue deformable motion compensation, but it is not targeted to the imaging task. This work presents an approach for deformable motion compensation targeted at imaging of vascular structures. Methods The proposed method consists of a two-stage framework for: i) identification of contrast-enhanced blood vessels in 2D projection data and delineation of an approximate region covering the vascular target in the volume space, and ii) a novel autofocus approach including a metric designed to promote the presence of vascular structures, acting solely in the region of interest. The vesselness of the image is quantified via evaluation of the properties of the 3D image Hessian, yielding a vesselness filter that gives larger values to voxels likely to belong to a tubular structure. A cost metric is designed to promote large vesselness values and spatial sparsity, as expected in regions of fine vascularity. A targeted autofocus method was designed by combining the presented metric with a conventional autofocus term acting outside of the region of interest. The resulting method was evaluated on simulated data including synthetic vascularity merged with real anatomical features obtained from MDCT data. Further evaluation was performed on two clinical datasets acquired during TACE procedures with a robotic C-arm (Artis Zeego, Siemens Healthineers). Results The targeted vascular autofocus effectively restored the shape and contrast of the contrast-enhanced vascularity in the simulation cases, resulting in improved visibility and reduced artifacts. Segmentations performed with a single threshold value on the target vascular regions yielded a net increase of up to 42% in Dice coefficient computed against the static reference. Motion compensation in clinical datasets resulted in improved visibility of vascular structures, observed in maximum intensity projections of the contrast-enhanced liver vessel tree. Conclusion Targeted motion compensation for vascular imaging showed promising performance for improved identification of small vascular structures in the presence of motion. The development of autofocus metrics and methods tailored to vascular imaging opens the way for reliable compensation of deformable motion while preserving the integrity of anatomical structures in the image.
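The Hessian-based vesselness described in the Methods can be sketched in a Frangi-style form: eigenvalues of the local 3D Hessian are sorted by magnitude, and bright tubular structures are those with one near-zero and two strongly negative eigenvalues. A minimal sketch with a finite-difference Hessian (parameter values are illustrative; the paper's metric additionally promotes spatial sparsity and combines with a conventional autofocus term outside the ROI):

```python
import numpy as np

def hessian_vesselness(vol, alpha=0.5, beta=0.5, c=0.25):
    """Frangi-style vesselness from the 3D image Hessian: bright tubes have
    eigenvalues |l1| ~ 0 and l2, l3 strongly negative."""
    grads = np.gradient(np.asarray(vol, float))
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = gi[j]
    lam = np.linalg.eigvalsh(H)
    lam = np.take_along_axis(lam, np.argsort(np.abs(lam), axis=-1), -1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    eps = 1e-10
    Ra = np.abs(l2) / (np.abs(l3) + eps)               # plate vs line
    Rb = np.abs(l1) / np.sqrt(np.abs(l2 * l3) + eps)   # blob-ness
    S = np.sqrt(l1**2 + l2**2 + l3**2)                 # structure strength
    v = ((1 - np.exp(-Ra**2 / (2 * alpha**2)))
         * np.exp(-Rb**2 / (2 * beta**2))
         * (1 - np.exp(-S**2 / (2 * c**2))))
    v[(l2 > 0) | (l3 > 0)] = 0                         # bright-on-dark vessels only
    return v

# Synthetic check: a bright tube along z scores higher than background.
_, y, x = np.mgrid[0:16, 0:16, 0:16]
tube = np.exp(-((y - 8)**2 + (x - 8)**2) / 4.0)
v = hessian_vesselness(tube)
```

Thresholding such a map (as in the single-threshold segmentations above) yields the vessel masks against which the Dice gains were computed.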
Affiliation(s)
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- A Lu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- H Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- W Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- M Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD USA
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD USA
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA
- C R Weiss
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD USA
42
Nishikawa RM, Deserno TM, Madabhushi A, Krupinski EA, Summers RM, Hoeschen C, Mello-Thoms C, Myers KJ, Kupinski MA, Siewerdsen JH. Fifty years of SPIE Medical Imaging proceedings papers. J Med Imaging (Bellingham) 2022; 9:012207. [PMID: 35761820 DOI: 10.1117/1.jmi.9.s1.012207] [Received: 11/01/2021] [Accepted: 04/12/2022] [Indexed: 11/14/2022]
Abstract
Purpose: To commemorate the 50th anniversary of the first SPIE Medical Imaging meeting, we highlight some of the important publications published in the conference proceedings. Approach: We determined the top cited and downloaded papers. We also asked members of the editorial board of the Journal of Medical Imaging to select their favorite papers. Results: There was very little overlap between the three methods of highlighting papers. The downloads were mostly recent papers, whereas the favorite papers were mostly older papers. Conclusions: The three different methods combined provide an overview of the highlights of the papers published in the SPIE Medical Imaging conference proceedings over the last 50 years.
Affiliation(s)
- Robert M Nishikawa
- University of Pittsburgh, Department of Radiology, Pittsburgh, Pennsylvania, United States
- Thomas M Deserno
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
- Anant Madabhushi
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, United States
- Elizabeth A Krupinski
- Emory University, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Ronald M Summers
- National Institutes of Health, Radiology and Imaging Sciences, Clinical Center, Bethesda, Maryland, United States
- Christoph Hoeschen
- Otto-von-Guericke University Magdeburg, Institute for Medical Technology, Magdeburg, Germany
- Kyle J Myers
- Formerly, U.S. Food and Drug Administration, Silver Spring, Maryland, United States
- Mathew A Kupinski
- The University of Arizona, Wyant College of Optical Sciences and Department of Medical Imaging, Tucson, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
43
Han R, Jones CK, Lee J, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Deformable MR-CT image registration using an unsupervised, dual-channel network for neurosurgical guidance. Med Image Anal 2022; 75:102292. [PMID: 34784539 PMCID: PMC10229200 DOI: 10.1016/j.media.2021.102292] [Received: 05/20/2021] [Revised: 10/22/2021] [Accepted: 10/25/2021] [Indexed: 02/08/2023]
Abstract
PURPOSE The accuracy of minimally invasive, intracranial neurosurgery can be challenged by deformation of brain tissue - e.g., up to 10 mm due to egress of cerebrospinal fluid during neuroendoscopic approach. We report an unsupervised, deep learning-based registration framework to resolve such deformations between preoperative MR and intraoperative CT with fast runtime for neurosurgical guidance. METHODS The framework incorporates subnetworks for MR and CT image synthesis with a dual-channel registration subnetwork (with synthesis uncertainty providing spatially varying weights on the dual-channel loss) to estimate a diffeomorphic deformation field from both the MR and CT channels. End-to-end training is proposed that jointly optimizes both the synthesis and registration subnetworks. The proposed framework was investigated using three datasets: (1) paired MR/CT with simulated deformations; (2) paired MR/CT with real deformations; and (3) a neurosurgery dataset with real deformations. Two state-of-the-art methods (Symmetric Normalization and VoxelMorph) were implemented as a basis of comparison, and variations in the proposed dual-channel network were investigated, including single-channel registration, fusion without uncertainty weighting, and conventional sequential training of the synthesis and registration subnetworks. RESULTS The proposed method achieved: (1) Dice coefficient = 0.82 ± 0.07 and TRE = 1.2 ± 0.6 mm on paired MR/CT with simulated deformations; (2) Dice coefficient = 0.83 ± 0.07 and TRE = 1.4 ± 0.7 mm on paired MR/CT with real deformations; and (3) Dice = 0.79 ± 0.13 and TRE = 1.6 ± 1.0 mm on the neurosurgery dataset with real deformations. The dual-channel registration with uncertainty weighting demonstrated superior performance (e.g., TRE = 1.2 ± 0.6 mm) compared to single-channel registration (TRE = 1.6 ± 1.0 mm, p < 0.05, for the CT channel; TRE = 1.3 ± 0.7 mm for the MR channel) and dual-channel registration without uncertainty weighting (TRE = 1.4 ± 0.8 mm, p < 0.05). End-to-end training of the synthesis and registration subnetworks also improved performance compared to the conventional sequential training strategy (TRE = 1.3 ± 0.6 mm). Registration runtime with the proposed network was ∼3 s. CONCLUSION The deformable registration framework based on dual-channel MR/CT registration with spatially varying weights and end-to-end training achieved geometric accuracy and runtime superior to state-of-the-art baseline methods and various ablations of the proposed network. The accuracy and runtime of the method may be compatible with the requirements of high-precision neurosurgery.
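The Dice coefficient reported above measures volumetric overlap between two segmentations, 2|A∩B| / (|A| + |B|). A minimal sketch on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B|/(|A|+|B|)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping 6x6 box masks with a 4x4 overlap region.
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True    # 36 pixels
b = np.zeros((10, 10), bool); b[4:10, 4:10] = True  # 36 pixels, 16 overlap
d = dice(a, b)   # 2*16 / (36 + 36)
```

In registration studies such as this one, the masks are anatomical labels (e.g., ventricles) in the fixed image and in the warped moving image.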
Affiliation(s)
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States
- J Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD, United States
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- P A Helm
- Medtronic Inc., Littleton, MA, United States
- M Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States
44
Uneri A, Wu P, Jones CK, Vagdargi P, Han R, Helm PA, Luciano MG, Anderson WS, Siewerdsen JH. Deformable 3D-2D registration for high-precision guidance and verification of neuroelectrode placement. Phys Med Biol 2021; 66. [PMID: 34644684 DOI: 10.1088/1361-6560/ac2f89] [Received: 04/21/2021] [Accepted: 10/13/2021] [Indexed: 11/11/2022]
Abstract
Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgery targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT), in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of 3D-2D registration within 2 mm, which facilitated end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.
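Stage (2), initial 3D localization from detections in two projection images, can be sketched with generic linear (DLT) triangulation given the two views' projection matrices (the geometry below is synthetic and illustrative, not the clinical C-arm geometry):

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of a 3D point from its detections u1, u2
    in two views with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with projection matrix P to 2D coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic views: canonical view and a view offset 100 mm in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
X_true = np.array([20.0, -15.0, 400.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In the reported pipeline, points triangulated this way from paired detections initialize the model-based deformable registration to within 2 mm.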
Affiliation(s)
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- C K Jones
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America
- P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
- P A Helm
- Medtronic, Littleton, MA 01460, United States of America
- M G Luciano
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- W S Anderson
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, United States of America; Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD 21287, United States of America
45
Zhao C, Herbst M, Weber T, Luckner C, Vogt S, Ritschl L, Kappler S, Siewerdsen JH, Zbijewski W. Slot-scan dual-energy bone densitometry using motorized X-ray systems. Med Phys 2021; 48:6673-6695. [PMID: 34628651 DOI: 10.1002/mp.15272] [Received: 06/06/2021] [Revised: 08/31/2021] [Accepted: 09/24/2021] [Indexed: 11/12/2022]
Abstract
PURPOSE We investigate the feasibility of slot-scan dual-energy (DE) bone densitometry on motorized radiographic equipment. This approach will enable fast quantitative measurements of areal bone mineral density (aBMD) for opportunistic evaluation of osteoporosis. METHODS We investigated DE slot-scan protocols to obtain aBMD measurements at the lumbar spine (L-spine) and hip using a motorized x-ray platform capable of synchronized translation of the x-ray source and flat-panel detector (FPD). The slot dimension was 5 × 20 cm2 . The DE slot views were processed as follows: (1) convolution kernel-based scatter correction, (2) unfiltered backprojection to tile the slots into long-length radiographs, and (3) projection-domain DE decomposition, consisting of an initial adipose-water decomposition in a bone-free region followed by water-CaHA decomposition with adjustment for adipose content. The accuracy and reproducibility of slot-scan aBMD measurements were investigated using a high-fidelity simulator of a robotic x-ray system (Siemens Multitom Rax) in a total of 48 body phantom realizations: four average bone density settings (cortical bone mass fraction: 10-40%), four body sizes (waist circumference, WC = 70-106 cm), and three lateral shifts of the body within the slot field of view (FOV) (centered and ±1 cm off-center). Experimental validations included: (1) x-ray test-bench feasibility study of adipose-water decomposition and (2) initial demonstration of slot-scan DE bone densitometry on the robotic x-ray system using the European Spine Phantom (ESP) with added attenuation (polymethyl methacrylate [PMMA] slabs) ranging 2 to 6 cm thick. RESULTS For the L-spine, the mean aBMD error across all WC settings ranged from 0.08 g/cm2 for phantoms with average cortical bone fraction wcortical = 10% to ∼0.01 g/cm2 for phantoms with wcortical = 40%. 
The L-spine aBMD measurements were fairly robust to changes in body size and positioning; e.g., the coefficient of variation (CV) for L1 with w_cortical = 30% was ∼0.034 across WC settings and ∼0.02 across lateral shifts for an obese patient (WC = 106 cm). For the hip, the mean aBMD error across all phantom configurations was about 0.07 g/cm² for a centered patient. The reproducibility of hip aBMD was slightly worse than in the L-spine (e.g., in the femoral neck, the CV with respect to changing WC was ∼0.13 for phantom realizations with w_cortical = 30%) due to more challenging scatter estimation in the presence of an air-tissue interface within the slot FOV. The aBMD of the hip was therefore sensitive to lateral positioning of the patient, especially for obese patients: e.g., the CV with respect to patient lateral shift for the femoral neck with WC = 106 cm and w_cortical = 30% was 0.14. Empirical evaluations confirmed substantial reduction in aBMD errors with the proposed adipose estimation procedure and demonstrated robust aBMD measurements on the robotic x-ray system, with aBMD errors of ∼0.1 g/cm² across all three simulated ESP vertebrae and all added PMMA attenuator settings. CONCLUSIONS We demonstrated that accurate aBMD measurements can be obtained on a motorized FPD-based x-ray system using DE slot-scans with kernel-based scatter correction, backprojection-based slot view tiling, and DE decomposition with adipose correction.
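The projection-domain DE decomposition summarized in this abstract (a two-material basis inversion of low- and high-energy log-attenuation line integrals) can be illustrated with a minimal numerical sketch. The effective attenuation coefficients in `MU` are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical effective mass attenuation coefficients (cm^2/g) for the
# low- and high-energy spectra, in a [water, CaHA] basis.
MU = np.array([[0.25, 0.35],   # low-kV:  [water, CaHA]
               [0.18, 0.20]])  # high-kV: [water, CaHA]

def de_decompose(line_low, line_high):
    """Invert the 2x2 system  [l_low, l_high]^T = MU @ [a_water, a_caha]^T
    per pixel, recovering basis-material area densities (g/cm^2)."""
    lines = np.stack([line_low, line_high])   # shape (2, N)
    return np.linalg.solve(MU, lines)         # shape (2, N): water, CaHA

# Forward-simulate a pixel with 20 g/cm^2 water and 1.0 g/cm^2 CaHA,
# then check that the decomposition recovers it.
truth = np.array([[20.0], [1.0]])
lines = MU @ truth
water, caha = de_decompose(lines[0], lines[1])
print(round(float(caha[0]), 3))  # → 1.0 (recovered CaHA areal density)
```

In the paper the decomposition additionally corrects for adipose content estimated in a bone-free region; the sketch above shows only the core basis inversion.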
Affiliation(s)
- Chumin Zhao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Radiology, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
46
Zhang X, Zbijewski W, Huang Y, Uneri A, Jones CK, Lo SFL, Witham TF, Luciano M, Anderson WS, Helm PA, Siewerdsen JH. Intraoperative cone-beam and slot-beam CT: 3D image quality and dose with a slot collimator on the O-arm imaging system. Med Phys 2021; 48:6800-6809. [PMID: 34519364 PMCID: PMC10174643 DOI: 10.1002/mp.15221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/01/2021] [Revised: 08/09/2021] [Accepted: 08/31/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE To characterize the 3D imaging performance and radiation dose for a prototype slot-beam configuration on an intraoperative O-arm™ Surgical Imaging System (Medtronic Inc., Littleton, MA) and identify potential improvements in soft-tissue image quality for surgical interventions. METHODS A slot collimator was integrated with the O-arm™ system for slot-beam axial CT. The collimator can be automatically actuated to provide 1.2° slot-beam longitudinal collimation. Cone-beam and slot-beam configurations were investigated with and without an antiscatter grid (12:1 grid ratio, 60 lines/cm). Dose, scatter, image noise, and soft-tissue contrast resolution were evaluated in quantitative phantoms for head and body configurations over a range of exposure levels (beam energy and mAs), with reconstruction performed via filtered-backprojection. Qualitative imaging performance across various anatomical sites and imaging tasks was assessed with anthropomorphic head, abdomen, and pelvis phantoms. RESULTS The dose for a slot-beam scan varied from 0.02-0.06 mGy/mAs for head protocols to 0.01-0.03 mGy/mAs for body protocols, yielding dose reduction by ∼1/5 to 1/3 compared to cone-beam, owing to beam collimation and reduced x-ray scatter. The slot-beam provided an ∼6-7× reduction in scatter-to-primary ratio (SPR) compared to the cone-beam, yielding SPR ∼20-80% for head and body without the grid and ∼7-30% with the grid. Compared to cone-beam scans at equivalent dose, slot-beam images exhibited an ∼2.5× increase in soft-tissue contrast-to-noise ratio (CNR) for both grid and gridless configurations. For slot-beam scans, a further ∼10-30% improvement in CNR was achieved when the grid was removed. 
Slot-beam imaging could benefit certain interventional scenarios in which improved visualization of soft tissues is required within a fairly narrow longitudinal region of interest (±7 mm in z); for example, checking the completeness of tumor resection, preservation of adjacent anatomy, or detection of complications (e.g., hemorrhage). While preserving existing capabilities for fluoroscopy and cone-beam CT, slot-beam scanning could enhance the utility of intraoperative imaging and provide a useful mode for safety and validation checks in image-guided surgery. CONCLUSIONS The 3D imaging performance and dose of a prototype slot-beam CT configuration on the O-arm™ system were investigated. Substantial improvements in soft-tissue image quality and reduction in radiation dose are evident with the slot-beam configuration due to reduced x-ray scatter.
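The soft-tissue contrast-to-noise ratio (CNR) figure of merit used throughout this abstract is computed from two regions of interest in the reconstruction. A minimal sketch with simulated (hypothetical) ROI values, not data from the paper:

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between two ROIs (arrays of voxel values):
    absolute difference of ROI means divided by background noise (std)."""
    contrast = abs(np.mean(roi_signal) - np.mean(roi_background))
    noise = np.std(roi_background)
    return contrast / noise

rng = np.random.default_rng(0)
# Hypothetical soft-tissue ROIs (HU): a 40 HU lesion against 0 HU background,
# both with ~15 HU image noise.
lesion = rng.normal(40.0, 15.0, 1000)
background = rng.normal(0.0, 15.0, 1000)
value = cnr(lesion, background)
print(f"CNR = {value:.2f}")
```

Reducing scatter (as with the slot collimator) lowers the additive offset and noise in the reconstruction, which raises this ratio at matched dose.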
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Craig K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland, USA
- Sheng-Fu L Lo
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
- Mark Luciano
- Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Neurosurgery, Johns Hopkins Medical Institute, Baltimore, Maryland, USA
47
Huang Y, Uneri A, Jones CK, Zhang X, Ketcha MD, Aygun N, Helm PA, Siewerdsen JH. 3D vertebrae labeling in spine CT: an accurate, memory-efficient (Ortho2D) framework. Phys Med Biol 2021; 66. [PMID: 34082413 DOI: 10.1088/1361-6560/ac07c7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 02/18/2021] [Accepted: 06/03/2021] [Indexed: 11/11/2022]
Abstract
Purpose. Accurate localization and labeling of vertebrae in computed tomography (CT) is an important step toward more quantitative, automated diagnostic analysis and surgical planning. In this paper, we present a framework (called Ortho2D) for vertebral labeling in CT in a manner that is accurate and memory-efficient. Methods. Ortho2D uses two independent Faster R-CNN (region-based convolutional neural network) detectors to detect and classify vertebrae in orthogonal (sagittal and coronal) CT slices. The 2D detections are clustered in 3D to localize vertebrae centroids in the volumetric CT and classify the region (cervical, thoracic, lumbar, or sacral) and vertebral level. A post-process sorting method incorporates the confidence in network output to refine classifications and reduce outliers. Ortho2D was evaluated on a publicly available dataset containing 302 normal and pathological spine CT images with and without surgical instrumentation. Labeling accuracy and memory requirements were assessed in comparison to other recently reported methods. The memory efficiency of Ortho2D permitted extension to high-resolution CT to investigate the potential for further boosts to labeling performance. Results. Ortho2D achieved overall vertebrae detection accuracy of 97.1%, region identification accuracy of 94.3%, and individual vertebral level identification accuracy of 91.0%. The framework achieved 95.8% and 83.6% level identification accuracy in images without and with surgical instrumentation, respectively. Ortho2D met or exceeded the performance of previously reported 2D and 3D labeling methods and reduced memory consumption by a factor of ∼50 (at 1 mm voxel size) compared to a 3D U-Net, allowing extension to higher resolution datasets than normally afforded. The accuracy of level identification increased from 80.1% (for standard/low-resolution CT) to 95.1% (for high-resolution CT). Conclusions.
The Ortho2D method achieved vertebrae labeling performance that is comparable to other recently reported methods with significant reduction in memory consumption, permitting further performance boosts via application to high-resolution CT.
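The core idea behind Ortho2D's memory efficiency, clustering independent 2D detections from orthogonal slices into 3D centroids, can be sketched in a toy form. The matching rule (same label, nearby longitudinal coordinate) and all coordinates below are illustrative assumptions, not the paper's actual clustering algorithm:

```python
# Toy sketch: sagittal boxes give (y, z) centers and coronal boxes give
# (x, z) centers; detections with matching labels and nearby z are fused
# into full 3D centroids (x, y, z).
def fuse_orthogonal(sagittal, coronal, z_tol=5.0):
    """sagittal: [(label, y, z)], coronal: [(label, x, z)] -> [(label, x, y, z)]"""
    fused = []
    for lab_s, y, z_s in sagittal:
        for lab_c, x, z_c in coronal:
            if lab_s == lab_c and abs(z_s - z_c) <= z_tol:
                fused.append((lab_s, x, y, (z_s + z_c) / 2))
    return fused

# Hypothetical detections (coordinates in mm) for two lumbar levels.
sag = [("L1", 120.0, 300.0), ("L2", 118.0, 330.0)]
cor = [("L1", 250.0, 302.0), ("L2", 251.0, 329.0)]
print(fuse_orthogonal(sag, cor))
# → [('L1', 250.0, 120.0, 301.0), ('L2', 251.0, 118.0, 329.5)]
```

Because the networks only ever see 2D slices, memory scales with slice size rather than volume size, which is what permits the extension to high-resolution CT reported above.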
Affiliation(s)
- Y Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore MD, United States of America
- X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- M D Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
- N Aygun
- Department of Radiology, Johns Hopkins University, Baltimore MD, United States of America
- P A Helm
- Medtronic Inc., Littleton MA, United States of America
- J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore MD, United States of America; Department of Radiology, Johns Hopkins University, Baltimore MD, United States of America
48
Vijayan RC, Han R, Wu P, Sheth NM, Ketcha MD, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Development of a fluoroscopically guided robotic assistant for instrument placement in pelvic trauma surgery. J Med Imaging (Bellingham) 2021; 8:035001. [PMID: 34124283 PMCID: PMC8189698 DOI: 10.1117/1.jmi.8.3.035001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 11/03/2020] [Accepted: 05/21/2021] [Indexed: 11/14/2022]
Abstract
Purpose: A method for fluoroscopic guidance of a robotic assistant is presented for instrument placement in pelvic trauma surgery. The solution uses fluoroscopic images acquired in standard clinical workflow and helps avoid repeat fluoroscopy commonly performed during implant guidance. Approach: Images acquired from a mobile C-arm are used to perform 3D-2D registration of both the patient (via patient CT) and the robot (via a CAD model of a surgical instrument attached to its end effector, e.g., a drill guide), guiding the robot to target trajectories defined in the patient CT. The proposed approach avoids C-arm gantry motion, instead manipulating the robot to acquire disparate views of the instrument. Phantom and cadaver studies were performed to determine operating parameters and assess the accuracy of the proposed approach in aligning a standard drill guide instrument. Results: The proposed approach achieved average drill guide tip placement accuracy of 1.57 ± 0.47 mm and angular alignment of 0.35 ± 0.32 deg in phantom studies. The errors remained within 2 mm and 1 deg in cadaver experiments, comparable to the margins of error provided by surgical trackers (but operating without the need for external tracking). Conclusions: By operating at a fixed fluoroscopic perspective and eliminating the need for encoded C-arm gantry movement, the proposed approach simplifies and expedites the registration of image-guided robotic assistants and can be used with simple, non-calibrated, non-encoded, and non-isocentric C-arm systems to accurately guide a robotic device in a manner that is compatible with the surgical workflow.
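The 3D-2D registration described above works by projecting known 3D geometry through the C-arm's projective model and comparing against the fluoroscopic image. A minimal sketch of the projection step and a projection-distance error, with an idealized, hypothetical 3x4 projection matrix (not the paper's calibration):

```python
import numpy as np

def project(P, pts3d):
    """Apply a 3x4 projection matrix to Nx3 world points -> Nx2 detector points."""
    homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # to homogeneous coords
    uvw = (P @ homog.T).T
    return uvw[:, :2] / uvw[:, 2:3]                       # perspective divide

# Hypothetical idealized C-arm view: focal length 1000 px, points ~500 units
# from the source along the principal ray.
f = 1000.0
P = np.array([[f, 0.0, 0.0, 0.0],
              [0.0, f, 0.0, 0.0],
              [0.0, 0.0, 1.0, 500.0]])

target = np.array([[10.0, -5.0, 0.0]])    # planned trajectory point (mm)
estimate = np.array([[10.5, -5.0, 0.0]])  # registered estimate (mm)
pde = float(np.linalg.norm(project(P, target) - project(P, estimate)))
print(f"projection distance error: {pde:.2f} px")  # → 1.00 px
```

Registration then amounts to optimizing the pose parameters inside such a projection until the projected model best matches the acquired fluoroscopic views.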
Affiliation(s)
- Rohan C. Vijayan
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Runze Han
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Pengwei Wu
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Niral M. Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Michael D. Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Greg M. Osgood
- Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
49
Wu P, Boone JM, Hernandez AM, Mahesh M, Siewerdsen JH. Theory, method, and test tools for determination of 3D MTF characteristics in cone-beam CT. Med Phys 2021; 48:2772-2789. [PMID: 33660261 DOI: 10.1002/mp.14820] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/09/2020] [Revised: 02/19/2021] [Accepted: 02/23/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE The modulation transfer function (MTF) is widely used as an objective metric of spatial resolution of medical imaging systems. Despite advances in capability for three-dimensional (3D) isotropic spatial resolution in computed tomography (CT) and cone-beam CT (CBCT), MTF evaluation for such systems is typically reported only in the axial plane, and practical methodology for assessment of fully 3D spatial resolution characteristics is lacking. This work reviews fundamental theoretical relationships of two-dimensional (2D) and 3D spread functions and reports practical methods and test tools for analysis of 3D MTF in CBCT. METHODS Fundamental aspects of 2D and 3D MTF measurement are reviewed within a common notational framework, and three MTF test tools with analysis code are reported and made available online (https://istar.jhu.edu/downloads/): (a) a multi-wire tool for measurement of the axial-plane MTF [denoted MTF(f_r; φ = 0°), where φ is the measurement angle out of the axial plane] as a function of position in the axial plane; (b) a wedge tool for measurement of the MTF in any direction in the 3D Fourier domain [e.g., φ = 45°, denoted MTF(f_r; φ = 45°)]; and (c) a sphere tool for measurement of the MTF in any or all directions in the 3D Fourier domain. Experiments were performed on a mobile C-arm with CBCT capability, showing that MTF(f_r; φ = 45°) yields an informative one-dimensional (1D) representation of the overall 3D spatial resolution characteristics, capturing important characteristics of the 3D MTF that might be missed in conventional analysis. The effects of anisotropic filters and detector readout mode were investigated, and the extent to which a system can be said to provide "isotropic" resolution was evaluated by quantitative comparison of the MTF at various φ.
RESULTS All three test tools provided consistent measurement of MTF(f_r; φ = 0°), and the wedge and sphere tools demonstrated how measurement of the MTF in directions outside the axial plane (φ > 0°) can reveal spatial resolution characteristics to which conventional axial MTF measurement is blind. The wedge tool was shown to reduce statistical measurement error compared to the sphere tool due to improved sampling, and the sphere tool was shown to provide a basis for measurement of the MTF in any or all directions (outside the null cone) from a single scan. The C-arm system exhibited non-isotropic spatial resolution with conventional non-isotropic 1D apodization filters (i.e., frequency cutoff filters), which is common in CBCT, and implementation of isotropic 2D apodization yielded quantifiably isotropic MTF. Asymmetric pixel binning modes were similarly shown to impart non-isotropic effects on the 3D MTF, and the overall 3D MTF characteristics were evident in each case with a single 1D measurement of MTF(f_r; φ = 45°). CONCLUSION Three test tools and corresponding MTF analysis methods were presented within a consistent framework for analysis of 3D spatial resolution characteristics in a manner amenable to routine, practical measurements. Experiments on a CBCT C-arm validated many intuitive aspects of 3D spatial resolution and quantified the extent to which a CBCT system may be considered to have isotropic resolution. Measurement of MTF(f_r; φ = 45°) provided a practical 1D measure of the underlying 3D MTF characteristics and is extensible to other CT or CBCT systems offering high, isotropic spatial resolution.
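The standard computation underlying such MTF measurements, differentiating an oversampled edge spread function (ESF) to obtain the line spread function (LSF) and taking the normalized magnitude of its Fourier transform, can be sketched on a synthetic Gaussian-blurred edge (the 0.5 mm blur width is a made-up example, not a value from the paper):

```python
import numpy as np

def mtf_from_esf(esf, dx):
    """ESF -> MTF: differentiate to obtain the LSF, then take the
    normalized magnitude of its Fourier transform."""
    lsf = np.gradient(esf, dx)                 # numerical derivative of the edge
    mag = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)    # spatial frequencies (1/mm)
    return freqs, mag / mag[0]                 # normalize so MTF(0) = 1

# Synthetic edge blurred by a Gaussian system response (sigma = 0.5 mm).
dx = 0.5                                       # sample spacing (mm)
x = np.arange(-32, 32, dx)
esf = np.cumsum(np.exp(-x**2 / (2 * 0.5**2)))  # integral of a Gaussian LSF
freqs, mtf = mtf_from_esf(esf, dx)
f10 = freqs[np.argmax(mtf < 0.10)]             # first frequency where MTF < 10%
print(f"f10 ≈ {f10:.2f} mm^-1")
```

Note that the finite-difference derivative itself attenuates high frequencies, so a careful implementation would correct for (or at least account for) that sampling effect; the f_10 metric printed here is the same 10%-cutoff summary quoted in the abstracts below.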
Affiliation(s)
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- John M Boone
- Department of Radiology, University of California, Davis, Davis, CA, 95616, USA
- Andrew M Hernandez
- Department of Radiology, University of California, Davis, Davis, CA, 95616, USA
- Mahadevappa Mahesh
- Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA; Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
50
Hernandez AM, Wu P, Mahesh M, Siewerdsen JH, Boone JM. Location and direction dependence in the 3D MTF for a high-resolution CT system. Med Phys 2021; 48:2760-2771. [PMID: 33608927 DOI: 10.1002/mp.14789] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 10/16/2020] [Revised: 12/23/2020] [Accepted: 02/09/2021] [Indexed: 11/06/2022]
Abstract
PURPOSE The purpose of this study was to quantify location- and direction-dependent variations in the 3D modulation transfer function (MTF) of a high-resolution CT scanner with selectable focal spot sizes and resolution modes. METHODS The Aquilion Precision CT scanner (Canon Medical Systems) has selectable 0.25 mm or 0.5 mm detectors (by binning) in both the axial (x-y) and detector array width (z) directions. For the x-y and z orientations, detectors are configured (x-y) = 0.5 mm/(z) = 0.5 mm for normal resolution (NR), 0.25/0.5 mm for high resolution (HR), and 0.25/0.25 mm for super high resolution (SHR). Six focal spots (FS1-FS6) range in size from 0.4 (x-y) × 0.5 mm (z) for FS1 to 1.6 × 1.4 mm for FS6. Phantoms fabricated from spherical objects were positioned at radial distances of 0, 4.0, 7.5, 11.0, 14.5, and 18.5 cm. Axial and helical acquisitions were utilized and reconstructed using filtered backprojection with the FC18 "Body," FC30 "Bone," and FC81 "Bone Sharp" kernels. The reconstructions were used to measure a 1D slice of the 3D MTF by oversampling the 3D ESF in the axial plane [MTF(f_r; φ = 0°)], 45° out of the axial plane [MTF(f_r; φ = 45°)], in the longitudinal direction [MTF(f_r; φ = 80°)], and along the radial and azimuthal directions within the axial plane. RESULTS The MTF(f_r; φ = 45°) drops to 10% (f_10) at 1.20, 1.45, and 2.06 mm⁻¹ for NR, HR, and SHR, respectively, for a helical acquisition with FS1, FC30, and r = 4 cm from the isocenter. The MTF(f_r; φ = 45°) includes contributions of both the axial-plane MTF (f_10 = 1.10, 2.04, and 2.01 mm⁻¹) and the longitudinal MTF (f_10 = 1.17, 1.18, and 1.82 mm⁻¹) for the NR, HR, and SHR modes, respectively. For SHR, the axial scan mode showed a 15-25% improvement over helical mode in longitudinal resolution. Helical pitch, ranging from 0.569 to 1.381, did not appreciably affect the 3D resolution (<2%).
The radial MTFs across the axial field of view (FOV) showed dependencies on the focal spot length in z; for example, for SHR with FS2 (0.6 × 0.6 mm), f_10 at r = 11 cm was within 17% of the value at r = 4 cm, but for SHR with FS3 (0.6 × 1.3 mm), the reduction in f_10 was 46% from 4 to 11 cm from the isocenter. The azimuthal MTF also decreased as r increased, but less so for longer gantry rotation times and smaller focal spot dimensions in the axial plane. The longitudinal MTF was minimally affected (<11%) by position in the FOV and was principally affected by the focal spot length in the z-dimension. CONCLUSIONS The 3D MTF was measured throughout the FOV of a high-resolution CT scanner, quantifying the advantages of different resolution modes and focal spot sizes on the axial-plane and longitudinal MTF. Reconstruction kernels were shown to impact axial-plane resolution, imparting non-isotropic 3D resolution characteristics. Focal spot size (both in x-y and in z) and gantry rotation time play important roles in preserving the high-resolution characteristics throughout the field of view for this new high-resolution CT scanner technology.
Affiliation(s)
- Andrew M Hernandez
- Department of Radiology, University of California Davis, Sacramento, CA, 95817, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Mahadevappa Mahesh
- Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA; Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, 21205, USA
- John M Boone
- Department of Radiology, University of California Davis, Sacramento, CA, 95817, USA