1
Burton W, Myers C, Stefanovic M, Shelburne K, Rullkoetter P. Scan-Free and Fully Automatic Tracking of Native Knee Anatomy from Dynamic Stereo-Radiography with Statistical Shape and Intensity Models. Ann Biomed Eng 2024;52:1591-1603. PMID: 38558356. DOI: 10.1007/s10439-024-03473-5. Received 12/06/2023; accepted 02/09/2024.
Abstract
Kinematic tracking of native anatomy from stereo-radiography provides a quantitative basis for evaluating human movement. Conventional tracking procedures require significant manual effort and call for acquisition and annotation of subject-specific volumetric medical images. The current work introduces a framework for fully automatic tracking of native knee anatomy from dynamic stereo-radiography which forgoes reliance on volumetric scans. The method consists of three computational steps. First, captured radiographs are annotated with segmentation maps and anatomic landmarks using a convolutional neural network. Next, a non-convex polynomial optimization problem formulated from annotated landmarks is solved to acquire preliminary anatomy and pose estimates. Finally, a global optimization routine is performed for concurrent refinement of anatomy and pose. An objective function is maximized which quantifies similarities between masked radiographs and digitally reconstructed radiographs produced from statistical shape and intensity models. The proposed framework was evaluated against manually tracked trials comprising dynamic activities, and additional frames capturing a static knee phantom. Experiments revealed anatomic surface errors routinely below 1.0 mm in both evaluation cohorts. Median absolute errors of individual bone pose estimates were below 1.0° or 1.0 mm for 15 out of 18 degrees of freedom in both evaluation cohorts. Results indicate that accurate pose estimation of native anatomy from stereo-radiography may be performed with significantly reduced manual effort, and without reliance on volumetric scans.
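The abstract does not specify the similarity metric used in the global refinement step; a common choice for comparing a masked radiograph with a digitally reconstructed radiograph (DRR) is masked normalized cross-correlation. A minimal sketch of such a score (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def masked_ncc(drr, radiograph, mask=None):
    """Normalized cross-correlation between a DRR and a radiograph,
    optionally restricted to a segmentation mask. Returns a value in
    [-1, 1]; a shape/pose optimizer would maximize this score."""
    if mask is not None:
        drr, radiograph = drr[mask], radiograph[mask]
    a = drr - drr.mean()
    b = radiograph - radiograph.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

A global optimizer would evaluate this score for each candidate set of statistical shape/intensity and pose parameters, rendering a fresh DRR per candidate.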
Affiliation(s)
- William Burton
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA.
- Casey Myers
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Margareta Stefanovic
- Department of Electrical and Computer Engineering, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Kevin Shelburne
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Paul Rullkoetter
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA

2
Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024;114:102365. PMID: 38471330. DOI: 10.1016/j.compmedimag.2024.102365. Received 10/04/2023; revised 01/31/2024; accepted 02/22/2024.
Abstract
PURPOSE Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery. METHODS This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging. RESULTS The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation. CONCLUSIONS This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. The proposed framework provides access to the preoperative annotations and planning information during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
Affiliation(s)
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States.

3
Zhu J, Wang C, Zhang Y, Zhan M, Zhao W, Teng S, Lu L, Teng GJ. 3D/2D Vessel Registration Based on Monte Carlo Tree Search and Manifold Regularization. IEEE Trans Med Imaging 2024;43:1727-1739. PMID: 38153820. DOI: 10.1109/tmi.2023.3347896.
Abstract
The augmented intra-operative real-time imaging in vascular interventional surgery, which is generally performed by projecting preoperative computed tomography angiography images onto intraoperative digital subtraction angiography (DSA) images, can compensate for the deficiencies of DSA-based navigation, such as lack of depth information and excessive use of toxic contrast agents. 3D/2D vessel registration is the critical step in image augmentation. A 3D/2D registration method based on vessel graph matching is proposed in this study. For rigid registration, the matching of vessel graphs can be decomposed into continuous states, so 3D/2D vascular registration is formulated as a tree-search problem. The Monte Carlo tree search method is applied to find the optimal vessel matching associated with the highest rigid registration score. For nonrigid registration, we propose a novel vessel deformation model based on manifold regularization. This model incorporates the smoothness constraint of vessel topology into the objective function. Furthermore, we derive simplified gradient formulas that enable fast registration. The proposed technique is evaluated against seven rigid and three nonrigid methods on simulated, algorithmically generated, and manually annotated data across three vascular anatomies: the hepatic artery, coronary artery, and aorta. Our findings show the proposed method's resistance to pose variations, noise, and deformations, outperforming existing methods in terms of registration accuracy and computational efficiency. The proposed method demonstrates average registration errors of 2.14 mm and 0.34 mm for rigid and nonrigid registration, and an average computation time of 0.51 s.
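The paper's deformation model is not reproduced here, but its core idea — penalizing displacement fields that vary sharply along the vessel graph — can be sketched with a graph-Laplacian smoothness term. This is a simplified stand-in for the manifold regularization described above; all names and the path-graph topology are illustrative:

```python
import numpy as np

def path_graph_laplacian(n):
    """Laplacian L = D - A of a chain of n vessel centerline nodes."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def nonrigid_objective(disp, target_disp, L, lam=1.0):
    """Data fidelity plus smoothness of the nodal displacement field.
    `disp` and `target_disp` are (n, 2) arrays of 2D displacements;
    trace(disp^T L disp) equals the sum of squared differences between
    neighboring nodes, so it grows when the field is jagged."""
    data = np.sum((disp - target_disp) ** 2)
    smooth = np.trace(disp.T @ L @ disp)
    return data + lam * smooth
```

With equal data terms, a constant displacement field incurs zero smoothness penalty while an oscillating one is penalized, which is the qualitative behavior the regularizer enforces.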
4
Budge J, Carrell T, Yaqub M, Wafa H, Waltham M, Pilecka I, Kelly J, Murphy C, Palmer S, Wang Y, Clough RE. The ARIA trial protocol: a randomised controlled trial to assess the clinical, technical, and cost-effectiveness of a cloud-based, ARtificially Intelligent image fusion system in comparison to standard treatment to guide endovascular Aortic aneurysm repair. Trials 2024;25:214. PMID: 38528619. DOI: 10.1186/s13063-023-07710-5. Received 07/31/2023; accepted 10/06/2023.
Abstract
BACKGROUND Endovascular repair of aortic aneurysmal disease is established due to perceived advantages in patient survival, reduced postoperative complications, and shorter hospital lengths of stay. High spatial and contrast resolution 3D CT angiography images are used to plan the procedures and inform device selection and manufacture, but in standard care, the surgery is performed using image-guidance from 2D X-ray fluoroscopy with injection of nephrotoxic contrast material to visualise the blood vessels. This study aims to assess the benefit to patients, practitioners, and the health service of a novel image fusion medical device (Cydar EV), which allows this high-resolution 3D information to be available to operators at the time of surgery. METHODS The trial is a multi-centre, open label, two-armed randomised controlled clinical trial of 340 patients, randomised 1:1 to either standard treatment in endovascular aneurysm repair or treatment using Cydar EV, a CE-marked medical device comprising cloud computing, augmented intelligence, and computer vision. The primary outcome is procedural time, with secondary outcomes of procedural efficiency, technical effectiveness, patient outcomes, and cost-effectiveness. Patients with a clinical diagnosis of AAA or TAAA suitable for endovascular repair and able to provide written informed consent will be invited to participate. DISCUSSION This trial is the first randomised controlled trial evaluating advanced image fusion technology in endovascular aortic surgery and is well placed to evaluate the effect of this technology on patient outcomes and cost to the NHS. TRIAL REGISTRATION ISRCTN13832085. Registered December 3, 2021.
Affiliation(s)
- James Budge
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- St George's Vascular Institute, St George's University, London, UK
- Medeah Yaqub
- King's Clinical Trials Unit, King's College London, London, UK
- Hatem Wafa
- Department of Population Health Sciences, King's College London, London, UK
- Izabela Pilecka
- King's Clinical Trials Unit, King's College London, London, UK
- Joanna Kelly
- King's Clinical Trials Unit, King's College London, London, UK
- Caroline Murphy
- King's Clinical Trials Unit, King's College London, London, UK
- Stephen Palmer
- Centre for Health Economics, University of York, York, UK
- Yanzhong Wang
- Department of Population Health Sciences, King's College London, London, UK
- Rachel E Clough
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.

5
Kurniawan A, Alias A, Yusof MYPM, Marya A. Optimization of forensic identification through 3-dimensional imaging analysis of labial tooth surface using open-source software. Imaging Sci Dent 2024;54:63-69. PMID: 38571779. PMCID: PMC10985530. DOI: 10.5624/isd.20230218. Received 09/27/2023; revised 12/30/2023; accepted 01/13/2024.
Abstract
Purpose The objective of this study was to determine the minimum number of teeth in the anterior dental arch that would yield accurate results for individual identification in forensic contexts. Materials and Methods The study involved the analysis of 28 sets of 3-dimensional (3D) point cloud data, focused on the labial surface of the anterior teeth. These datasets were superimposed within each group in both genuine and imposter pairs. Group A incorporated data from the right to the left central incisor, group B from the right to the left lateral incisor, and group C from the right to the left canine. A comprehensive analysis was conducted, including the evaluation of root mean square error (RMSE) values and the distances resulting from the superimposition of dental arch segments. All analyses were conducted using CloudCompare version 2.12.4 (Telecom ParisTech and EDF R&D). Results The distances between genuine pairs in groups A, B, and C displayed an average range of 0.153 to 0.184 mm. In contrast, distances for imposter pairs ranged from 0.338 to 0.522 mm. RMSE values for genuine pairs showed an average range of 0.166 to 0.177, whereas those for imposter pairs ranged from 0.424 to 0.638. A statistically significant difference was observed between the distances of genuine and imposter pairs (P<0.05). Conclusion The exceptional performance observed for the labial surfaces of anterior teeth underscores their potential as a dependable criterion for accurate 3D dental identification. This was achieved by assessing a minimum of 4 teeth.
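The distance and RMSE measures above come from CloudCompare's cloud-to-cloud comparison of superimposed point clouds; the computation behind such a comparison can be approximated in a few lines of NumPy. This is a brute-force sketch suitable only for small clouds (CloudCompare uses spatial indexing), and the function names are illustrative:

```python
import numpy as np

def cloud_to_cloud(source, reference):
    """Nearest-neighbour distance from each source point to the
    reference cloud, plus the RMSE over those distances. Both inputs
    are (n, 3) arrays of already-superimposed points."""
    # All pairwise distances, then the closest reference point per source point.
    d = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2)
    nn = d.min(axis=1)
    return nn, float(np.sqrt(np.mean(nn ** 2)))
```

Genuine pairs (same individual) yield small nearest-neighbour distances after superimposition; imposter pairs yield systematically larger ones, which is the separation the study reports.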
Affiliation(s)
- Arofi Kurniawan
- Department of Forensic Odontology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Aspalilah Alias
- Department of Basic Sciences and Oral Biology, Faculty of Dentistry, Universiti Sains Islam Malaysia, Malaysia
- Mohd Yusmiaidil Putera Mohd Yusof
- Center for Oral and Maxillofacial Diagnostics and Medicine Studies, Faculty of Dentistry, Universiti Teknologi MARA, Sungai Buloh Campus, Sungai Buloh, Malaysia
- Institute of Pathology, Laboratory and Forensic Medicine (I-PPerForM), Universiti Teknologi MARA, Sungai Buloh Campus, Sungai Buloh, Malaysia
- Anand Marya
- Department of Forensic Odontology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Department of Orthodontics, University of Puthisastra Cambodia, Phnom Penh, Cambodia

6
Burton W, Myers C, Stefanovic M, Shelburne K, Rullkoetter P. Fully automatic tracking of native knee kinematics from stereo-radiography with digitally reconstructed radiographs. J Biomech 2024;166:112066. PMID: 38574563. DOI: 10.1016/j.jbiomech.2024.112066. Received 10/26/2023; revised 03/19/2024; accepted 03/25/2024.
Abstract
Precise measurement of joint-level motion from stereo-radiography facilitates understanding of human movement. Conventional procedures for kinematic tracking require significant manual effort and are time intensive. The current work introduces a method for fully automatic tracking of native knee kinematics from stereo-radiography sequences. The framework consists of three computational steps. First, biplanar radiograph frames are annotated with segmentation maps and key points using a convolutional neural network. Next, initial bone pose estimates are acquired by solving a polynomial optimization problem constructed from annotated key points and anatomic landmarks from digitized models. A semidefinite relaxation is formulated to realize the global minimum of the non-convex problem. Pose estimates are then refined by registering computed tomography-based digitally reconstructed radiographs to masked radiographs. A novel rendering method is also introduced which enables generating digitally reconstructed radiographs from computed tomography scans with inconsistent slice widths. The automatic tracking framework was evaluated with stereo-radiography trials manually tracked with model-image registration, and with frames which capture a synthetic leg phantom. The tracking method produced pose estimates which were consistently similar to manually tracked values, and demonstrated pose errors below 1.0° or 1.0 mm for all femur and tibia degrees of freedom in phantom trials. Results indicate the described framework may benefit orthopaedics and biomechanics applications through acceleration of kinematic tracking.
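The paper's renderer is not described in detail here, but the problem it addresses — line integrals through a CT volume whose slices are not uniformly spaced — can be illustrated with a toy parallel-projection DRR that weights each slice by its physical width. This is a sketch under simplifying assumptions (parallel rays along one volume axis, no interpolation or perspective), with illustrative names:

```python
import numpy as np

def parallel_drr(volume, slice_widths_mm, axis=0):
    """Sum attenuation along `axis`, scaling each slice by its width
    so that volumes with inconsistent slice spacing integrate to the
    correct physical path length."""
    w = np.asarray(slice_widths_mm, dtype=float)
    shape = [1] * volume.ndim
    shape[axis] = w.size
    return (volume * w.reshape(shape)).sum(axis=axis)
```

With uniform widths this reduces to an ordinary ray sum times the spacing; with mixed widths, thick slices correctly contribute more attenuation than thin ones.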
Affiliation(s)
- William Burton
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, 80208, CO, USA.
- Casey Myers
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, 80208, CO, USA.
- Margareta Stefanovic
- Department of Electrical and Computer Engineering, University of Denver, 2155 E Wesley Ave, Denver, 80208, CO, USA.
- Kevin Shelburne
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, 80208, CO, USA.
- Paul Rullkoetter
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, 80208, CO, USA.

7
Liawrungrueang W, Cho ST, Sarasombath P, Kim I, Kim JH. Current Trends in Artificial Intelligence-Assisted Spine Surgery: A Systematic Review. Asian Spine J 2024;18:146-157. PMID: 38130042. PMCID: PMC10910143. DOI: 10.31616/asj.2023.0410. Received 12/09/2023; revised 12/12/2023; accepted 12/17/2023.
Abstract
This systematic review summarizes existing evidence and outlines the benefits of artificial intelligence-assisted spine surgery. The popularity of artificial intelligence has grown significantly, demonstrating its benefits in computer-assisted surgery and advancements in spinal treatment. This study adhered to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), a set of reporting guidelines specifically designed for systematic reviews and meta-analyses. The search strategy used Medical Subject Headings (MeSH) terms, including "artificial intelligence" combined with "spine" AND "spinal" filters, restricted to English-language articles published from January 1, 2013, to October 31, 2023. In total, 442 articles fulfilled the first screening criteria. A detailed analysis of those articles identified 220 that matched the criteria, of which 11 met the full inclusion and exclusion criteria and were included in this analysis. Analysis of these studies revealed the types of artificial intelligence-assisted spine surgery. No evidence yet suggests that artificial intelligence-assisted spine surgery yields superior outcomes compared with surgery performed without it. In terms of feasibility, accuracy, safety, and facilitating lower patient radiation exposure compared with standard fluoroscopic guidance, artificial intelligence-assisted spine surgery produced satisfactory and superior outcomes. The incorporation of artificial intelligence with augmented and virtual reality appears promising, with the potential to enhance surgeon proficiency and overall surgical safety.
Affiliation(s)
- Sung Tan Cho
- Department of Orthopaedics, Inje University Ilsan Paik Hospital, Goyang, Korea
- Peem Sarasombath
- Department of Orthopaedics, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Inhee Kim
- Department of Orthopaedics, Police National Hospital, Seoul, Korea
- Jin Hwan Kim
- Department of Orthopaedics, Inje University Ilsan Paik Hospital, Goyang, Korea

8
Gao C, Feng A, Liu X, Taylor RH, Armand M, Unberath M. A Fully Differentiable Framework for 2D/3D Registration and the Projective Spatial Transformers. IEEE Trans Med Imaging 2024;43:275-285. PMID: 37549070. PMCID: PMC10879149. DOI: 10.1109/tmi.2023.3299588.
Abstract
Image-based 2D/3D registration is a critical technique for fluoroscopic guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shape similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and is trained using an innovative double backward gradient-driven loss function. We compare the most popular learning-based pose regression methods in the literature and use the well-established CMAES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE) and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMAES is 4.4 mm with a SR of 65.6% in simulation, and 2.2 mm with a SR of 73.2% in real data. The CMAES SRs without using ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
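ProST's contribution is making the volume projection operator differentiable with respect to pose so registration can be driven by gradients of a learned similarity. The underlying idea of gradient-driven 2D/3D pose recovery can be illustrated on a much simpler toy: recovering a 3D translation from 2D pinhole projections by Gauss-Newton with finite-difference Jacobians. Everything below is a hypothetical stand-in (point projections, not DRRs; no learned network):

```python
import numpy as np

def project(points, t, focal=1000.0):
    """Pinhole projection of (n, 3) points after translating by t."""
    p = points + t
    return focal * p[:, :2] / p[:, 2:3]

def register_translation(points, target_2d, iters=10):
    """Gauss-Newton on the 2D reprojection residuals to recover the
    3D translation that produced `target_2d`."""
    t = np.zeros(3)
    eps = 1e-6
    for _ in range(iters):
        r = (project(points, t) - target_2d).ravel()
        # Finite-difference Jacobian of the residuals w.r.t. t.
        J = np.empty((r.size, 3))
        for i in range(3):
            tp = t.copy()
            tp[i] += eps
            J[:, i] = ((project(points, tp) - target_2d).ravel() - r) / eps
        t = t - np.linalg.solve(J.T @ J, J.T @ r)
    return t
```

The toy converges only when initialized inside the basin of attraction of its loss; the paper's point is precisely that a learned, approximately convex similarity enlarges that basin for intensity-based registration.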
9
Smorenburg SPM, Lely RJ, Smit-Ockeloen I, Yeung KK, Hoksbergen AWJ. Automated image fusion during endovascular aneurysm repair: a feasibility and accuracy study. Int J Comput Assist Radiol Surg 2023;18:1533-1541. PMID: 36719561. PMCID: PMC10363050. DOI: 10.1007/s11548-023-02832-2. Received 07/13/2022; accepted 01/06/2023.
Abstract
PURPOSE Image fusion merges preoperative computed tomography angiography (CTA) with live fluoroscopy during endovascular procedures to function as an overlay 3D roadmap. However, in most current systems, the registration between imaging modalities is performed manually by vertebral column matching, which can be subjective, inaccurate and time consuming depending on experience. Our objective was to evaluate feasibility and accuracy of image-based automated 2D-3D image fusion between preoperative CTA and intraoperative fluoroscopy based on vertebral column matching. METHODS A single-center study with offline procedure data was conducted in 10 consecutive patients who had endovascular aortic repair, in which we evaluated unreleased automated fusion software provided by Philips (Best, the Netherlands). Fluoroscopy and digital subtraction angiography images were collected after the procedures and the vertebral column was fused fully automatically. Primary endpoints were feasibility and accuracy of bone alignment (mm). Secondary endpoint was vascular alignment (mm) between the lowest renal artery orifices. Clinical non-inferiority was defined at a mismatch of < 1 mm. RESULTS In total, 87 automated measurements and 40 manual measurements were performed on vertebrae T12-L5 in all 10 patients. Manual correction was needed in 3 of the 10 patients due to incomplete visibility of the vertebral edges in the fluoroscopy image. Median difference between automated fusion and manual fusion was 0.1 mm for bone alignment (p = 0.94). The vascular alignment was 4.9 mm (0.7-17.5 mm) for manual and 5.5 mm (1.0-14.0 mm) for automated fusion. Vascular alignment did not improve, owing to the presence of stiff wires and the stent graft. CONCLUSION Automated image fusion was feasible when all vertebral edges were visible. Accuracy was non-inferior to manual image fusion regarding bone alignment. Future developments should focus on intraoperative image-based correction of vascular alignment.
Affiliation(s)
- Stefan P M Smorenburg
- Department of Surgery, Amsterdam University Medical Centers, Vrije Universiteit, Room J1A-222, Postbox 22660, 1100 DD, Amsterdam, The Netherlands.
- Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands.
- Rutger J Lely
- Department of Radiology, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, The Netherlands
- Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands
- Kak Khee Yeung
- Department of Surgery, Amsterdam University Medical Centers, Vrije Universiteit, Room J1A-222, Postbox 22660, 1100 DD, Amsterdam, The Netherlands
- Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands
- Arjan W J Hoksbergen
- Department of Surgery, Amsterdam University Medical Centers, Vrije Universiteit, Room J1A-222, Postbox 22660, 1100 DD, Amsterdam, The Netherlands
- Amsterdam Cardiovascular Sciences, Amsterdam, The Netherlands

10
Patel RJ, Lee AM, Hallsten J, Lane JS, Barleben AR, Malas MB. Use of surgical augmented intelligence maps can reduce radiation and improve safety in the endovascular treatment of complex aortic aneurysms. J Vasc Surg 2023;77:982-990.e2. PMID: 36581011. DOI: 10.1016/j.jvs.2022.12.033. Received 10/07/2022; revised 12/08/2022; accepted 12/11/2022.
Abstract
OBJECTIVE The introduction of endovascular procedures has revolutionized the management of complex aortic aneurysms. Although repair has traditionally required longer operative times and increased radiation exposure compared with simple endovascular aneurysm repair, the recent introduction of three-dimensional technology has become an invaluable operative adjunct. Surgical augmented intelligence (AI) is a rapidly evolving tool initiated at our institution in June 2019. In our study, we sought to determine whether this technology improved patient and operator safety. METHODS A retrospective review of patients who had undergone endovascular repair of complex aortic aneurysms (pararenal, juxtarenal, or thoracoabdominal), type B dissection, or infrarenal (endoleak, coil placement, or renal angiography with or without intervention) at a tertiary care center from August 2015 to November 2021 was performed. Patients were stratified according to the findings from intelligent maps, which are patient-specific AI tools used in the operating room in conjunction with real-time fluoroscopic images. The primary outcomes included operative time, radiation exposure, fluoroscopy time, and contrast use. The secondary outcomes included 30-day postoperative complications and long-term follow-up. Linear regression models were used to evaluate the association between AI use and the main outcomes. RESULTS During the 6-year period, 116 patients were included in the present study, with no significant differences in the baseline characteristics. Of the 116 patients, 76 (65.5%) had undergone procedures using AI and 40 (34.5%) had undergone procedures without AI software. The intraoperative outcomes revealed a significant decrease in radiation exposure (AI group, 1955 mGy; vs non-AI group, 3755 mGy; P = .004), a significant decrease in the fluoroscopy time (AI group, 55.6 minutes; vs non-AI group, 86.9 minutes; P = .007), a decrease in the operative time (AI group, 255 minutes; vs non-AI group, 284 minutes; P = .294), and a significant decrease in contrast use (AI group, 123 mL; vs non-AI group, 199 mL; P < .0001). No differences were found in the 30-day and long-term outcomes. CONCLUSIONS The results from the present study have demonstrated that the use of AI technology combined with intraoperative imaging can significantly facilitate complex endovascular aneurysm repair by decreasing the operative time, radiation exposure, fluoroscopy time, and contrast use. Overall, evolving technology such as AI has improved radiation safety for both the patient and the entire operating room team.
Affiliation(s)
- Rohini J Patel
- Division of Vascular and Endovascular Surgery, Department of Surgery, University of California San Diego, San Diego, CA
- Arielle M Lee
- Division of Vascular and Endovascular Surgery, Department of Surgery, University of California San Diego, San Diego, CA
- John Hallsten
- Division of Vascular and Endovascular Surgery, Department of Surgery, University of California San Diego, San Diego, CA
- John S Lane
- Division of Vascular and Endovascular Surgery, Department of Surgery, University of California San Diego, San Diego, CA
- Andrew R Barleben
- Division of Vascular and Endovascular Surgery, Department of Surgery, University of California San Diego, San Diego, CA
- Mahmoud B Malas
- Division of Vascular and Endovascular Surgery, Department of Surgery, University of California San Diego, San Diego, CA.

11
Willey MC, Kern AM, Goetz JE, Marsh JL, Anderson DD. Biomechanical guidance can improve accuracy of reduction for intra-articular tibia plafond fractures and reduce joint contact stress. J Orthop Res 2023;41:546-554. PMID: 35672888. PMCID: PMC9726992. DOI: 10.1002/jor.25393. Received 03/02/2022; revised 05/15/2022; accepted 05/26/2022.
Abstract
Articular fracture malreduction increases posttraumatic osteoarthritis (PTOA) risk by elevating joint contact stress. A new biomechanical guidance system (BGS) that provides intraoperative assessment of articular fracture reduction and joint contact stress based solely on a preoperative computed tomography (CT) and intraoperative fluoroscopy may facilitate better fracture reduction. The objective of this proof-of-concept cadaveric study was to test this premise while characterizing BGS performance. Articular tibia plafond fractures were created in five cadaveric ankles. CT scans were obtained to provide digital models. Indirect reduction was performed in a simulated operating room once with and once without BGS guidance. CT scans after fixation provided models of the reduced ankles for assessing reduction accuracy, joint contact stresses, and BGS accuracy. BGS was utilized 4.8 ± 1.3 (mean ± SD) times per procedure, increasing operative time by 10 min (39%), and the number of fluoroscopy images by 31 (17%). Errors in BGS reduction assessment compared to CT-derived models were 0.45 ± 0.57 mm in translation and 2.0 ± 2.5° in rotation. For the four ankles that were successfully reduced and fixed, associated absolute errors in computed mean and maximum contact stress were 0.40 ± 0.40 and 0.96 ± 1.12 MPa, respectively. BGS reduced mean and maximum contact stress by 1.1 and 2.6 MPa, respectively. BGS thus improved the accuracy of articular fracture reduction and significantly reduced contact stress. Statement of Clinical Significance: Malreduction of articular fractures is known to lead to PTOA. The BGS described in this work has potential to improve quality of articular fracture reduction and clinical outcomes for patients with a tibia plafond fracture.
Affiliation(s)
- Michael C Willey: Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Andrew M Kern: Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Jessica E Goetz: Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- John Lawrence Marsh: Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Donald D Anderson: Departments of Orthopedics and Rehabilitation, Biomedical Engineering, and Industrial and Systems Engineering, University of Iowa, Iowa City, Iowa, USA

12
Jecklin S, Jancik C, Farshad M, Fürnstahl P, Esfandiari H. X23D-Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data. J Imaging 2022; 8:271. [PMID: 36286365 PMCID: PMC9604813 DOI: 10.3390/jimaging8100271]
Abstract
Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, from planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters into the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving a higher accuracy than the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images digitally reconstructed from the public CTSpine1K dataset. On unseen data, we achieved an average F1 score of 88% and a surface score of 71%. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a state-of-the-art counterpart by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making based solely on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
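The F1 score reported above can be read as the voxel-wise harmonic mean of precision and recall between reconstructed and ground-truth shapes, which for binary masks coincides with the Dice coefficient. A minimal sketch of that metric, assuming binary occupancy masks (the paper's exact surface-score definition is not reproduced here):

```python
import numpy as np

def f1_score(pred, gt):
    """Voxel-wise F1 (Dice) for binary masks: 2*TP / (2*TP + FP + FN)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2.0 * tp / (2.0 * tp + fp + fn)
```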
Affiliation(s)
- Sascha Jecklin: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Carla Jancik: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Mazda Farshad: Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Philipp Fürnstahl: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Hooman Esfandiari: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland

13
Nguyen V, Alves Pereira LF, Liang Z, Mielke F, Van Houtte J, Sijbers J, De Beenhouwer J. Automatic landmark detection and mapping for 2D/3D registration with BoneNet. Front Vet Sci 2022; 9:923449. [PMID: 36061115 PMCID: PMC9434378 DOI: 10.3389/fvets.2022.923449]
Abstract
The 3D musculoskeletal motion of animals is of interest for various biological studies and can be derived from X-ray fluoroscopy acquisitions by means of image matching or manual landmark annotation and mapping. While the image-matching method requires a robust similarity measure (intensity-based) or an expensive computation (tomographic reconstruction-based), the manual annotation method depends on the experience of operators. In this paper, we tackle these challenges with a strategic approach that consists of two building blocks: an automated 3D landmark extraction technique and a deep neural network for 2D landmark detection. For 3D landmark extraction, we propose a technique based on the shortest voxel coordinate variance to extract the 3D landmarks from the 3D tomographic reconstruction of an object. For 2D landmark detection, we propose a customized ResNet18-based neural network, BoneNet, to automatically detect geometrical landmarks in X-ray fluoroscopy images. With a deeper network architecture than the original ResNet18 model, BoneNet can extract and propagate feature vectors for accurate 2D landmark inference. The 3D poses of the animal are then reconstructed by aligning the extracted 2D landmarks from the X-ray radiographs with the corresponding 3D landmarks in a 3D object reference model. Our proposed method is validated on X-ray images simulated from a real piglet hindlimb 3D computed tomography scan and does not require manual annotation of landmark positions. The simulation results show that BoneNet is able to accurately detect the 2D landmarks in simulated, noisy 2D X-ray images, resulting in promising rigid and articulated parameter estimations.
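The final step, aligning detected 2D landmarks with their 3D counterparts in a reference model, is a classic 2D/3D correspondence problem. As an illustrative sketch (not the authors' implementation), a direct linear transform recovers a projection matrix from six or more noise-free landmark pairs:

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix from n >= 6 3D-2D correspondences
    via the direct linear transform (SVD of the homogeneous system)."""
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)  # homogeneous 3D point
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)  # null-space vector is the solution
    return P / np.linalg.norm(P)

def project(P, X):
    """Project 3D points through P and dehomogenize to 2D pixels."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    p = (P @ Xh.T).T
    return p[:, :2] / p[:, 2:3]
```

With noise-free correspondences the recovered matrix reprojects the landmarks to machine precision; in practice a robust or iterative refinement would follow.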
Affiliation(s)
- Van Nguyen (correspondence): Imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Luis F. Alves Pereira: Imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium; Departamento de Ciência da Computação, Universidade Federal do Agreste de Pernambuco, Garanhuns, Brazil
- Zhihua Liang: Imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Falk Mielke: Imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium; Department of Biology, University of Antwerp, Antwerp, Belgium
- Jeroen Van Houtte: Imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Jan Sijbers: Imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Jan De Beenhouwer: Imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium

14
Naik RR, Bhat SN, Ampar N, Kundangar R. Realistic C-arm to pCT registration for vertebral localization in spine surgery. Med Biol Eng Comput 2022; 60:2271-2289. [PMID: 35680729 PMCID: PMC9294032 DOI: 10.1007/s11517-022-02600-5]
Abstract
Spine surgeries are vulnerable to wrong-level surgery and postoperative complications because of the spine's complex structure. Unavailability of a 3D intraoperative imaging device, low-contrast intraoperative X-ray images, variable clinical and patient conditions, manual analyses, lack of skilled technicians, and human error increase the chances of wrong-site or wrong-level surgery. State-of-the-art work addresses the complications associated with spine surgeries using 3D-2D image registration systems and other medical image processing techniques. Intensity-based 3D-2D image registration systems have been widely practiced across various clinical applications. However, these frameworks are limited to specific clinical conditions such as anatomy, dimension of image correspondence, and imaging modalities. Moreover, these frameworks have prerequisites for functioning in clinical application, such as dataset requirements, speed of computation, high-end system configuration, limited capture range, and multiple local maxima. A simple and effective registration framework was therefore designed, with the study objective of vertebral level identification and pose estimation from intraoperative fluoroscopic images, by combining intensity-based and iterative closest point (ICP)-based 3D-2D registration. A hierarchical multi-stage registration framework was designed comprising coarse and finer registration. The coarse registration was performed in two stages: intensity similarity-based spatial localization, and source-to-detector localization based on the intervertebral distance correspondence between vertebral centroids in projected and intraoperative X-ray images. Finally, to speed up target localization in the intraoperative application, a rigid ICP-based finer registration was performed based on 3D-2D vertebral centroid correspondence. The mean projection distance error (mPDE) measurement, the visual similarity between the projection image at the finer registration point and the intraoperative X-ray image, and surgeons' feedback were used to assure the quality of the designed registration framework. The average mPDE after peak signal-to-noise ratio (PSNR)-based coarse registration was 20.41 mm. After the coarse registration in the spatial region and the source-to-detector direction, the average mPDE was reduced to 12.18 mm. On finer ICP-based registration, the mean mPDE was finally reduced to 0.36 mm. The approximate mean times required for the coarse registration, finer registration, and DRR image generation at the final registration point were 10 s, 15 s, and 1.5 min, respectively. The designed registration framework can act as a supporting tool for vertebral level localization and pose estimation in an intraoperative environment, and was designed with the future perspective of intraoperative target localization and pose estimation irrespective of the target anatomy.
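The mPDE figure of merit amounts to projecting the registered 3D landmarks through the C-arm geometry and averaging their 2D distances to the reference locations in the intraoperative image. A minimal sketch, assuming a generic 3x4 projection matrix `P` (a hypothetical stand-in for the authors' calibration model):

```python
import numpy as np

def mean_projection_distance_error(P, X_reg, x_ref):
    """mPDE: mean 2D distance between registered 3D landmarks projected
    through camera model P (3x4) and their reference image locations."""
    Xh = np.hstack([X_reg, np.ones((len(X_reg), 1))])
    p = (P @ Xh.T).T
    proj = p[:, :2] / p[:, 2:3]          # dehomogenize
    return float(np.mean(np.linalg.norm(proj - x_ref, axis=1)))
```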
Affiliation(s)
- Roshan Ramakrishna Naik: Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Shyamasunder N Bhat: Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Nishanth Ampar: Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Raghuraj Kundangar: Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India

15
Unberath M, Gao C, Hu Y, Judish M, Taylor RH, Armand M, Grupp R. The Impact of Machine Learning on 2D/3D Registration for Image-Guided Interventions: A Systematic Review and Perspective. Front Robot AI 2021; 8:716007. [PMID: 34527706 PMCID: PMC8436154 DOI: 10.3389/frobt.2021.716007]
Abstract
Image-based navigation is widely considered the next frontier of minimally invasive surgery. It is believed that image-based navigation will increase access to reproducible, safe, and high-precision surgery, as it may then be performed at acceptable cost and effort. This is because image-based techniques avoid the need for specialized equipment and seamlessly integrate with contemporary workflows. Furthermore, it is expected that image-based navigation techniques will play a major role in enabling mixed reality environments, as well as autonomous and robot-assisted workflows. A critical component of image guidance is 2D/3D registration, a technique to estimate the spatial relationships between 3D structures, e.g., preoperative volumetric imagery or models of surgical instruments, and 2D images thereof, such as intraoperative X-ray fluoroscopy or endoscopy. While image-based 2D/3D registration is a mature technique, its transition from the bench to the bedside has been restrained by well-known challenges, including brittleness with respect to optimization objective, hyperparameter selection, and initialization, difficulties in dealing with inconsistencies or multiple objects, and limited single-view performance. One reason these challenges persist today is that analytical solutions are likely inadequate considering the complexity, variability, and high-dimensionality of generic 2D/3D registration problems. The recent advent of machine learning-based approaches to imaging problems that, rather than specifying the desired functional mapping, approximate it using highly expressive parametric models holds promise for solving some of the notorious challenges in 2D/3D registration. In this manuscript, we review the impact of machine learning on 2D/3D registration to systematically summarize the recent advances made by introduction of this novel technology. Grounded in these insights, we then offer our perspective on the most pressing needs, significant open problems, and possible next steps.
Affiliation(s)
- Mathias Unberath: Advanced Robotics and Computationally Augmented Environments (ARCADE) Lab, Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States

16
Yang K, Luo Y, Zhao Y, Su S, Qu D, Zhao X, Song G. A novel 2D/3D hierarchical registration framework via principal-directional Fourier transform operator. Phys Med Biol 2021; 66:065030. [PMID: 33631735 DOI: 10.1088/1361-6560/abe9f5]
Abstract
An effective registration framework between preoperative 3D computed tomography and intraoperative 2D X-ray images is crucial in image-guided therapy. In this paper, a novel 2D/3D hierarchical registration framework via a principal-directional Fourier transform operator (HRF-PDFTO) is proposed. First, a PDFTO was established to obtain in-plane translation and rotation invariance. Then, an initialization-free template-matching approach based on the PDFTO was utilized to avoid initial value assignment and expand the capture range of registration. Finally, the hierarchical registration framework, HRF-PDFTO, was proposed to reduce the dimensions of the registration search space from n^6 to n^2. The experimental results demonstrated that the proposed HRF-PDFTO has good performance, with an accuracy of 0.72 mm and a single registration time of 16 s, improving registration efficiency tenfold. Consequently, the HRF-PDFTO can meet the accuracy and efficiency requirements of 2D/3D registration in related clinical applications.
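The translation-invariance half of such Fourier-based matching is commonly obtained via phase correlation: the normalized cross-power spectrum of two FFTs collapses to a delta function at the relative shift. A generic sketch of that building block (the PDFTO itself is not reproduced here):

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the circular shift s with moving = np.roll(ref, s)
    from the phase of the normalized cross-power spectrum."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real       # delta at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # wrap shifts larger than half the image into negative values
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```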
Affiliation(s)
- Keke Yang: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Yang Luo: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Yiwen Zhao: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Shun Su: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Danyang Qu: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Xingang Zhao: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Guoli Song: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169; Liaoning Medical Surgery and Rehabilitation Robot Engineering Research Center, Shenyang 110134, People's Republic of China

17
Zhu J, Li H, Ai D, Yang Q, Fan J, Huang Y, Song H, Han Y, Yang J. Iterative closest graph matching for non-rigid 3D/2D coronary arteries registration. Comput Methods Programs Biomed 2021; 199:105901. [PMID: 33360681 DOI: 10.1016/j.cmpb.2020.105901]
Abstract
Background and objective: Fusion of preoperative computed tomography angiography and intraoperative X-ray angiography images can considerably enhance the visual perception of physicians during percutaneous coronary interventions. This technique can provide 3D information of the arteries and reduce the uncertainty of 2D guidance images. For this purpose, 3D/2D vascular registration with high accuracy and robustness is crucial for performing accurate surgery. Methods: In this study, we propose an iterative closest graph matching (ICGM) method that utilizes an alternating iteration framework comprising correspondence and transformation phases. A coarse-to-fine matching approach based on redundant graph matching is proposed for the correspondence phase. The transformation phase involves rigid and non-rigid transformations, in which the rigid transformation is calculated using a closed-form solution, and the non-rigid transformation is achieved using a statistical shape model established from a synthetic deformation dataset. Results: The proposed method is evaluated and compared with nine state-of-the-art methods on simulated and clinical datasets. Experiments demonstrate that our method is insensitive to the pose of the data and robust to noise and deformation. Moreover, it outperforms the other methods in registering real data. Conclusions: Given its high capture range, the proposed method can register 3D vessels without prior initialization in clinical practice.
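The closed-form rigid step referenced above is typically the Kabsch/Umeyama solution: an SVD of the cross-covariance of the centered point sets. A self-contained sketch, assuming the correspondences are already known:

```python
import numpy as np

def rigid_closed_form(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/Umeyama):
    returns R, t such that dst ~= src @ R.T + t for paired point sets."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In an ICP-style loop this solve alternates with a nearest-neighbour (or, as here, graph-matching) correspondence step until convergence.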
Affiliation(s)
- Jianjun Zhu: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Heng Li: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Qi Yang: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Yechen Han: Department of Cardiology, Peking Union Medical College Hospital, Beijing 100730, China
- Jian Yang: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China

18
Doelare SAN, Smorenburg SPM, van Schaik TG, Blankensteijn JD, Wisselink W, Nederhoed JH, Lely RJ, Hoksbergen AWJ, Yeung KK. Image Fusion During Standard and Complex Endovascular Aortic Repair, to Fuse or Not to Fuse? A Meta-analysis and Additional Data From a Single-Center Retrospective Cohort. J Endovasc Ther 2020; 28:78-92. [PMID: 32964768 PMCID: PMC7816548 DOI: 10.1177/1526602820960444]
Abstract
PURPOSE To determine if image fusion will reduce contrast volume, radiation dose, and fluoroscopy and procedure times in standard and complex (fenestrated/branched) endovascular aneurysm repair (EVAR). MATERIALS AND METHODS A search of the PubMed, Embase, and Cochrane databases was performed in December 2019 to identify articles describing results of standard and complex EVAR procedures using image fusion compared with a control group. Study selection, data extraction, and assessment of the methodological quality of the included publications were performed by 2 reviewers working independently. Primary outcomes of the pooled analysis were contrast volume, fluoroscopy time, radiation dose, and procedure time. Eleven articles were identified comprising 1547 patients. Data on 140 patients satisfying the study inclusion criteria were added from the authors' center. Mean differences (MDs) are presented with the 95% confidence interval (CI). RESULTS For standard EVAR, contrast volume and procedure time showed a significant reduction with an MD of -29 mL (95% CI -40.5 to -18.5, p<0.001) and -11 minutes (95% CI -21.0 to -1.8, p<0.01), respectively. For complex EVAR, significant reductions in favor of image fusion were found for contrast volume (MD -79 mL, 95% CI -105.7 to -52.4, p<0.001), fluoroscopy time (MD -14 minutes, 95% CI -24.2 to -3.5, p<0.001), and procedure time (MD -52 minutes, 95% CI -75.7 to -27.9, p<0.001). CONCLUSION The results of this meta-analysis confirm that image fusion significantly reduces contrast volume, fluoroscopy time, and procedure time in complex EVAR but only contrast volume and procedure time for standard EVAR. Though a reduction was suggested, the radiation dose was not significantly affected by the use of fusion imaging in either standard or complex EVAR.
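The pooled mean differences and 95% CIs reported above follow the usual inverse-variance logic of meta-analysis. A generic fixed-effect sketch with hypothetical per-study inputs (the review's actual model, e.g. random-effects, is not specified here):

```python
import numpy as np

def pooled_mean_difference(md, se):
    """Fixed-effect inverse-variance pooling of per-study mean differences:
    weights are 1/SE^2; returns the pooled MD and its 95% CI."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2
    pooled = np.sum(w * md) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))       # SE of the weighted mean
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```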
Affiliation(s)
- Sabrina A N Doelare: Department of Surgery, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Stefan P M Smorenburg: Department of Surgery, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Theodorus G van Schaik: Department of Surgery, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Jan D Blankensteijn: Department of Surgery, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Willem Wisselink: Department of Surgery, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Johanna H Nederhoed: Department of Surgery, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Rutger J Lely: Department of Radiology, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Arjan W J Hoksbergen: Department of Surgery, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Kak Khee Yeung: Departments of Surgery and Physiology, Amsterdam Cardiovascular Sciences, Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands

19
De Boer SW, Heinen SGH, Goudeketting SR, De Haan MW, Mees BM, Van Den Heuvel DAF, De Vries JPPM. Novel diagnostic and imaging techniques in endovascular iliac artery procedures. Expert Rev Cardiovasc Ther 2020; 18:395-404. [PMID: 32544005 DOI: 10.1080/14779072.2020.1780916]
Abstract
INTRODUCTION Endovascular revascularization has become the preferred treatment for most patients with iliac artery obstructions, with a high rate of clinical and technical success. AREAS COVERED This review will describe novel developments in the diagnosis and treatment of iliac artery obstructions including the augmentation of preprocedural imaging with advanced flow models, image fusion techniques, and state-of-the-art device-tracking capabilities. EXPERT OPINION The combination of these developments will change the endovascular field within the next 5 years, allowing targeted iliac treatment without the need for radiographic imaging or iodinated contrast media.
Affiliation(s)
- Sanne W De Boer: Department of Radiology, Maastricht University Medical Center+, Maastricht, The Netherlands; CARIM School for Cardiovascular Diseases, Maastricht University, Maastricht, The Netherlands
- Stefan G H Heinen: Department of Radiology, St. Antonius Hospital, Nieuwegein, The Netherlands
- Michiel W De Haan: Department of Radiology, Maastricht University Medical Center+, Maastricht, The Netherlands; CARIM School for Cardiovascular Diseases, Maastricht University, Maastricht, The Netherlands
- Barend M Mees: CARIM School for Cardiovascular Diseases, Maastricht University, Maastricht, The Netherlands; Department of Vascular Surgery, Maastricht University Medical Center+, Maastricht, The Netherlands
- Jean-Paul P M De Vries: Department of Surgery, Division of Vascular Surgery, University Medical Center Groningen, Groningen, The Netherlands

20
Zhu J, Fan J, Guo S, Ai D, Song H, Wang C, Zhou S, Yang J. Heuristic tree searching for pose-independent 3D/2D rigid registration of vessel structures. Phys Med Biol 2020; 65:055010. [DOI: 10.1088/1361-6560/ab6b43]
21
Towards Automated Spine Mobility Quantification: A Locally Rigid CT to X-ray Registration Framework. Biomedical Image Registration 2020. [PMCID: PMC7279937 DOI: 10.1007/978-3-030-50120-4_7]
Abstract
Different pathologies of the vertebral column, such as scoliosis, require quantification of the mobility of individual vertebrae or of curves of the spine for treatment planning. Without the necessary mobility, vertebrae cannot be safely re-positioned and fused. The current clinical workflow consists of radiologists or surgeons estimating angular differences of neighbouring vertebrae from different X-ray images. This procedure is time consuming and prone to inaccuracy. The proposed method automates this quantification by deforming a CT image in a physiologically reasonable way and matching it to the X-ray images of interest. We propose a proof-of-concept evaluation on synthetic data. The automatic and quantitative analysis enables reproducible results independent of the investigator.
22
Thomas S, Isensee F, Kohl S, Privalov M, Beisemann N, Swartman B, Keil H, Vetter SY, Franke J, Grützner PA, Maier-Hein L, Nolden M, Maier-Hein K. Computer-assisted intra-operative verification of surgical outcome for the treatment of syndesmotic injuries through contralateral side comparison. Int J Comput Assist Radiol Surg 2019; 14:2211-2220. [PMID: 31392672 DOI: 10.1007/s11548-019-02043-8]
Abstract
PURPOSE Fracture reduction and fixation of syndesmotic injuries is a common procedure in trauma surgery. An intra-operative evaluation of the surgical outcome is challenging due to high inter-individual anatomical variation. A comparison to the contralateral uninjured ankle would be highly beneficial but would also incur additional radiation and time consumption. In this work, we pioneer automatic contralateral side comparison while avoiding an additional 3D scan. METHODS We reconstruct an accurate 3D surface of the uninjured ankle joint from three low-dose 2D fluoroscopic projections. Through CNN complemented 3D shape model segmentation, we create a reference model of the injured ankle while addressing the issues of metal artifacts and initialization. Following 2D-3D multiple bone reconstruction, a final reference contour can be created and matched to the uninjured ankle for contralateral side comparison without any user interaction. RESULTS The accuracy and robustness of individual workflow steps were assessed using 81 C-arm datasets, with 2D and 3D images available for injured and uninjured ankles. Furthermore, the entire workflow was tested on eleven clinical cases. These experiments showed an overall average Hausdorff distance of [Formula: see text] mm measured at clinical evaluation level. CONCLUSION Reference contours of the contralateral side reconstructed from three projection images can assist surgeons in optimizing reduction results, reducing the duration of radiation exposure and potentially improving postoperative outcomes in the long term.
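The Hausdorff distance used for evaluation above is the worst-case nearest-neighbour distance between two surfaces. A brute-force sketch on point samples (sufficient for small sets; real surface meshes would use a spatial index):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (n,3) and B (m,3):
    the largest nearest-neighbour distance in either direction."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```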
Affiliation(s)
- Sarina Thomas
- Division of Medical Image Computing (E230), German Cancer Research Center, Heidelberg, Germany; Medical Faculty, University of Heidelberg, Heidelberg, Germany
- Fabian Isensee
- Division of Medical Image Computing (E230), German Cancer Research Center, Heidelberg, Germany
- Simon Kohl
- Division of Medical Image Computing (E230), German Cancer Research Center, Heidelberg, Germany
- Maxim Privalov
- MINTOS Research Group, BG Trauma Center, Ludwigshafen, Germany
- Nils Beisemann
- MINTOS Research Group, BG Trauma Center, Ludwigshafen, Germany
- Holger Keil
- MINTOS Research Group, BG Trauma Center, Ludwigshafen, Germany
- Sven Y Vetter
- MINTOS Research Group, BG Trauma Center, Ludwigshafen, Germany
- Jochen Franke
- MINTOS Research Group, BG Trauma Center, Ludwigshafen, Germany
- Paul A Grützner
- MINTOS Research Group, BG Trauma Center, Ludwigshafen, Germany
- Lena Maier-Hein
- Division of Computer-Assisted Medical Interventions (E130), German Cancer Research Center, Heidelberg, Germany
- Marco Nolden
- Division of Medical Image Computing (E230), German Cancer Research Center, Heidelberg, Germany
- Klaus Maier-Hein
- Division of Medical Image Computing (E230), German Cancer Research Center, Heidelberg, Germany
23
A comparative analysis of intensity-based 2D–3D registration for intraoperative use in pedicle screw insertion surgeries. Int J Comput Assist Radiol Surg 2019; 14:1725-1739. [DOI: 10.1007/s11548-019-02024-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2019] [Accepted: 06/26/2019] [Indexed: 10/26/2022]
24
Banerjee J, Sun Y, Klink C, Gahrmann R, Niessen WJ, Moelker A, van Walsum T. Multiple-correlation similarity for block-matching based fast CT to ultrasound registration in liver interventions. Med Image Anal 2019; 53:132-141. [PMID: 30772666 DOI: 10.1016/j.media.2019.02.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 01/23/2019] [Accepted: 02/07/2019] [Indexed: 11/24/2022]
Abstract
In this work, we present a fast approach to registration of computed tomography to ultrasound volumes for image-guided intervention applications. The method is based on a combination of block-matching and outlier rejection. The block-matching uses a correlation-based multimodal similarity metric, where the intensity and gradient of the computed tomography images, along with the ultrasound volumes, serve as input images for finding correspondences between blocks in the computed tomography and ultrasound volumes. A variance- and octree-based feature point-set selection method is used to select distinct and evenly spread point locations for block-matching. Geometric consistency and smoothness criteria are imposed in an outlier rejection step to refine the block-matching results. The block-matching results after outlier rejection are used to determine the affine transformation between the computed tomography and ultrasound volumes. Various experiments are carried out to assess optimal performance and the influence of parameters on the accuracy and computational time of the registration. A leave-one-patient-out cross-validation registration error of 3.6 mm is achieved over 29 datasets acquired from 17 patients.
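The core of block-matching is an exhaustive local search that maximizes a similarity score between a template block and candidate blocks in the target image. A minimal single-modality sketch using normalized cross-correlation (a simpler stand-in for the paper's multiple-correlation metric; all names and data here are hypothetical):

```python
import numpy as np

def ncc(p, q):
    """Normalized cross-correlation of two equally sized blocks (in [-1, 1])."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.sqrt((p * p).sum() * (q * q).sum())
    return (p * q).sum() / denom if denom > 0 else 0.0

def best_match(block, image, stride=1):
    """Exhaustively search `image` for the block position maximizing NCC."""
    bh, bw = block.shape
    best, pos = -2.0, (0, 0)
    for i in range(0, image.shape[0] - bh + 1, stride):
        for j in range(0, image.shape[1] - bw + 1, stride):
            s = ncc(block, image[i:i + bh, j:j + bw])
            if s > best:
                best, pos = s, (i, j)
    return pos, best

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
blk = img[10:18, 5:13].copy()   # take a block from a known location
pos, score = best_match(blk, img)
print(pos)                      # (10, 5)
```

Real implementations restrict the search to a window around each feature point and, as in this paper, follow the matching step with geometric outlier rejection before fitting the affine transform.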
Affiliation(s)
- Jyotirmoy Banerjee
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Yuanyuan Sun
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Camiel Klink
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Renske Gahrmann
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Quantitative Imaging Group, Faculty of Technical Physics, Delft University of Technology, The Netherlands
- Adriaan Moelker
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
25
Chinnadurai P, Bismuth J. Intraoperative Imaging and Image Fusion for Venous Interventions. Methodist Debakey Cardiovasc J 2018; 14:200-207. [PMID: 30410650 DOI: 10.14797/mdcj-14-3-200] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
Abstract
Advanced imaging for intraoperative evaluation of venous pathologies has played an increasingly significant role in this era of evolving minimally invasive surgical and interventional therapies. The evolution of dedicated venous stents and other novel interventional devices has mandated the need for advanced imaging tools to optimize safe and accurate device deployment. Most venous interventions are typically performed using a combination of standard 2-dimensional (2D) fluoroscopy, digital-subtraction angiography, and intravascular ultrasound imaging techniques. Latest-generation computed tomography (CT) and magnetic resonance imaging (MRI) scanners have been shown to provide high-resolution 3D and 4D information about venous vasculature. In addition to morphological imaging, novel MRI techniques such as 3D time-resolved MR venography and 4D flow sequences can provide quantitative information and help visualize intricate flow patterns to better understand complex venous pathologies. Moreover, the high-fidelity information from multiple imaging techniques can be integrated using image fusion to overcome the limitations of current intraoperative imaging techniques. For example, the limitations of standard 2D fluoroscopy and luminal angiography can be compensated for by perivascular and soft-tissue information from MRI during complex venous interventions using image fusion techniques. Intraoperative dynamic evaluation of devices such as venous stents and real-time understanding of changes in flow patterns during venous interventions may be routinely available in future interventional suites with integrated multimodality CT or MR imaging capabilities. The purpose of this review is to discuss the outlook for intraoperative imaging and multimodality image fusion techniques and highlight their value during complex venous interventions.
Affiliation(s)
- Jean Bismuth
- Methodist DeBakey Heart & Vascular Center, Houston Methodist Hospital, Houston, Texas
26
Reyneke CJF, Luthi M, Burdin V, Douglas TS, Vetter T, Mutsvangwa TEM. Review of 2-D/3-D Reconstruction Using Statistical Shape and Intensity Models and X-Ray Image Synthesis: Toward a Unified Framework. IEEE Rev Biomed Eng 2018; 12:269-286. [PMID: 30334808 DOI: 10.1109/rbme.2018.2876450] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Patient-specific three-dimensional (3-D) bone models are useful for a number of clinical applications such as surgery planning, postoperative evaluation, as well as implant and prosthesis design. Two-dimensional-to-3-D (2-D/3-D) reconstruction, also known as model-to-modality or atlas-based 2-D/3-D registration, provides a means of obtaining a 3-D model of a patient's bones from their 2-D radiographs when 3-D imaging modalities are not available. The preferred approach for estimating both shape and density information (that would be present in a patient's computed tomography data) for 2-D/3-D reconstruction makes use of digitally reconstructed radiographs and deformable models in an iterative, non-rigid, intensity-based approach. Based on a large number of state-of-the-art 2-D/3-D bone reconstruction methods, a unified mathematical formulation of the problem is proposed in a common conceptual framework, using unambiguous terminology. In addition, shortcomings, recent adaptations, and persisting challenges are discussed along with insights for future research.
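The iterative intensity-based approach reviewed here compares digitally reconstructed radiographs (DRRs) rendered from a deformable model against measured radiographs. As a much-simplified illustration (not any reviewed method; parallel-beam geometry instead of the cone-beam projection used in practice, and all names hypothetical), a DRR can be approximated by line integrals through a voxel volume, scored against a target image with a correlation objective:

```python
import numpy as np

def drr(volume, axis=0):
    """Parallel-beam digitally reconstructed radiograph: line integrals of
    voxel intensities along one axis (a simplification of cone-beam DRRs)."""
    return volume.sum(axis=axis)

def similarity(radiograph, rendered):
    """Pearson correlation, a common intensity-based 2D/3D objective."""
    a = radiograph - radiograph.mean()
    b = rendered - rendered.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 1.0        # cubic "bone" of unit attenuation
fixed = drr(vol)                # simulated target radiograph
print(round(similarity(fixed, drr(vol)), 6))  # 1.0 for a perfect match
```

In 2D/3D reconstruction, the shape and intensity coefficients of a statistical model would be iterated to maximize such a similarity between rendered DRRs and the patient's radiographs.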
27
A deep learning framework for segmentation and pose estimation of pedicle screw implants based on C-arm fluoroscopy. Int J Comput Assist Radiol Surg 2018; 13:1269-1282. [PMID: 29808466 DOI: 10.1007/s11548-018-1776-9] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Accepted: 04/25/2018] [Indexed: 10/14/2022]
Abstract
PURPOSE Pedicle screw fixation is a challenging procedure with concerning rates of reoperation. After insertion of the screws is completed, the most common intraoperative verification approach is to acquire anterior-posterior and lateral radiographic images, from which the surgeons try to visually assess the correctness of insertion. Given the limited accuracy of the existing verification techniques, we identified the need for an accurate and automated pedicle screw assessment system that can verify screw insertion intraoperatively. To this end, this paper offers a framework for automatic segmentation and pose estimation of pedicle screws based on deep learning principles. METHODS Segmentation of pedicle screw X-ray projections was performed by a convolutional neural network. The network separated the input X-rays into three classes: screw head, screw shaft and background. Once all the screw shafts were segmented, knowledge about the spatial configuration of the acquired biplanar X-rays was used to identify the correspondence between the projections. Pose estimation was then performed to estimate the 6 degree-of-freedom pose of each screw. The performance of the proposed pose estimation method was tested on a porcine specimen. RESULTS The developed machine learning framework was capable of segmenting the screw shafts with 93% and 83% accuracy when tested on synthetic X-rays and on clinically realistic X-rays, respectively. The pose estimation accuracy of this method was shown to be [Formula: see text] and [Formula: see text] on clinically realistic X-rays. CONCLUSIONS The proposed system offers an accurate and fully automatic pedicle screw segmentation and pose assessment framework. Such a system can help to provide an intraoperative pedicle screw insertion assessment protocol with minimal interference with the existing surgical routines.
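Recovering 3D geometry from corresponding points in calibrated biplanar X-rays is a linear triangulation problem. A minimal sketch (illustrative only, not the paper's pose estimation pipeline; the camera matrices and point are hypothetical) using the standard direct linear transform:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3D point from two projective views
    with 3x4 projection matrices P1, P2 and 2D observations u1, u2."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical near-orthogonal views (e.g. AP and lateral), focal length 1000
P1 = np.array([[1000., 0., 0., 0.], [0., 1000., 0., 0.], [0., 0., 1., 2000.]])
P2 = np.array([[0., 0., 1000., 0.], [0., 1000., 0., 0.], [-1., 0., 0., 2000.]])
X = np.array([10., 20., 30., 1.])    # ground-truth point (homogeneous)
u1 = (P1 @ X)[:2] / (P1 @ X)[2]      # its projections in each view
u2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(np.round(triangulate(P1, P2, u1, u2), 6))  # [10. 20. 30.]
```

A full screw pose estimate additionally needs the shaft axis and head position, but each of those reduces to triangulating corresponding 2D features such as the segmented shaft endpoints.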
28
Maurel B, Martin-Gonzalez T, Chong D, Irwin A, Guimbretière G, Davis M, Mastracci TM. A prospective observational trial of fusion imaging in infrarenal aneurysms. J Vasc Surg 2018; 68:1706-1713.e1. [PMID: 29804734 DOI: 10.1016/j.jvs.2018.04.015] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2017] [Accepted: 04/04/2018] [Indexed: 01/21/2023]
Abstract
OBJECTIVE Use of three-dimensional fusion has been shown to significantly reduce radiation exposure and contrast material use in complex (fenestrated and branched) endovascular aneurysm repair (EVAR). Cydar software (CYDAR Medical, Cambridge, United Kingdom) is a cloud-based technology that can provide imaging guidance by overlaying preoperative three-dimensional vessel anatomy from computed tomography scans onto live fluoroscopy images both in hybrid operating rooms and on mobile C-arms. The aim of this study was to determine whether radiation dose reduction would occur with the addition of fusion imaging to infrarenal repair in all imaging environments. METHODS All patients who consented to involvement in the trial and who were treated with EVAR in our center from March 2016 until April 2017 were included. A teaching session about radiation protection and Cydar fusion software use was provided to all operators before the start of the fusion group enrollment. This group was compared with a retrospective cohort of patients treated in the same center from March 2015 to March 2016, after a dedicated program of radiation awareness and reduction was introduced. Ruptured aneurysms and complex EVAR were excluded. Preoperative and perioperative characteristics were recorded, including parameters of radiation dose, such as air kerma and dose-area product. Results were expressed in median and interquartile range. RESULTS Forty-four patients were prospectively enrolled and compared with 21 retrospective control patients. No significant differences were found in comparing sex, body mass index, and age at repair. The median operation time (wire to wire) and fluoroscopy time were 90 (75-105) minutes and 30 (22-34) minutes, respectively, without significant differences between groups (P = .56 and P = .36). Dose-area product was nonsignificantly higher in the control group, 21.7 (8.9-85.9) Gy cm2, compared with the fusion group, 12.4 (7.5-23.4) Gy cm2 (P = .10). Air kerma product was significantly higher in the control group, 142 (61-541) mGy, compared with 82 (51-115) mGy in the fusion group (P = .03). The number of digital subtraction angiography runs was significantly lower in the fusion group (8 [6-11]) compared with the control group (10 [9-14]) (P = .03). There were no significant differences in the frequency of adverse events, endoleaks, or additional procedures required. CONCLUSIONS When it is used in simple procedures such as infrarenal aneurysm repair, image-based fusion technology is feasible both in hybrid operating rooms and on mobile systems and leads to an overall 50% reduction in radiation dose. Fusion technology should become standard of care for centers attempting to maximize radiation dose reduction, even if capital investment of a hybrid operating room is not feasible.
Affiliation(s)
- Blandine Maurel
- Aortic Team, Vascular Surgery Directorate, The Royal Free London, London, United Kingdom; Department of Vascular Surgery, Institut du Thorax, CHU Nantes, Nantes, France
- Teresa Martin-Gonzalez
- Aortic Team, Vascular Surgery Directorate, The Royal Free London, London, United Kingdom
- Debra Chong
- Aortic Team, Vascular Surgery Directorate, The Royal Free London, London, United Kingdom
- Andrew Irwin
- Aortic Team, Vascular Surgery Directorate, The Royal Free London, London, United Kingdom
- Meryl Davis
- Aortic Team, Vascular Surgery Directorate, The Royal Free London, London, United Kingdom
- Tara M Mastracci
- Aortic Team, Vascular Surgery Directorate, The Royal Free London, London, United Kingdom
29
Ketcha MD, De Silva T, Uneri A, Jacobson MW, Goerres J, Kleinszig G, Vogt S, Wolinsky JP, Siewerdsen JH. Multi-stage 3D-2D registration for correction of anatomical deformation in image-guided spine surgery. Phys Med Biol 2017; 62:4604-4622. [PMID: 28375139 PMCID: PMC5755708 DOI: 10.1088/1361-6560/aa6b3e] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or due to the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study in which a CT image of the spine was acquired followed by a series of 7 mobile radiographs with increasing degree of deformation applied. Second, the method was validated using a clinical data set of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using projection distance error (PDE) and failure rate (PDE > 20 mm, i.e. label registered outside the vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach in real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered in the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. compatibility with standard clinical workflow, large capture range, and robustness against mismatch in image content) and extends capability to cases exhibiting strong changes in spinal curvature.
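The PDE and failure-rate metrics used above are straightforward to compute once projected and ground-truth label positions are available. A minimal numpy sketch (illustrative only; the 20 mm failure threshold is taken from the abstract, the coordinates are hypothetical):

```python
import numpy as np

def projection_distance_error(projected, truth, fail_mm=20.0):
    """Per-label projection distance error (mm) in the radiograph plane,
    plus the failure rate (fraction of labels with PDE > fail_mm)."""
    pde = np.linalg.norm(projected - truth, axis=1)
    return pde, float((pde > fail_mm).mean())

proj = np.array([[100.0, 50.0], [40.0, 80.0], [10.0, 10.0]])   # registered labels
true = np.array([[101.0, 50.0], [40.0, 83.0], [40.0, 10.0]])   # annotated labels
pde, fail = projection_distance_error(proj, true)
print(pde)   # [ 1.  3. 30.] -> one of three labels exceeds 20 mm
```

Reporting both the distance distribution and a clinically meaningful failure threshold, as this paper does, guards against a low mean error hiding occasional gross mislabelings.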
Affiliation(s)
- M D Ketcha
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
30
Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration. Int J Comput Assist Radiol Surg 2017; 12:1221-1230. [PMID: 28527025 DOI: 10.1007/s11548-017-1611-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2017] [Accepted: 05/08/2017] [Indexed: 12/25/2022]
Abstract
PURPOSE In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance. The commonly used intensity-based 2D/3D registration has a limited capture range and is sensitive to initialization. We propose to utilize an opto/X-ray C-arm system which maintains the registration during the intervention by automating the re-initialization of the 2D/3D image registration. Consequently, the surgical workflow is not disrupted and the interaction time for manual initialization is eliminated. METHODS We utilize two distinct vision-based tracking techniques to estimate the relative poses between different C-arm arrangements: (1) global tracking using fused depth information and (2) an RGBD SLAM system for surgical scene tracking. A highly accurate multi-view calibration between RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target. RESULTS Several in vitro studies are conducted on a pelvic-femur phantom that is encased in gelatin and covered with drapes to simulate a clinically realistic scenario. The mean target registration errors (mTRE) for re-initialization using depth-only and RGB [Formula: see text] depth are 13.23 mm and 11.81 mm, respectively. 2D/3D registration yielded a 75% success rate using this automatic re-initialization, compared to a random initialization which yielded only 23% successful registration. CONCLUSION The pose-aware C-arm contributes to the 2D/3D registration process by globally re-initializing the relationship of the C-arm image and pre-interventional CT data. This system performs inside-out tracking, is self-contained, and does not require any external tracking devices.
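Mean target registration error (mTRE), the accuracy measure quoted above, is the average displacement of a set of target points under the estimated transform relative to the true one. A minimal sketch (illustrative only; the transforms and target points are hypothetical):

```python
import numpy as np

def mtre(T_est, T_true, targets):
    """Mean target registration error: average displacement (mm) of target
    points mapped by the estimated vs. true rigid transform (4x4 matrices)."""
    h = np.c_[targets, np.ones(len(targets))]             # homogeneous coordinates
    diff = (h @ T_est.T - h @ T_true.T)[:, :3]            # per-point displacement
    return np.linalg.norm(diff, axis=1).mean()

T_true = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [3.0, 4.0, 0.0]                            # 5 mm translation error
pts = np.random.default_rng(1).uniform(-50, 50, (10, 3))  # targets in the anatomy
print(mtre(T_est, T_true, pts))                           # 5.0
```

Because mTRE is evaluated at clinically relevant target locations rather than at fiducials, it also captures the lever-arm effect of rotational errors, which a pure parameter difference would understate.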
31
Wang ZJ. A CNN Regression Approach for Real-Time 2D/3D Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1352-1363. [PMID: 26829785 DOI: 10.1109/tmi.2016.2521800] [Citation(s) in RCA: 193] [Impact Index Per Article: 24.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
32
Chan B, Auyeung J, Rudan JF, Ellis RE, Kunz M. Intraoperative application of hand-held structured light scanning: a feasibility study. Int J Comput Assist Radiol Surg 2016; 11:1101-8. [PMID: 27017498 DOI: 10.1007/s11548-016-1381-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Accepted: 03/07/2016] [Indexed: 11/29/2022]
Abstract
PURPOSE Structured light scanning is an emerging technology that shows potential in the field of medical imaging and image-guided surgery. The purpose of this study was to investigate the feasibility of applying a hand-held structured light scanner in the operating theatre as an intraoperative image modality and registration tool. METHODS We performed an in vitro study with three fresh frozen knee specimens and a clinical pilot study with three patients (one total knee arthroplasty and two hip replacements). Before the procedure, a CT scan of the affected joint was obtained and isosurface models of the anatomies were created. A conventional surgical exposure was performed, and a hand-held structured light scanner (Artec Group, Palo Alto, USA) was used to scan the exposed anatomy. Using the texture information of the scanned model, bony anatomy was selected and registered to the CT models. Registration RMS errors were documented, and distance maps between the scanned model and the CT model were created. RESULTS For the in vitro trial, the average RMS error was 1.00 mm for the femur and 1.17 mm for the tibia registration. We found comparable results during clinical trials, with an average RMS error of 1.3 mm. CONCLUSIONS The results of this preliminary study indicate that structured light scanning could be applied accurately and safely in a surgical environment. This could result in a variety of applications for these scanners in image-guided interventions as intraoperative imaging and registration tools.
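The RMS errors reported here come from rigidly registering the scanned surface points to the CT model. For point sets with known correspondences, the optimal rigid alignment has a closed form (the Kabsch/Procrustes solution); a minimal sketch (illustrative only, not the study's registration tool, which must also establish correspondences, e.g. via ICP; the data are synthetic):

```python
import numpy as np

def kabsch_rms(src, dst):
    """Rigidly align corresponding point set src to dst (Kabsch algorithm)
    and return the post-registration RMS error."""
    sc, dc = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(sc.T @ dc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    aligned = sc @ R.T + dst.mean(0)
    return np.sqrt(((aligned - dst) ** 2).sum(axis=1).mean())

rng = np.random.default_rng(2)
src = rng.uniform(-30, 30, (20, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
dst = src @ R.T + np.array([5.0, -2.0, 1.0])   # rotated + translated copy
print(round(kabsch_rms(src, dst), 9))          # 0.0 for a noise-free pair
```

With real scan data the residual RMS (about 1 mm in vitro and 1.3 mm clinically in this study) reflects scanner noise, segmentation error, and true anatomical mismatch rather than the alignment itself.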
Affiliation(s)
- Brandon Chan
- School of Computing, Queen's University, 557 Goodwin Hall, Kingston, ON, K7L 2N8, Canada
- Jason Auyeung
- Department of Biomedical and Molecular Sciences, Queen's University, Botterell Hall, 18 Stuart Street, Kingston, ON, K7L 3N6, Canada
- John F Rudan
- Department of Surgery, Queen's University, Kingston General Hospital, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada
- Randy E Ellis
- School of Computing, Queen's University, 557 Goodwin Hall, Kingston, ON, K7L 2N8, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Botterell Hall, 18 Stuart Street, Kingston, ON, K7L 3N6, Canada; Department of Surgery, Queen's University, Kingston General Hospital, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada; Department of Mechanical and Materials Engineering, Queen's University, McLaughlin Hall, Kingston, ON, K7L 3N6, Canada
- Manuela Kunz
- School of Computing, Queen's University, 557 Goodwin Hall, Kingston, ON, K7L 2N8, Canada; Human Mobility Research Centre, Queen's University and Kingston General Hospital, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada