1. Völk C, Bernhard L, Völk D, Weiten M, Wilhelm D, Biberthaler P. [Mobile C-arm: radiation exposure and workflow killer? Potential of an innovative assistance system for intraoperative positioning]. Unfallchirurgie (Heidelberg) 2023; 126:928-934. German. [PMID: 37878125] [DOI: 10.1007/s00113-023-01380-3]
Abstract
Despite its versatile applicability, the intraoperative use of a mobile C-arm is often problematic and potentially associated with increased radiation exposure for both the patient and the personnel. In particular, correct positioning for adequate imaging can become a problem, as the nonsterile circulating nurse has to coordinate the various maneuvers together with the surgeon without having a good view of the surgical field. The sluggishness of the equipment and the intraoperative setting (sterile borders, additional hardware, etc.) pose further challenges. In an initial series of experimental trials, a light detection and ranging (LIDAR)-based assistance system showed promise for accurate and intuitive repositioning support. For this purpose, the sensors are attached to the C-arm base unit and, using a simultaneous localization and mapping (SLAM) algorithm, enable navigation of the device in the operating room to a stored target position. The potential of this system lies in an improved workflow and reduced radiation exposure. Its advantages over other experimental approaches are that no external hardware is required and that it is easy to use without isolating the operator from the rest of the operating room environment; however, its suitability for daily use in the presence of additional interfering factors should be verified in further studies.
Affiliation(s)
- Christopher Völk, Klinik und Poliklinik für Unfallchirurgie, Klinikum rechts der Isar der TU München, Ismaningerstr. 22, 81675 Munich, Germany
- Lukas Bernhard, Forschungsgruppe MITI, Klinikum rechts der Isar der TU München, Munich, Germany
- Dominik Völk, Klinik und Poliklinik für Unfallchirurgie, Klinikum rechts der Isar der TU München, Ismaningerstr. 22, 81675 Munich, Germany
- Dirk Wilhelm, Forschungsgruppe MITI and Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar der TU München, Munich, Germany
- Peter Biberthaler, Klinik und Poliklinik für Unfallchirurgie, Klinikum rechts der Isar der TU München, Ismaningerstr. 22, 81675 Munich, Germany
2. Kim JY, Lee JS, Lee JH, Park YS, Cho J, Koh JC. Virtual reality simulator's effectiveness on the spine procedure education for trainee: a randomized controlled trial. Korean J Anesthesiol 2023; 76:213-226. [PMID: 36323305] [DOI: 10.4097/kja.22491]
Abstract
BACKGROUND Since the onset of the coronavirus disease 2019 pandemic, virtual simulation has emerged as an alternative to traditional teaching methods, as it can be employed within the recently established contact-minimizing guidelines. This prospective education study aimed to develop a virtual reality simulator for the lumbar transforaminal epidural block (LTFEB) and demonstrate its efficacy. METHODS We developed a virtual reality simulator using patient image data processing, virtual X-ray generation, spatial registration, and virtual reality technology. For a realistic virtual environment, a procedure room, surgical table, C-arm, and monitor were created. Using the virtual C-arm, X-ray images of the patient's anatomy, the needle, and the indicator were obtained in real time. After the simulation, trainees could receive feedback by adjusting the visibility of structures such as skin and bones. Simulator-based LTFEB training was evaluated with 20 inexperienced trainees. The trainees' procedural time, global rating score, number of C-arm acquisitions, and overall satisfaction were recorded as primary outcomes. RESULTS The group using the simulator showed a higher global rating score (P = 0.014), reduced procedural time (P = 0.025), fewer C-arm acquisitions (P = 0.001), and a higher overall satisfaction score (P = 0.007). CONCLUSIONS We created an accessible and effective virtual reality simulator that can be used to teach inexperienced trainees LTFEB without radiation exposure. The results of this study indicate that the proposed simulator will prove a useful aid for teaching LTFEB.
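The virtual X-ray generation this abstract describes can be pictured as rendering a digitally reconstructed radiograph (DRR) from CT data. The following is a minimal, self-contained sketch of that idea under a simplified parallel-beam assumption; it is not the authors' implementation, and the phantom and geometry are hypothetical.

```python
# Minimal parallel-beam DRR sketch: integrate CT attenuation along one axis.
# Simplified illustration of "virtual X-ray generation" from CT data;
# not the simulator described in the paper. Phantom and angles are hypothetical.
import numpy as np
from scipy.ndimage import rotate

def simple_drr(ct_hu: np.ndarray, gantry_angle_deg: float = 0.0) -> np.ndarray:
    """Project a CT volume (Hounsfield units) into a 2D virtual X-ray.

    ct_hu is shaped (z, y, x); the beam runs along the y axis after rotating
    the volume in the axial (y, x) plane by the gantry angle.
    """
    # Convert HU to a relative linear attenuation coefficient (unitless here,
    # since the final image is normalized anyway).
    mu = np.clip(1.0 + ct_hu / 1000.0, 0.0, None)
    # Rotating the volume around the z axis mimics a rotating C-arm gantry.
    mu_rot = rotate(mu, gantry_angle_deg, axes=(1, 2), reshape=False, order=1)
    # Beer-Lambert: intensity decays with the line integral of attenuation.
    line_integrals = mu_rot.sum(axis=1)
    drr = np.exp(-line_integrals / line_integrals.max())
    # Normalize to [0, 1] and invert so dense structures (bone) appear bright.
    return 1.0 - (drr - drr.min()) / (np.ptp(drr) + 1e-9)

if __name__ == "__main__":
    phantom = np.zeros((64, 64, 64))        # stand-in for a CT volume
    phantom[20:44, 28:36, 28:36] = 1000.0   # a bone-like block, in HU
    image = simple_drr(phantom, gantry_angle_deg=30.0)
    print(image.shape, float(image.min()), float(image.max()))
```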
Affiliation(s)
- Ji Yeong Kim, Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul, Korea
- Jong Seok Lee, Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul, Korea
- Jae Hee Lee, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Yoon Sun Park, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Jaein Cho, Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul, Korea
- Jae Chul Koh, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
3. Seong H, Yun D, Yoon KS, Kwak JS, Koh JC. Development of pre-procedure virtual simulation for challenging interventional procedures: an experimental study with clinical application. Korean J Pain 2022; 35:403-412. [PMID: 36175339] [PMCID: PMC9530692] [DOI: 10.3344/kjp.2022.35.4.403]
Abstract
Background Most pain management techniques for challenging procedures are still performed under C-arm fluoroscopic guidance, although it is sometimes difficult even for experienced clinicians to interpret altered three-dimensional anatomy from a two-dimensional X-ray image. To overcome these difficulties, the development of a virtual simulator may be helpful. Therefore, in this study, the authors developed a virtual simulator and present its clinical application. Methods We developed a computer program to simulate the actual environment of the procedure. Computed tomography (CT) Digital Imaging and Communications in Medicine (DICOM) data were used for the simulations. Virtual needle placement was simulated at the most appropriate position for a successful block. Using a virtual C-arm, the authors searched for the C-arm position at which the needle is visualized as a point. The positional relationships between the patient's anatomy and the needle were thus identified. Results The simulations used the CT DICOM data of patients who visited the outpatient clinic. When the patients revisited the clinic, images similar to the simulated ones were obtained by manipulating the C-arm accordingly. A transforaminal epidural injection that was difficult to perform because of severe spinal deformity, as well as the challenging superior hypogastric plexus and Gasserian ganglion blocks, were performed successfully with the help of the simulation. Conclusions We created a pre-procedural virtual simulation and demonstrated its successful application in patients expected to undergo challenging procedures.
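The search for a C-arm pose in which the needle is "visualized as a point" amounts to aligning the beam axis with the planned needle trajectory (a down-the-beam view). The sketch below derives the two gantry angles from a needle direction vector under an idealized parallel-beam model; the coordinate and angle conventions are assumptions for illustration, not the authors' method.

```python
# Gantry angles that point the beam along a planned needle axis, so that the
# needle projects to a single point. Idealized geometric sketch; coordinate
# and angle conventions are assumed, not taken from the paper.
import numpy as np

def beam_angles_for_needle(entry: np.ndarray, tip: np.ndarray) -> tuple[float, float]:
    """Return (swing_deg, tilt_deg) for a beam parallel to the needle.

    Assumed convention: x = patient left, y = anterior, z = cranial;
    a straight AP beam runs along +y with zero swing and zero tilt.
    """
    d = np.asarray(tip, dtype=float) - np.asarray(entry, dtype=float)
    d /= np.linalg.norm(d)
    # In-plane swing about the cranio-caudal axis (LAO/RAO-style rotation).
    swing = np.degrees(np.arctan2(d[0], d[1]))
    # Tilt out of the axial plane (cranial positive, caudal negative).
    tilt = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    return swing, tilt

if __name__ == "__main__":
    skin_entry = np.array([40.0, -80.0, -120.0])  # hypothetical CT points (mm)
    target_tip = np.array([18.0, -35.0, -128.0])
    print(beam_angles_for_needle(skin_entry, target_tip))
```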
Affiliation(s)
- Hyunyoung Seong, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Seoul, Korea
- Daehun Yun, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Seoul, Korea
- Kyung Seob Yoon, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Seoul, Korea
- Ji Soo Kwak, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Seoul, Korea
- Jae Chul Koh, Department of Anesthesiology and Pain Medicine, Korea University Anam Hospital, Seoul, Korea
4. Bernhard L, Völk C, Völk D, Rothmeyer F, Xu Z, Ostler D, Biberthaler P, Wilhelm D. RAY-POS: a LIDAR-based assistance system for intraoperative repositioning of mobile C-arms without external aids. Int J Comput Assist Radiol Surg 2022; 17:719-729. [PMID: 35195830] [PMCID: PMC8948129] [DOI: 10.1007/s11548-022-02571-w]
Abstract
PURPOSE In current clinical practice, intraoperative repositioning of mobile C-arms is challenging due to a lack of visual cues and efficient guiding tools. This can be detrimental to the surgical workflow and lead to an additional radiation burden for both patient and personnel. To overcome this problem, we present our novel approach, Lidar-based X-ray Positioning for Mobile C-arms (RAY-POS), for assisting circulating nurses during intraoperative C-arm repositioning without requiring external aids. METHODS RAY-POS consists of a localization module and a graphical user interface for guiding the user back to a previously recorded C-arm position. We conducted a systematic comparison of simultaneous localization and mapping (SLAM) algorithms using different attachment positions of light detection and ranging (LIDAR) sensors to benchmark localization performance within the operating room (OR). For two promising combinations, we conducted further end-to-end repositioning tests within a realistic OR setup. RESULTS The SLAM algorithm gmapping with a LIDAR sensor mounted 40 cm above the C-arm's horizontal unit performed best regarding localization accuracy and long-term stability. The distribution of the repositioning error yielded an effective standard deviation of 7.61 mm. CONCLUSION We conclude that a proof of concept for LIDAR-based C-arm repositioning without external aids has been achieved. In future work, we mainly aim to extend the capabilities of our system and to evaluate its usability together with clinicians.
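The guidance step can be pictured as computing the offset between the current SLAM pose estimate and a previously stored target pose in the floor plane (SE(2)), which a GUI then displays to the circulating nurse. The following is a minimal sketch of that idea under assumed pose formats and tolerances; it is not the RAY-POS implementation.

```python
# SE(2) offset from the C-arm's current SLAM pose to a stored target pose,
# as a guidance GUI might display it. Minimal illustration, not the RAY-POS
# code; the pose representation and tolerance values are assumptions.
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # meters, map frame
    y: float      # meters, map frame
    theta: float  # radians, heading in the map frame

def repositioning_offset(current: Pose2D, target: Pose2D) -> tuple[float, float, float]:
    """Return (forward, lateral, rotation) to reach the target, expressed
    in the C-arm's current body frame."""
    dx, dy = target.x - current.x, target.y - current.y
    # Rotate the map-frame displacement into the current body frame.
    c, s = math.cos(-current.theta), math.sin(-current.theta)
    forward = c * dx - s * dy
    lateral = s * dx + c * dy
    # Wrap the heading difference to (-pi, pi].
    rotation = math.atan2(math.sin(target.theta - current.theta),
                          math.cos(target.theta - current.theta))
    return forward, lateral, rotation

def at_target(current: Pose2D, target: Pose2D,
              pos_tol_m: float = 0.01, ang_tol_rad: float = 0.02) -> bool:
    f, l, r = repositioning_offset(current, target)
    return math.hypot(f, l) <= pos_tol_m and abs(r) <= ang_tol_rad

if __name__ == "__main__":
    stored = Pose2D(2.40, 1.10, math.radians(90))  # recorded before moving away
    now = Pose2D(2.10, 0.60, math.radians(75))     # live SLAM estimate
    print(repositioning_offset(now, stored), at_target(now, stored))
```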
Affiliation(s)
- Lukas Bernhard, Research Group MITI, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
- Christopher Völk, Department of Trauma Surgery, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
- Dominik Völk, Department of Trauma Surgery, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
- Florian Rothmeyer, Chair of Materials Handling, Material Flow, Logistics, Technische Universität München, Munich, Germany
- Zhencan Xu, Research Group MITI, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
- Daniel Ostler, Research Group MITI, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
- Peter Biberthaler, Department of Trauma Surgery, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
- Dirk Wilhelm, Research Group MITI and Department of Surgery, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
5. Kausch L, Thomas S, Kunze H, Privalov M, Vetter S, Franke J, Mahnken AH, Maier-Hein L, Maier-Hein K. Toward automatic C-arm positioning for standard projections in orthopedic surgery. Int J Comput Assist Radiol Surg 2020; 15:1095-1105. [PMID: 32533315] [PMCID: PMC8286958] [DOI: 10.1007/s11548-020-02204-0]
Abstract
Purpose Guidance and quality control in orthopedic surgery increasingly rely on intra-operative fluoroscopy using a mobile C-arm. The accurate acquisition of standardized and anatomy-specific projections is essential in this process. The corresponding iterative positioning of the C-arm is error-prone and involves repeated manual acquisitions or even continuous fluoroscopy. To reduce time and radiation exposure for patients and clinical staff and to avoid errors in fracture reduction or implant placement, we aim at guiding, and in the long run automating, this procedure. Methods In contrast to the state of the art, we tackle this inherently ill-posed problem without requiring patient-individual prior information such as preoperative computed tomography (CT) scans, without the need for registration, and without additional technical equipment beyond the projection images themselves. We propose learning the necessary anatomical hints for efficient C-arm positioning from in silico simulations, leveraging large numbers of 3D CTs. Specifically, we propose a convolutional neural network regression model that predicts 5 degrees-of-freedom pose updates directly from a first X-ray image. The method is generalizable to different anatomical regions and standard projections. Results Quantitative and qualitative validation was performed for two clinical applications involving two highly dissimilar anatomies, namely the lumbar spine and the proximal femur. Starting from one initial projection, the mean absolute pose error to the desired standard pose is iteratively reduced across different anatomy-specific standard projections. Acquisitions of both hip joints on 4 cadavers allowed for an evaluation on clinical data, demonstrating that the approach generalizes without retraining. Conclusion Overall, the results suggest the feasibility of an efficient deep learning-based automated positioning procedure trained on simulations. Our proposed 2-stage approach for C-arm positioning significantly improves accuracy on synthetic images. In addition, we demonstrated that learning based on simulations translates to acceptable performance on real X-rays.
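The core of the approach is a regression network that maps a single projection image to a 5-degrees-of-freedom pose update, applied iteratively until the standard projection is reached. Below is a minimal PyTorch sketch of such a regressor and update loop; the architecture, image size, and renderer interface are assumptions for illustration, not the authors' model.

```python
# Minimal sketch of a CNN that regresses a 5-DoF C-arm pose update from one
# X-ray projection, applied iteratively. Architecture, image size, and the
# simulated renderer are assumptions, not the published model.
import torch
import torch.nn as nn

class PoseUpdateNet(nn.Module):
    def __init__(self, dof: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, dof)  # e.g. 2 translations + 3 rotations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def refine_pose(net: PoseUpdateNet, render, pose: torch.Tensor, steps: int = 3):
    """Iteratively apply predicted updates toward the standard projection.

    render(pose) -> simulated X-ray tensor of shape (1, 1, H, W); this stands
    in for the in silico simulation pipeline described in the abstract.
    """
    for _ in range(steps):
        with torch.no_grad():
            pose = pose + net(render(pose)).squeeze(0)
    return pose

if __name__ == "__main__":
    net = PoseUpdateNet()
    fake_render = lambda p: torch.randn(1, 1, 128, 128)  # placeholder renderer
    print(refine_pose(net, fake_render, torch.zeros(5)))
```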
Affiliation(s)
- Lisa Kausch, Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Sarina Thomas, Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Holger Kunze, Imaging and Therapy Systems Division, Siemens Healthineers, Erlangen, Germany
- Maxim Privalov, Medical Imaging and Navigation in Trauma and Orthopedic Surgery Research Group, BG Trauma Center, Ludwigshafen, Germany
- Sven Vetter, Medical Imaging and Navigation in Trauma and Orthopedic Surgery Research Group, BG Trauma Center, Ludwigshafen, Germany
- Jochen Franke, Medical Imaging and Navigation in Trauma and Orthopedic Surgery Research Group, BG Trauma Center, Ludwigshafen, Germany
- Andreas H Mahnken, Division of Diagnostic and Interventional Radiology, Universitätsklinikum Marburg, Marburg, Germany
- Lena Maier-Hein, Division of Computer Assisted Medical Interventions, German Cancer Research Center, Heidelberg, Germany
- Klaus Maier-Hein, Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
6. Unberath M, Zaech JN, Gao C, Bier B, Goldmann F, Lee SC, Fotouhi J, Taylor R, Armand M, Navab N. Enabling machine learning in X-ray-based procedures via realistic simulation of image formation. Int J Comput Assist Radiol Surg 2019; 14:1517-1528. [PMID: 31187399] [PMCID: PMC7297499] [DOI: 10.1007/s11548-019-02011-2]
Abstract
PURPOSE Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and thus unavailable for learning, and even if they were available, annotations would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs since labeling is comparably easy and potentially readily available. METHODS We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCuda. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve and DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS Our findings are consistent across both considered tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]). CONCLUSION Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis to simplify surgical workflows.
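DeepDRR is open source (https://github.com/arcadelab/deepdrr). The usage sketch below is adapted from the project's public README; exact class names and call signatures may differ across versions, so treat it as an assumption-laden outline rather than a verified API reference. The CT file path is hypothetical.

```python
# Usage sketch for the open-source DeepDRR framework, adapted from its public
# README (https://github.com/arcadelab/deepdrr). Class names and signatures
# may vary between versions; the CT file path is hypothetical.
from deepdrr import Volume, MobileCArm
from deepdrr.projector import Projector  # GPU-accelerated (PyCuda) projector

# Load a CT volume; DeepDRR performs learned HU-to-material decomposition
# internally before forward projection and scatter/noise modeling.
volume = Volume.from_nifti("ct_scan.nii.gz")
carm = MobileCArm()

with Projector(volume, carm=carm) as projector:
    # Orbit the virtual C-arm and render a realistic synthetic radiograph.
    carm.move_to(alpha=30.0, beta=10.0, degrees=True)
    drr = projector()  # image array usable as ConvNet training data

print(drr.shape)
```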
Affiliation(s)
- Mathias Unberath, Department of Computer Science, Laboratory for Computational Sensing + Robotics, and Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Jan-Nico Zaech, Laboratory for Computational Sensing + Robotics and Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Cong Gao, Department of Computer Science and Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Bastian Bier, Laboratory for Computational Sensing + Robotics and Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Florian Goldmann, Laboratory for Computational Sensing + Robotics and Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Sing Chun Lee, Department of Computer Science, Laboratory for Computational Sensing + Robotics, and Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Javad Fotouhi, Department of Computer Science, Laboratory for Computational Sensing + Robotics, and Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Russell Taylor, Department of Computer Science and Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand, Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA; Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Nassir Navab, Department of Computer Science, Laboratory for Computational Sensing + Robotics, and Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA