1. Ma Y, He J, Tan D, Han X, Feng R, Xiong H, Peng X, Pu X, Zhang L, Li Y, Chen S. The clinical and imaging data fusion model for single-period cerebral CTA collateral circulation assessment. J Xray Sci Technol 2024;32:953-971. [PMID: 38820061] [DOI: 10.3233/xst-240083]
Abstract
BACKGROUND The Chinese population ranks among the highest globally in stroke prevalence. In the clinical diagnostic process, radiologists use computed tomography angiography (CTA) images to precisely assess collateral circulation in the brains of stroke patients. Recent studies frequently combine imaging and machine learning methods to develop computer-aided diagnostic algorithms. However, in studies of collateral circulation assessment, the extracted imaging features consist primarily of manually designed statistical features, which have significant limitations in representational capacity. Accurately assessing collateral circulation from image features in brain CTA images therefore remains challenging. METHODS To tackle this issue, and given the scarcity of publicly accessible medical datasets, we combined clinical data with imaging data to establish a dataset named RadiomicsClinicCTA. We then devised two collateral circulation assessment models, data-level fusion and feature-level fusion, to exploit the synergistic potential of patients' clinical information and imaging data for a more accurate assessment. To remove redundant features from the dataset, we employed Levene's test and the t-test for feature pre-screening. We then performed feature dimensionality reduction using the LASSO and random forest algorithms and trained classification models with various machine learning algorithms on the data-level fusion dataset after feature engineering. RESULTS Experimental results on the RadiomicsClinicCTA dataset demonstrate that the optimized data-level fusion model achieves accuracy and AUC values exceeding 86%. We then trained and assessed the feature-level fusion classification model, which outperforms the optimized data-level fusion model. Comparative experiments show that the fused dataset differentiates between good and poor collateral circulation better than the pure radiomics dataset. CONCLUSIONS Our study underscores the efficacy of integrating clinical and imaging data through fusion models, significantly enhancing the accuracy of collateral circulation assessment in stroke patients.
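The feature-engineering pipeline this abstract describes lends itself to a short sketch. Below is a minimal, hypothetical Python rendering of the pre-screening and reduction steps (Levene's test choosing the t-test variant, then LASSO selection); the significance threshold, 5-fold CV, and helper names are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of the described pre-screening + LASSO reduction pipeline.
import numpy as np
import pandas as pd
from scipy.stats import levene, ttest_ind
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def prescreen_features(X: pd.DataFrame, y: np.ndarray, alpha: float = 0.05) -> list:
    """Keep features whose distributions differ significantly between classes."""
    keep = []
    for col in X.columns:
        a, b = X.loc[y == 0, col], X.loc[y == 1, col]
        # Levene's test decides whether to assume equal variances in the t-test.
        equal_var = levene(a, b).pvalue >= alpha
        if ttest_ind(a, b, equal_var=equal_var).pvalue < alpha:
            keep.append(col)
    return keep

def lasso_select(X: pd.DataFrame, y: np.ndarray) -> list:
    """Retain features assigned non-zero LASSO coefficients."""
    Xs = StandardScaler().fit_transform(X)
    lasso = LassoCV(cv=5).fit(Xs, y)
    return [c for c, w in zip(X.columns, lasso.coef_) if abs(w) > 1e-8]
```

A classifier would then be trained on the surviving columns, as the abstract's "various machine learning algorithms" step suggests.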
Affiliation(s)
- Yuqi Ma
- College of Computer and Information Science, Southwest University, Chongqing, China
- Jingliu He
- College of Computer and Information Science, Southwest University, Chongqing, China
- Duo Tan
- The Second People's Hospital of Guizhou Province, Guizhou, China
- Xu Han
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Ruiqi Feng
- College of Computer and Information Science, Southwest University, Chongqing, China
- Hailing Xiong
- College of Electronic and Information Engineering, Southwest University, Chongqing, China
- Xihua Peng
- College of Computer and Information Science, Southwest University, Chongqing, China
- Xun Pu
- College of Computer and Information Science, Southwest University, Chongqing, China
- Lin Zhang
- College of Computer and Information Science, Southwest University, Chongqing, China
- Yongmei Li
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Shanxiong Chen
- College of Computer and Information Science, Southwest University, Chongqing, China
- Big Data & Intelligence Engineering School, Chongqing College of International Business and Economics, Chongqing, China
2. Tan D, Liu J, Chen S, Yao R, Li Y, Zhu S, Li L. Automatic Evaluating of Multi-Phase Cranial CTA Collateral Circulation Based on Feature Fusion Attention Network Model. IEEE Trans Nanobioscience 2023;22:789-799. [PMID: 37276106] [DOI: 10.1109/tnb.2023.3283049]
Abstract
Stroke is one of the main causes of disability and death, and it can be divided into hemorrhagic and ischemic stroke. Ischemic stroke is more common: about 8 out of 10 stroke patients suffer from it. In clinical practice, doctors diagnose stroke using computed tomography angiography (CTA) images to accurately evaluate collateral circulation in stroke patients. This imaging information is of great significance in helping doctors determine the patient's treatment plan and prognosis. Artificial intelligence has brought great progress to computer-aided diagnosis in medicine. However, in related research based on deep learning algorithms, researchers usually train on single-phase data only, discarding the temporal information in multi-phase image data. This makes it difficult for a model to learn a comprehensive and effective representation of collateral circulation features, limiting its performance. Combining multi-phase data for training is therefore expected to improve the accuracy and reliability of collateral circulation evaluation. In this study, we propose an effective hybrid mechanism to assist the feature encoding network in evaluating the degree of collateral circulation in the brain. A hybrid attention mechanism provides additional guidance and regularization to enhance the collateral circulation feature representation across multiple phases. The temporal dimension is added to the input, and multiple feature-level fusion modules are designed in the multi-branch network: the first fusion module in the single-phase feature extraction network fuses deep and shallow vessel features within each branch, and a subsequent multi-phase fusion module fuses features across the four phases. Tested on a dataset of multi-phase cranial CTA images, the model achieves an accuracy exceeding 90.43%. The experimental results demonstrate that these modules fully exploit collateral vessel features, improve feature expression capability, and optimize the performance of the deep learning network model.
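As a rough illustration of the kind of feature-level fusion module described, here is a hedged PyTorch sketch that fuses per-phase feature maps with a channel-attention gate; the layer sizes, gating design, and the four-phase assumption are illustrative, not the paper's actual architecture.

```python
# Minimal sketch of multi-phase feature fusion with channel attention.
import torch
import torch.nn as nn

class PhaseFusion(nn.Module):
    """Fuse per-phase feature maps with a learned channel-attention gate."""
    def __init__(self, channels: int, n_phases: int = 4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                     # squeeze spatial dims
            nn.Conv2d(channels * n_phases, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * n_phases, 1),
            nn.Sigmoid(),                                # per-channel gates
        )
        self.project = nn.Conv2d(channels * n_phases, channels, 1)

    def forward(self, phase_feats: list) -> torch.Tensor:
        x = torch.cat(phase_feats, dim=1)   # (B, C*P, H, W)
        x = x * self.attn(x)                # reweight each phase's channels
        return self.project(x)              # fused (B, C, H, W) map

# usage: fused = PhaseFusion(64)([f1, f2, f3, f4]), each f of shape (B, 64, H, W)
```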
3. Phantom study on surgical performance in augmented reality laparoscopy. Int J Comput Assist Radiol Surg 2022. [PMID: 36547767] [PMCID: PMC10363058] [DOI: 10.1007/s11548-022-02809-7]
Abstract
Purpose
Only a few studies have evaluated Augmented Reality (AR) in in vivo simulations compared to traditional laparoscopy; further research is especially needed regarding the most effective AR visualization technique. This pilot study aims to determine, under controlled conditions on a 3D-printed phantom, whether an AR laparoscope improves surgical outcomes over conventional laparoscopy without augmentation.
Methods
We selected six surgical residents at a similar level of training and had them perform a laparoscopic task. The participants repeated the experiment three times, using different 3D phantoms and visualizations: Floating AR, Occlusion AR, and without any AR visualization (Control). Surgical performance was determined using objective measurements. Subjective measures, such as task load and potential application areas, were collected with questionnaires.
Results
Differences in operative time, total touching time, and SurgTLX scores showed no statistical significance (p > 0.05). However, when assessing the invasiveness of the simulated intervention, the comparison revealed a statistically significant difference (p = 0.009). Participants felt AR could be useful for various surgeries, especially for liver, sigmoid, and pancreatic resections (100%). Almost all participants agreed that AR could potentially lead to improved surgical parameters, such as operative time (83%), complication rate (83%), and identifying risk structures (83%).
Conclusion
According to our results, AR may have great potential in visceral surgery and, based on the objective measures of the study, may improve surgeons' performance in terms of an atraumatic approach. In this pilot study, participants in the conventional condition consistently took more time to complete the task, had more contact with the vascular tree, were significantly more invasive, and scored higher on the SurgTLX survey than when working with AR.
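For context, a comparison like the one reported (paired conditions across six residents, with p-values) is commonly run with a paired nonparametric test. The sketch below uses a Wilcoxon signed-rank test on placeholder timing values, not study data; the paper's actual test is not stated here.

```python
# Illustrative paired comparison of control vs. AR task times (placeholder values).
from scipy.stats import wilcoxon

control_times = [212.0, 185.5, 240.1, 198.3, 220.7, 205.2]  # placeholders, seconds
ar_times      = [188.4, 179.0, 231.9, 186.5, 210.2, 196.8]  # placeholders, seconds

stat, p = wilcoxon(control_times, ar_times)  # paired, nonparametric
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")   # "significant" if p < 0.05
```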
4. Durrani S, Onyedimma C, Jarrah R, Bhatti A, Nathani KR, Bhandarkar AR, Mualem W, Ghaith AK, Zamanian C, Michalopoulos GD, Alexander AY, Jean W, Bydon M. The Virtual Vision of Neurosurgery: How Augmented Reality and Virtual Reality are Transforming the Neurosurgical Operating Room. World Neurosurg 2022;168:190-201. [DOI: 10.1016/j.wneu.2022.10.002]
5. Sakano Y, Ando H. Conditions of a Multi-View 3D Display for Accurate Reproduction of Perceived Glossiness. IEEE Trans Vis Comput Graph 2022;28:3336-3350. [PMID: 33651695] [DOI: 10.1109/tvcg.2021.3063182]
Abstract
Visualizing objects as they are perceived in the real world is often critical in our daily experiences. We previously focused on objects' surface glossiness visualized with a 3D display and found that a multi-view 3D display reproduces perceived glossiness more accurately than a 2D display. This improvement of glossiness reproduction can be explained by the fact that a glossy surface visualized by a multi-view 3D display appropriately provides luminance differences between the two eyes and luminance changes accompanying the viewer's lateral head motion. In the present study, to determine the requirements of a multi-view 3D display for the accurate reproduction of perceived glossiness, we developed a simulator of a multi-view 3D display to independently and simultaneously manipulate the viewpoint interval and the magnitude of the optical inter-view crosstalk. Using the simulator, we conducted a psychophysical experiment and found that glossiness reproduction is most accurate when the viewpoint interval is small and there is just a small (but not too small) amount of crosstalk. We proposed a simple yet perceptually valid model that quantitatively predicts the reproduction accuracy of perceived glossiness.
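The crosstalk manipulation the simulator performs can be pictured with a small sketch: each viewpoint's image is mixed with a leakage-weighted fraction of its neighbours. The single symmetric leakage parameter below is an assumption; real displays have more complex crosstalk profiles.

```python
# Hedged sketch of inter-view crosstalk on a multi-view display.
import numpy as np

def apply_crosstalk(views: np.ndarray, leak: float) -> np.ndarray:
    """views: (N, H, W) luminance images for N adjacent viewpoints."""
    out = np.empty_like(views)
    for i in range(len(views)):
        left = views[i - 1] if i > 0 else views[i]
        right = views[i + 1] if i < len(views) - 1 else views[i]
        # Each view keeps most of its light and leaks a fraction to neighbours.
        out[i] = (1 - 2 * leak) * views[i] + leak * (left + right)
    return out
```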
6. Kreiser J, Hermosilla P, Ropinski T. Void Space Surfaces to Convey Depth in Vessel Visualizations. IEEE Trans Vis Comput Graph 2021;27:3913-3925. [PMID: 32406840] [DOI: 10.1109/tvcg.2020.2993992]
Abstract
To enhance depth perception and thus data comprehension, additional depth cues are often used in 3D visualizations of complex vascular structures. A variety of approaches is described in the literature, ranging from chromadepth color coding over depth of field to glyph-based encodings. Unfortunately, the majority of existing approaches suffer from the same problem: because these cues are applied directly to the geometry's surface, they impair the display of additional information on the vessel wall, such as other modalities or derived attributes. To overcome this limitation we propose Void Space Surfaces, which utilize the empty space between vessel branches to communicate depth and relative positioning. This allows us to enhance the depth perception of vascular structures without interfering with the spatial data and potentially superimposed parameter information. In this article, we introduce Void Space Surfaces, describe their technical realization, and show their application to various vessel trees. Moreover, we report the outcome of two user studies conducted to evaluate the perceptual impact of Void Space Surfaces compared with existing vessel visualization techniques, and we discuss expert feedback.
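To make the baseline concrete, here is a small sketch of the chromadepth-style surface coloring the abstract cites as an existing cue (and whose occupation of the vessel wall Void Space Surfaces avoid). The red-to-blue ramp is a common pseudo-chromadepth simplification, assumed here rather than taken from the paper.

```python
# Pseudo-chromadepth: near surface points red, far ones blue.
import numpy as np

def pseudo_chromadepth(depth: np.ndarray) -> np.ndarray:
    """depth: (H, W) eye-space depths; returns (H, W, 3) RGB in [0, 1]."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)  # normalize to [0, 1]
    rgb = np.zeros(depth.shape + (3,))
    rgb[..., 0] = 1.0 - d   # red fades with distance
    rgb[..., 2] = d         # blue grows with distance
    return rgb
```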
7. Meola A, Chang SD. Letter: Navigation-Linked Heads-Up Display in Intracranial Surgery: Early Experience. Oper Neurosurg (Hagerstown) 2019;14:E71-E72. [PMID: 29590481] [DOI: 10.1093/ons/opy048]
Affiliation(s)
- Antonio Meola
- Department of Neurosurgery, Stanford University, Stanford, California
- Steven D Chang
- Department of Neurosurgery, Stanford University, Stanford, California
8. Lichtenberg N, Lawonn K. Auxiliary Tools for Enhanced Depth Perception in Vascular Structures. Adv Exp Med Biol 2019;1138:103-113. [DOI: 10.1007/978-3-030-14227-8_8]
9. Drouin S, DiGiovanni DA, Kersten-Oertel MA, Collins L. Interaction driven enhancement of depth perception in angiographic volumes. IEEE Trans Vis Comput Graph 2018;26:2247-2257. [PMID: 30530366] [DOI: 10.1109/tvcg.2018.2884940]
Abstract
User interaction has the potential to greatly facilitate the exploration and understanding of 3D medical images for diagnosis and treatment. However, in certain specialized environments such as the operating room (OR), technical and physical constraints, such as the need to enforce strict sterility rules, make interaction challenging. In this paper, we propose to facilitate the intraoperative exploration of angiographic volumes by leveraging the motion of a tracked surgical pointer, a tool the surgeon already manipulates when using a navigation system in the OR. We designed and implemented three interactive rendering techniques based on this principle. The benefit of each technique is compared to its non-interactive counterpart in a psychophysics experiment in which 20 medical imaging experts performed a reaching/targeting task while visualizing a 3D volume of angiographic data. The study showed a significant improvement in the appreciation of local vascular structure when using dynamic techniques, with no negative impact on the appreciation of global structure and only a marginal impact on execution speed. A qualitative evaluation of the different techniques showed a preference for dynamic chroma-depth, in accordance with the objective metrics, but a discrepancy between objective and subjective measures for dynamic aerial perspective and shading.
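One plausible reading of the dynamic chroma-depth idea is to re-center the depth-to-color mapping on the tracked pointer's depth, so vasculature near the tool gains contrast. The sketch below is an assumption-laden illustration; the window width and color ramp are not the paper's parameters.

```python
# Depth-to-color mapping re-centered on the tracked pointer (assumed scheme).
import numpy as np

def dynamic_chromadepth(depth: np.ndarray, pointer_depth: float,
                        window: float = 20.0) -> np.ndarray:
    """Map depths within +/- window of the pointer onto a full red-blue ramp."""
    d = np.clip((depth - pointer_depth) / window, -1.0, 1.0)  # [-1, 1]
    t = (d + 1.0) / 2.0                                       # [0, 1]
    rgb = np.zeros(depth.shape + (3,))
    rgb[..., 0] = 1.0 - t  # red for structures in front of the pointer
    rgb[..., 2] = t        # blue for structures behind it
    return rgb
```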
10.
Abstract
Augmented reality technology offers virtual information in addition to that of the real environment and thus opens new possibilities in various fields. The medical applications of augmented reality are generally concentrated on surgery, including neurosurgery, laparoscopic surgery and plastic surgery. Augmented reality technology is also widely used in medical education and training. In dentistry, oral and maxillofacial surgery is the primary area of use, where dental implant placement and orthognathic surgery are the most frequent applications. Recent technological advancements are enabling new applications in restorative dentistry, orthodontics and endodontics. This review briefly summarizes the history, definitions, features, and components of augmented reality technology and discusses its applications and future perspectives in dentistry.
Affiliation(s)
- Ho-Beom Kwon
- Department of Prosthodontics, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
- Young-Seok Park
- Department of Oral Medicine and Oral Diagnosis, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
- Jung-Suk Han
- Department of Prosthodontics, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
11. Mewes A, Heinrich F, Hensen B, Wacker F, Lawonn K, Hansen C. Concepts for augmented reality visualisation to support needle guidance inside the MRI. Healthc Technol Lett 2018;5:172-176. [PMID: 30464849] [PMCID: PMC6222244] [DOI: 10.1049/htl.2018.5076]
Abstract
During MRI-guided interventions, navigation support is often separated from the operating field on displays, which impedes the interpretation of positions and orientations of instruments inside the patient's body as well as hand–eye coordination. To overcome these issues, projector-based augmented reality can be used to support needle guidance inside the MRI bore directly in the operating field. The authors present two visualisation concepts for needle navigation aids which were compared in an accuracy and usability study with eight participants, four of whom were experienced radiologists. The results show that both concepts are equally accurate (2.0 ± 0.6 and 1.7 ± 0.5 mm), useful and easy to use, with clear visual feedback about the state and success of the needle puncture. For easier clinical applicability, a dynamic projection on moving surfaces and organ movement tracking are needed. For now, tests with patients with respiratory arrest are feasible.
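As a side note on how figures like "2.0 ± 0.6 mm" are typically derived, here is a minimal sketch: the mean and sample standard deviation of Euclidean distances between planned targets and measured needle tips. The coordinates below are placeholders, not study data.

```python
# Placement accuracy as mean +/- SD of Euclidean tip errors (placeholder data).
import numpy as np

planned = np.array([[10.0, 42.5, 7.0], [11.2, 40.1, 6.5]])   # planned targets (mm)
reached = np.array([[11.5, 41.8, 7.9], [12.8, 39.0, 7.3]])   # measured tips (mm)

errors = np.linalg.norm(reached - planned, axis=1)           # per-insertion error
print(f"{errors.mean():.1f} +/- {errors.std(ddof=1):.1f} mm")
```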
Affiliation(s)
- André Mewes
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Germany
- Research Campus STIMULATE, Otto-von-Guericke University Magdeburg, Germany
- Florian Heinrich
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Germany
- Research Campus STIMULATE, Otto-von-Guericke University Magdeburg, Germany
- Bennet Hensen
- Research Campus STIMULATE, Otto-von-Guericke University Magdeburg, Germany
- Institute of Diagnostic and Interventional Radiology, Hanover Medical School, Germany
- Frank Wacker
- Research Campus STIMULATE, Otto-von-Guericke University Magdeburg, Germany
- Institute of Diagnostic and Interventional Radiology, Hanover Medical School, Germany
- Kai Lawonn
- Faculty of Computer Science, University of Koblenz-Landau, Germany
- Christian Hansen
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Germany
- Research Campus STIMULATE, Otto-von-Guericke University Magdeburg, Germany
12.
Abstract
Direct volume rendering has become an essential tool for exploring and analysing 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, and provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open-source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution in which critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering-effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those of a widely adopted direct volume rendering implementation in VTK, at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with five medical imaging experts who have little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate the development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.
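The replaceable "ray integration" step referred to here is, in most ray casters, a front-to-back emission-absorption loop. The Python sketch below shows that loop in the abstract's terms; it is a generic illustration of the technique, not PRISM's API (PRISM expresses such blocks as GPU shader code).

```python
# Front-to-back emission-absorption compositing along one ray.
import numpy as np

def integrate_ray(samples: np.ndarray) -> np.ndarray:
    """samples: (N, 4) RGBA values sampled along one ray, front to back."""
    color = np.zeros(3)
    alpha = 0.0
    for rgba in samples:
        rgb, a = rgba[:3], rgba[3]
        color += (1.0 - alpha) * a * rgb   # weight by remaining transparency
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                   # early ray termination
            break
    return np.append(color, alpha)
```

Swapping the body of this loop (e.g. for gradient shading or maximum intensity projection) is exactly the kind of substitution a programmable ray-integration stage enables.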
13. Gerard IJ, Kersten-Oertel M, Drouin S, Hall JA, Petrecca K, De Nigris D, Di Giovanni DA, Arbel T, Collins DL. Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases. J Med Imaging (Bellingham) 2018;5:021210. [PMID: 29392162] [DOI: 10.1117/1.jmi.5.2.021210]
Abstract
We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction and augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3D) patient-specific models of the patient's cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasibility of combining these two technologies and their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and AR were reduced in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR to become a useful tool for neurosurgeons, improving intraoperative patient-specific planning by improving the understanding of complex 3D medical imaging data and prolonging the reliable use of IGNS.
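Registration of preoperative models to intraoperative data, mentioned above, is often bootstrapped from corresponding landmarks with a closed-form rigid fit. Below is a hedged sketch of the standard Kabsch/Procrustes solution; the paper's actual ultrasound-driven brain shift correction is more sophisticated than this.

```python
# Closed-form rigid registration (Kabsch) between corresponding point sets.
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| for (N, 3) point sets."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
    t = dc - R @ sc
    return R, t
```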
Affiliation(s)
- Ian J Gerard
- McGill University, Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, Montreal, Québec, Canada
- Marta Kersten-Oertel
- Concordia University, PERFORM Centre, Department of Computer Science and Software Engineering, Montreal, Québec, Canada
- Simon Drouin
- McGill University, Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, Montreal, Québec, Canada
- Jeffery A Hall
- McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada
- Kevin Petrecca
- McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada
- Dante De Nigris
- McGill University, Centre for Intelligent Machines, Department of Electrical and Computer Engineering, Montreal, Québec, Canada
- Daniel A Di Giovanni
- McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada
- Tal Arbel
- McGill University, Centre for Intelligent Machines, Department of Electrical and Computer Engineering, Montreal, Québec, Canada
- D Louis Collins
- McGill University, Montreal Neurological Institute and Hospital, Department of Biomedical Engineering, Montreal, Québec, Canada
- McGill University, Montreal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, Montreal, Québec, Canada
- McGill University, Centre for Intelligent Machines, Department of Electrical and Computer Engineering, Montreal, Québec, Canada
14. Gao Y, Li J, Li J, Wang S. Modeling the convergence accommodation of stereo vision for binocular endoscopy. Int J Med Robot 2017;14. [PMID: 29052314] [DOI: 10.1002/rcs.1866]
Abstract
BACKGROUND The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). METHODS A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. RESULTS Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. CONCLUSIONS This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS.
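The vergence geometry such a convergence-accommodation scheme must satisfy reduces to a simple relation: for a camera baseline b and a fixation target at distance d, the two optical axes must converge by 2·atan(b / 2d). A small illustrative helper follows; the numeric values are assumptions, not the paper's parameters.

```python
# Full convergence angle for symmetric fixation on a target at distance d.
import math

def convergence_angle(baseline_mm: float, target_dist_mm: float) -> float:
    """Angle (degrees) between the two optical axes at the fixation target."""
    return math.degrees(2.0 * math.atan2(baseline_mm / 2.0, target_dist_mm))

print(convergence_angle(4.0, 50.0))  # e.g. 4 mm stereo baseline, 50 mm target
```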
Affiliation(s)
- Yuanqian Gao
- School of Mechanical Engineering, Tianjin University, China
- Jinhua Li
- School of Mechanical Engineering, Tianjin University, China
- Jianmin Li
- School of Mechanical Engineering, Tianjin University, China
- Shuxin Wang
- School of Mechanical Engineering, Tianjin University, China
|
15
|
Batmaz AU, de Mathelin M, Dresp-Langley B. Seeing virtual while acting real: Visual display and strategy effects on the time and precision of eye-hand coordination. PLoS One 2017; 12:e0183789. [PMID: 28859092 PMCID: PMC5578485 DOI: 10.1371/journal.pone.0183789] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2016] [Accepted: 08/11/2017] [Indexed: 11/18/2022] Open
Abstract
Effects of different visual displays on the time and precision of bare-handed or tool-mediated eye-hand coordination were investigated in a pick-and-place task with complete novices. All of them scored well above average in spatial perspective-taking ability and performed the task with their dominant hand. Two groups of novices, four men and four women in each group, had to place a small object in a precise order on the centre of five targets on a Real-world Action Field (RAF), as swiftly and as precisely as possible, using a tool or not (control). Each individual session consisted of four visual display conditions, with the order of conditions counterbalanced between individuals and sessions. Subjects looked at what their hands were doing 1) directly in front of them ("natural" top-down view), 2) in a top-down 2D fisheye view, 3) in a top-down undistorted 2D view, or 4) in a 3D stereoscopic top-down view (head-mounted OCULUS DK 2). Object movements in all image conditions were matched to the real-world movements in time and space. One group viewed the 2D images with the monitor positioned sideways (sub-optimal); the other group viewed the monitor placed straight ahead of them (near-optimal). All image viewing conditions had significantly detrimental effects on the time (seconds) and precision (pixels) of task execution compared with "natural" direct viewing. More importantly, we find significant trade-offs between time and precision between and within groups, and significant interactions between viewing conditions and manipulation conditions. The results shed new light on controversial findings on visual display effects on eye-hand coordination and lead to the conclusion that differences in camera systems and the adaptive strategies of novices are likely to explain them.
Affiliation(s)
- Anil U. Batmaz
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Michel de Mathelin
- ICube Lab Robotics Department, University of Strasbourg, 1 Place de l'Hôpital, Strasbourg, France
- Birgitta Dresp-Langley
- ICube Lab Cognitive Science Department, Centre National de la Recherche Scientifique, 1 Place de l'Hôpital, Strasbourg, France
|
16
|
Cutolo F, Meola A, Carbone M, Sinceri S, Cagnazzo F, Denaro E, Esposito N, Ferrari M, Ferrari V. A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom. Comput Assist Surg (Abingdon) 2017; 22:39-53. [PMID: 28754068 DOI: 10.1080/24699322.2017.1358400] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Affiliation(s)
- Fabrizio Cutolo
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Information Engineering, University of Pisa, Pisa, Italy
- Antonio Meola
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marina Carbone
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Sara Sinceri
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Ennio Denaro
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Nicola Esposito
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Mauro Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Vascular Surgery, Pisa University Medical School, Pisa, Italy
- Vincenzo Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Information Engineering, University of Pisa, Pisa, Italy
17. Lind AJ, Bruckner S. Comparing Cross-Sections and 3D Renderings for Surface Matching Tasks Using Physical Ground Truths. IEEE Trans Vis Comput Graph 2017;23:781-790. [PMID: 27875192] [DOI: 10.1109/tvcg.2016.2598602]
Abstract
Within the visualization community there are some well-known techniques for visualizing 3D spatial data and some general assumptions about how perception affects the performance of these techniques in practice. However, there is a lack of empirical research backing up the possible performance differences among the basic techniques for general tasks. One such assumption is that 3D renderings are better for obtaining an overview, whereas cross sectional visualizations such as the commonly used Multi-Planar Reformation (MPR) are better for supporting detailed analysis tasks. In the present study we investigated this common assumption by examining the difference in performance between MPR and 3D rendering for correctly identifying a known surface. We also examined whether prior experience working with image data affects the participant's performance, and whether there was any difference between interactive or static versions of the visualizations. Answering this question is important because it can be used as part of a scientific and empirical basis for determining when to use which of the two techniques. An advantage of the present study compared to other studies is that several factors were taken into account to compare the two techniques. The problem was examined through an experiment with 45 participants, where physical objects were used as the known surface (ground truth). Our findings showed that: 1. The 3D renderings largely outperformed the cross sections; 2. Interactive visualizations were partially more effective than static visualizations; and 3. The high experience group did not generally outperform the low experience group.
18. Wang X, Habert S, Zu Berge CS, Fallavollita P, Navab N. Inverse visualization concept for RGB-D augmented C-arms. Comput Biol Med 2016;77:135-47. [PMID: 27544070] [DOI: 10.1016/j.compbiomed.2016.08.008]
Abstract
X-ray remains the essential imaging modality for many minimally invasive interventions. Overlaying X-ray images with an optical view of the surgical scene has been demonstrated to be an efficient way to reduce radiation exposure and surgery time. However, clinicians are advised to place the X-ray source under the patient table, while the optical view of the real scene must be captured from the top in order to see the patient, surgical tools, and the surgical site. With the help of an RGB-D (red-green-blue-depth) camera, which measures depth in addition to color, the 3D model of the real scene is registered to the X-ray image. However, fusing two opposing viewpoints and visualizing them in the context of medical applications had never been attempted. In this paper, we report first experiences with a novel inverse visualization technique for RGB-D augmented C-arms. A user study with 16 participants demonstrated that our method produces a meaningful visualization with the potential to provide clinicians multi-modal fused data in real time during surgery.
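The registration step above starts from the RGB-D depth image; a standard first move is pinhole back-projection into a camera-space point cloud. Here is a minimal sketch assuming known camera intrinsics (fx, fy, cx, cy), which are not given in the abstract.

```python
# Pinhole back-projection of a depth image into a 3D point cloud.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """depth: (H, W) in metres; returns (H*W, 3) camera-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```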
Affiliation(s)
- Xiang Wang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Computer Aided Medical Procedures, Technische Universität München, Germany
- Severine Habert
- Computer Aided Medical Procedures, Technische Universität München, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Germany
- Johns Hopkins University, Baltimore, MD, USA
19. Drouin S, Kochanowska A, Kersten-Oertel M, Gerard IJ, Zelmann R, De Nigris D, Bériault S, Arbel T, Sirhan D, Sadikot AF, Hall JA, Sinclair DS, Petrecca K, DelMaestro RF, Collins DL. IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg 2016;12:363-378. [DOI: 10.1007/s11548-016-1478-0]
20. Augmented reality in neurosurgery: a systematic review. Neurosurg Rev 2016;40:537-548. [PMID: 27154018] [DOI: 10.1007/s10143-016-0732-9]
Abstract
Neuronavigation has become an essential neurosurgical tool in pursuing minimal invasiveness and maximal safety, even though it has several technical limitations. Augmented reality (AR) neuronavigation is a significant advance, providing a real-time updated 3D virtual model of anatomical details overlaid on the real surgical field. Currently, only a few AR systems have been tested in a clinical setting, and the aim of this study is to review them. We performed a PubMed search of reports restricted to human studies of in vivo applications of AR in any neurosurgical procedure, using the search terms "Augmented reality" and "Neurosurgery." Eligibility assessment was performed independently by two reviewers in an unblinded, standardized manner. The systems were qualitatively evaluated on the basis of the following: neurosurgical subspecialty of application, pathology of treated lesions and lesion locations, real data source, virtual data source, tracking modality, registration technique, visualization processing, display type, and perception location. Eighteen studies published between 1996 and September 30, 2015 were included. The AR systems were grouped by real data source: microscope (8), hand- or head-held cameras (4), direct patient view (2), endoscope (1), X-ray fluoroscopy (1), and head-mounted display (1). A total of 195 lesions were treated: 75 (38.46%) were neoplastic, 77 (39.48%) neurovascular, 1 (0.51%) hydrocephalus, and 42 (21.53%) undetermined. The current literature confirms that AR is a reliable and versatile tool for minimally invasive approaches in a wide range of neurosurgical diseases, although prospective randomized studies are not yet available and technical improvements are needed.
21. Marino J, Kaufman A. Planar Visualization of Treelike Structures. IEEE Trans Vis Comput Graph 2016;22:906-915. [PMID: 26529735] [DOI: 10.1109/tvcg.2015.2467413]
Abstract
We present a novel method to create planar visualizations of treelike structures (e.g., blood vessels and airway trees) where the shape of the object is well preserved, allowing for easy recognition by users familiar with the structures. Based on the extracted skeleton within the treelike object, a radial planar embedding is first obtained such that there are no self-intersections of the skeleton which would have resulted in occlusions in the final view. An optimization procedure which adjusts the angular positions of the skeleton nodes is then used to reconstruct the shape as closely as possible to the original, according to a specified view plane, which thus preserves the global geometric context of the object. Using this shape recovered embedded skeleton, the object surface is then flattened to the plane without occlusions using harmonic mapping. The boundary of the mesh is adjusted during the flattening step to account for regions where the mesh is stretched over concavities. This parameterized surface can then be used either as a map for guidance during endoluminal navigation or directly for interrogation and decision making. Depth cues are provided with a grayscale border to aid in shape understanding. Examples are presented using bronchial trees, cranial and lower limb blood vessels, and upper aorta datasets, and the results are evaluated quantitatively and with a user study.
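The radial planar embedding step can be pictured with a toy layout: place each skeleton node at a radius equal to its tree depth, inside an angular wedge sized by its leaf count, which keeps sibling branches from crossing. This sketch is a simplification under those assumptions; it omits the paper's shape-recovery optimization and harmonic surface flattening.

```python
# Toy radial embedding of a tree skeleton: radius = depth, wedge = leaf share.
import math

def count_leaves(tree: dict, node) -> int:
    kids = tree.get(node, [])
    return 1 if not kids else sum(count_leaves(tree, k) for k in kids)

def radial_layout(tree: dict, root, a0=0.0, a1=2 * math.pi, depth=0, pos=None):
    """tree: node -> list of children; returns node -> (x, y) positions."""
    if pos is None:
        pos = {}
    mid = (a0 + a1) / 2.0
    pos[root] = (depth * math.cos(mid), depth * math.sin(mid))
    kids = tree.get(root, [])
    leaves = [count_leaves(tree, k) for k in kids]
    total, start = sum(leaves), a0
    for k, nl in zip(kids, leaves):
        span = (a1 - a0) * nl / total   # wedge proportional to subtree size
        radial_layout(tree, k, start, start + span, depth + 1, pos)
        start += span
    return pos

# usage: radial_layout({"r": ["a", "b"], "a": ["c"]}, "r")
```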
22. Kersten-Oertel M, Gerard I, Drouin S, Mok K, Sirhan D, Sinclair DS, Collins DL. Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg 2015;10:1823-36. [DOI: 10.1007/s11548-015-1163-8]