1
Tanbeer SK, Sykes ER. MiVitals - Mixed Reality Interface for Vitals Monitoring: A HoloLens-based prototype for healthcare practices. Comput Struct Biotechnol J 2024; 24:160-175. [PMID: 39803334] [PMCID: PMC11724764] [DOI: 10.1016/j.csbj.2024.02.024]
Abstract
In this paper, we introduce MiVitals, a Mixed Reality (MR) system designed for healthcare professionals to monitor patients in wards or clinics. We detail the design, development, and evaluation of MiVitals, which integrates real-time vital signs from a biosensor-equipped wearable, Vitaliti™. The system generates holographic visualizations, allowing healthcare professionals to interact with medical charts and information panels holographically. These visualizations display vital signs, trends, other significant physiological signals, and medical early warning scores in a comprehensive manner. We conducted a User Interface/User Experience (UI/UX) study focusing on novel holographic visualizations and interfaces that present medical information intuitively. This approach brings traditional bedside medical information to life in the real environment through non-contact 3D images, supporting rapid decision-making and the detection of vital-sign patterns and anomalies, and enhancing clinicians' performance in wards. Additionally, we present findings from a usability study involving medical doctors and healthcare practitioners to assess MiVitals' efficacy. The System Usability Scale study yielded a score of 84, indicating that MiVitals has high usability.
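The SUS figure quoted above follows Brooke's standard scoring rule: each of the ten five-point items contributes (response − 1) if odd-numbered and (5 − response) if even-numbered, and the raw sum is scaled by 2.5 onto a 0-100 range. A minimal sketch of that computation (the response sheet below is hypothetical, not data from the study):

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire (1-5 Likert).

    Odd-numbered items (1-indexed) are positively worded: contribute r - 1.
    Even-numbered items are negatively worded: contribute 5 - r.
    The raw sum (0-40) is scaled by 2.5 onto the 0-100 SUS range.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-indexed
              for i, r in enumerate(responses))
    return raw * 2.5

# Hypothetical single-participant response sheet, not data from the study:
print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))   # -> 87.5
```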
Affiliation(s)
- Syed K Tanbeer
- Centre for Mobile Innovation (CMI), Sheridan College, Oakville, Ontario, Canada
2
Chen L, Zhang F, Zhan W, Gan M, Sun L. Optimization of virtual and real registration technology based on augmented reality in a surgical navigation system. Biomed Eng Online 2020; 19:1. [PMID: 31915014] [PMCID: PMC6950982] [DOI: 10.1186/s12938-019-0745-z]
Abstract
Background: Surgical navigation systems have become essential tools that enable doctors to perform complex operations accurately and safely. However, the traditional navigation interface was intended only for two-dimensional observation and does not display the full spatial information of the lesion area. Moreover, the image navigation interface is separated from the operating area, so the doctor needs to switch the field of vision between the screen and the patient's lesion area. In this paper, augmented reality (AR) technology was applied to spinal surgery to provide surgeons with more intuitive information, and the accuracy of virtual and real registration was improved through research on AR technology. During the operation, the doctor could observe the AR image and the true shape of the internal spine through the skin.
Methods: To improve the accuracy of virtual and real registration, a registration technique based on an improved identification method and a robot-assisted method was proposed. The experimental procedure was optimized using the improved identification method, and X-ray images were used to verify the effectiveness of the punctures performed by the robot.
Results: The average accuracy of virtual and real registration based on the general identification method was 9.73 ± 0.46 mm (range 8.90-10.23 mm), while that based on the improved identification method was 3.54 ± 0.13 mm (range 3.36-3.73 mm), an improvement of approximately 65%. The highest accuracy of virtual and real registration based on the robot-assisted method was 2.39 mm, an improvement of approximately 28.5% over the improved identification method.
Conclusion: The experimental results show that the two optimized methods are highly effective. The proposed AR navigation system has high accuracy and stability, and it may have value in future spinal surgeries.
Affiliation(s)
- Long Chen
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China
- Fengfeng Zhang
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
- Wei Zhan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Minfeng Gan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Lining Sun
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
3
HoloLens-Based AR System with a Robust Point Set Registration Algorithm. Sensors (Basel) 2019; 19:3555. [PMID: 31443237] [PMCID: PMC6721543] [DOI: 10.3390/s19163555]
Abstract
By the standard of today's image-guided surgery (IGS) technology, surgeons must still occasionally divert their attention from the patient to check and verify the progress of the surgery against a display. In this paper, a mixed reality system for medical use is proposed that combines an Intel RealSense sensor with Microsoft's HoloLens head-mounted display to superimpose medical data onto the physical surface of a patient, so that surgeons do not need to divert their attention from their patients. The main idea of the proposed system is to display the 3D medical images of patients on the patients themselves by placing the medical images and the patients in the same coordinate space. However, the virtual medical data may contain noise and outliers, so the transformation mapping function must be able to handle these problems. The transformation in this system is performed by the proposed Denoised-Resampled-Weighted-and-Perturbed Iterative Closest Points (DRWP-ICP) algorithm, which denoises the data and removes outliers before aligning the preoperative medical image data points to the patient's physical surface position; the result is then displayed on the Microsoft HoloLens. The experimental results show that the proposed mixed reality system using DRWP-ICP is capable of accurate and robust mapping despite the presence of noise and outliers.
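The full DRWP-ICP algorithm is described in the paper itself; as a rough illustration of the point-to-point ICP core it builds on (nearest-neighbour correspondence alternated with a closed-form rigid fit via SVD), here is a minimal numpy sketch. The statistical outlier filter stands in for the paper's denoising and outlier-removal stages and is an assumption, not the authors' method:

```python
import numpy as np

def remove_outliers(pts, k=8, std_ratio=2.0):
    """Generic statistical outlier filter (a stand-in for DRWP-ICP's
    denoising stage): drop points whose mean distance to their k nearest
    neighbours exceeds the cloud average by std_ratio standard deviations."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self-distance
    return pts[knn < knn.mean() + std_ratio * knn.std()]

def best_rigid(src, dst):
    """Closed-form least-squares rotation and translation (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Plain point-to-point ICP: alternate nearest-neighbour matching in
    dst with a closed-form rigid re-fit, returning the final (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = src @ R.T + t
        nn = dst[np.argmin(np.linalg.norm(
            cur[:, None, :] - dst[None, :, :], axis=-1), axis=1)]
        R, t = best_rigid(src, nn)
    return R, t
```

In a system like the one described, src and dst would be (N, 3) arrays from the preoperative model and the RealSense depth stream; at realistic point counts a k-d tree would replace the brute-force nearest-neighbour search.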
4
Wu ML, Chien JC, Wu CT, Lee JD. An Augmented Reality System Using Improved-Iterative Closest Point Algorithm for On-Patient Medical Image Visualization. Sensors (Basel) 2018; 18:2505. [PMID: 30071645] [PMCID: PMC6111829] [DOI: 10.3390/s18082505]
Abstract
In many surgery-assistance systems, cumbersome equipment or complicated algorithms are often introduced to build the whole system. To build a system without either, and to give physicians the ability to observe the location of the lesion during surgery, an augmented reality approach to image-guided surgery (IGS) using an improved alignment method is proposed. The system uses an RGB-depth sensor in conjunction with the Point Cloud Library (PCL) to build the patient's head-surface information and, through the improved alignment algorithm proposed in this study, places the preoperative medical imaging information in the same world-coordinate system as the patient's head surface. The traditional alignment method, Iterative Closest Point (ICP), has the disadvantage that an ill-chosen starting position results only in a locally optimal solution. The proposed improved alignment algorithm, named improved-ICP (I-ICP), uses a stochastic perturbation technique to escape from locally optimal solutions and reach the globally optimal solution. After alignment, the results are merged and displayed using Microsoft's HoloLens head-mounted display (HMD), allowing the surgeon to view the patient's head and the patient's medical images at the same time. In this study, experiments were performed using spatial reference points with known positions. The experimental results show that the proposed improved alignment algorithm has errors bounded within 3 mm, which is highly accurate.
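The stochastic-perturbation idea reported for I-ICP can be sketched as a restart loop around any base ICP routine (for instance, the sketch under entry 3): perturb the starting pose randomly, run ICP, and keep the result with the lowest residual. The perturbation magnitudes and scoring below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def random_pose(max_angle=0.3, max_shift=20.0, rng=None):
    """Draw a small random rigid perturbation: an axis-angle rotation of up
    to max_angle radians and a translation of up to max_shift (units are
    illustrative), built with the Rodrigues formula."""
    rng = rng if rng is not None else np.random.default_rng()
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    a = rng.uniform(-max_angle, max_angle)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)
    return R, rng.uniform(-max_shift, max_shift, size=3)

def perturbed_icp(src, dst, base_icp, restarts=10, seed=0):
    """Run base_icp (any routine returning (R, t), e.g. the entry-3 sketch)
    from several randomly perturbed starting poses and keep the pose with
    the lowest mean nearest-neighbour residual."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(restarts):
        Rp, tp = random_pose(rng=rng)
        R, t = base_icp(src @ Rp.T + tp, dst)   # align the perturbed source
        R, t = R @ Rp, R @ tp + t               # fold the perturbation back in
        err = np.mean(np.min(np.linalg.norm(
            (src @ R.T + t)[:, None, :] - dst[None, :, :], axis=-1), axis=1))
        if err < best_err:
            best, best_err = (R, t), err
    return best, best_err
```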
Affiliation(s)
- Ming-Long Wu
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan.
- Jong-Chih Chien
- Degree Program of Digital Space and Product Design, Kainan University, Taoyuan 333, Taiwan.
- Chieh-Tsai Wu
- Department of Neurosurgery, Chang Gung Memorial Hospital, LinKou, Taoyuan 333, Taiwan.
- Jiann-Der Lee
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan.
- Department of Neurosurgery, Chang Gung Memorial Hospital, LinKou, Taoyuan 333, Taiwan.
- Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City 24301, Taiwan.
5
Visual Interface Evaluation for Wearables Datasets: Predicting the Subjective Augmented Vision Image QoE and QoS. Future Internet 2017. [DOI: 10.3390/fi9030040]
6
Seeling P. Augmented Reality Device Operator Cognitive Strain Determination and Prediction. AIMS Electronics and Electrical Engineering 2017. [DOI: 10.3934/electreng.2017.1.100]
7
Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study. Int J Oral Sci 2013; 5:98-102. [PMID: 23703710] [PMCID: PMC3707071] [DOI: 10.1038/ijos.2013.26]
Abstract
The aim of this study was to evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, derived from the computed tomography data, were used to create integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (an integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected onto the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer, and the accuracy of the system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the position of the patient and the surgical instrument, so that integral videography images of the jawbones, teeth and surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed, and a change in viewing angle did not impair the surgeon's ability to simultaneously observe the three-dimensional images and the patient without special glasses. The difference in the three-dimensional position of each measuring point between the solid model and the augmented reality navigation was almost negligible (<1 mm), indicating that the system is highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
8
Projection-based visual guidance for robot-aided RF needle insertion. Int J Comput Assist Radiol Surg 2013; 8:1015-1025. [DOI: 10.1007/s11548-013-0897-4]
9
A Non-contact Image-to-Patient Registration Method Using Kinect Sensor and WAP-ICP. Studies in Computational Intelligence 2013. [DOI: 10.1007/978-3-642-32172-6_8]
10
Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph 2012; 18:332-352. [PMID: 21383411] [DOI: 10.1109/TVCG.2011.50]
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced into daily use in the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy, which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness, and we demonstrate its utility by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that the components of few IGS visualization systems have been validated, and even fewer have been evaluated.
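For readers surveying systems against the DVV taxonomy, the three components named in the abstract can be recorded as a simple structured record per system. A toy sketch with hypothetical field values (the vocabularies are illustrative, not the taxonomy's actual category lists):

```python
from dataclasses import dataclass

@dataclass
class DVVClassification:
    """One row of a DVV-style survey: the Data, Visualization processing,
    and View components of a mixed reality IGS system, plus whether the
    components were validated. Field vocabularies are illustrative only."""
    system: str
    data: list[str]                      # what is displayed
    visualization_processing: list[str]  # how it is transformed for display
    view: list[str]                      # where and how it is presented
    validated: bool = False

# Hypothetical example entry, not one of the paper's 17 classified systems:
example = DVVClassification(
    system="hypothetical AR overlay prototype",
    data=["preoperative CT surface", "tracked tool pose"],
    visualization_processing=["surface rendering", "rigid registration"],
    view=["optical see-through HMD"],
)
```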
Affiliation(s)
- Marta Kersten-Oertel
- McConnell Brain Imaging Centre at the Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada.
11
Tran HH, Suenaga H, Kuwana K, Masamune K, Dohi T, Nakajima S, Liao H. Augmented reality system for oral surgery using 3D auto-stereoscopic visualization. Med Image Comput Comput Assist Interv 2011; 14:81-88. [PMID: 22003603] [DOI: 10.1007/978-3-642-23623-5_11]
Abstract
In this paper, we present an augmented reality system for oral and maxillofacial surgery. Instead of being displayed on a separate screen, three-dimensional (3D) virtual presentations of osseous structures and soft tissues are projected onto the patient's body, providing surgeons with exact depth information about high-risk tissues inside the bone. We employ a 3D integral imaging technique that produces motion parallax in both the horizontal and vertical directions over a wide viewing area. In addition, surgeons are able to check the progress of the operation in real time through an intuitive, content-rich, hardware-accelerated 3D interface. These features prevent surgeons from penetrating high-risk areas and thus help improve the quality of the operation. Operational tasks such as hole drilling and screw fixation were performed using our system and showed an overall positional error of less than 1 mm. The feasibility of our system was also verified in a human volunteer experiment.
12
Lee JD, Huang CH, Wang ST, Lin CW, Lee ST. Fast-MICP for frameless image-guided surgery. Med Phys 2010; 37:4551-4559. [PMID: 20964172] [DOI: 10.1118/1.3470097]
Abstract
Purpose: In image-guided surgery (IGS) systems, image-to-physical registration is critical for reliable anatomical information mapping and spatial guidance. Conventional stereotactic frame-based or fiducial-based approaches provide accurate registration but are not patient-friendly. This study proposes a frameless cranial IGS system that uses computer vision techniques to replace the frame or fiducials with the natural features of the patient.
Methods: To perform cranial surgery with the proposed system, the facial surface of the patient is first reconstructed by stereo vision; accuracy is ensured by capturing parallel-line patterns projected from a calibrated LCD projector. Meanwhile, another facial surface is reconstructed from preoperative computed tomography (CT) images of the patient. The proposed iterative closest point (ICP)-based algorithm, fast marker-added ICP (Fast-MICP), is then used to register the two facial data sets, which transfers the anatomical information from the CT images to the physical space.
Results: Experimental results reveal that the Fast-MICP algorithm reduces the computational cost of marker-added ICP (J.-D. Lee et al., "A coarse-to-fine surface registration algorithm for frameless brain surgery," in Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, pp. 836-839) to 10% while achieving comparable registration accuracy, with a target registration error (TRE) under 3 mm. Moreover, two types of optical spatial digitizing devices can be integrated for further surgical navigation, and anatomical information or image-guided surgical landmarks can be projected onto the patient to obtain an immersive augmented reality environment.
Conclusion: The proposed frameless IGS system with stereo vision obtains a TRE of less than 3 mm, and the proposed Fast-MICP registration algorithm reduces registration time by 90% without compromising accuracy.
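Target registration error, the figure of merit quoted here, is the distance between corresponding target points after the estimated image-to-physical transform is applied. A minimal sketch under an assumed (R, t) rigid-transform convention, with synthetic points:

```python
import numpy as np

def target_registration_error(R, t, image_targets, physical_targets):
    """Mean Euclidean distance (e.g. in mm) between physical target points
    and image-space targets mapped through the estimated rigid transform."""
    mapped = image_targets @ R.T + t
    return np.linalg.norm(mapped - physical_targets, axis=1).mean()

# Synthetic sanity check: a perfectly recovered pose gives TRE == 0.
rng = np.random.default_rng(1)
targets = rng.uniform(-50.0, 50.0, size=(5, 3))       # image-space targets, mm
R_est, t_est = np.eye(3), np.array([1.0, -2.0, 0.5])  # assumed estimated pose
print(target_registration_error(R_est, t_est, targets,
                                targets @ R_est.T + t_est))  # -> 0.0
```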
Affiliation(s)
- Jiann-Der Lee
- Department of Electrical Engineering, Chang Gung University, Tao-Yuan 333, Taiwan