1. The Virtual Reality Radiology Workstation: Current Technology and Future Applications. Can Assoc Radiol J 2024:8465371241230278. PMID: 38362857. DOI: 10.1177/08465371241230278.
Abstract
Virtual reality (VR) and augmented reality (AR) technology hold potential across many disciplines in medicine to expand the delivery of education and healthcare. VR-AR applications in radiology, in particular, have gained prominence and have demonstrated advantages in many areas within the field. Recently, VR software has emerged to redesign the physical radiology workstation (ie, reading room) to expand the possibilities of diagnostic interpretation. Given the novelty of this technology, there is limited research investigating the potential applications of a simulated radiology workstation. In this review article, we explore VR-simulated reading room technology in its current form and illustrate the practical applications this technology will bring to future radiologists and learners. We also discuss the limitations and barriers to adopting this technology that must be overcome to truly understand its potential benefits. VR reading room technology offers great potential in radiology, but further research is needed to appreciate its benefits and identify areas for improvement. The findings and insights presented in this review contribute to the ongoing discourse on future technological advancements in radiology and healthcare, offering valuable recommendations for further research and practical implementation.
2. Indoor Mapping with Entertainment Devices: Evaluating the Impact of Different Mapping Strategies for Microsoft HoloLens 2 and Apple iPhone 14 Pro. Sensors (Basel) 2024; 24:1062. PMID: 38400220. PMCID: PMC10893111. DOI: 10.3390/s24041062.
Abstract
Due to their low cost and portability, using entertainment devices for indoor mapping applications has become a hot research topic. However, the impact of user behavior on indoor mapping evaluation with entertainment devices is often overlooked in previous studies. This article aims to assess the indoor mapping performance of entertainment devices under different mapping strategies. We chose two entertainment devices, the HoloLens 2 and iPhone 14 Pro, for our evaluation work. Based on our previous mapping experience and user habits, we defined four simplified indoor mapping strategies: straight-forward mapping (SFM), left-right alternating mapping (LRAM), round-trip straight-forward mapping (RT-SFM), and round-trip left-right alternating mapping (RT-LRAM). First, we acquired triangle mesh data under each strategy with the HoloLens 2 and iPhone 14 Pro. Then, we compared the changes in data completeness and accuracy between the different devices and indoor mapping applications. Our findings show that compared to the iPhone 14 Pro, the triangle mesh accuracy acquired by the HoloLens 2 has more stable performance under different strategies. Notably, the triangle mesh data acquired by the HoloLens 2 under the RT-LRAM strategy can effectively compensate for missing wall and floor surfaces, mainly caused by furniture occlusion and the low frame rate of the depth-sensing camera. However, the iPhone 14 Pro is more efficient in terms of mapping completeness and can acquire a complete triangle mesh more quickly than the HoloLens 2. In summary, choosing an entertainment device for indoor mapping requires a combination of specific needs and scenes. If accuracy and stability are important, the HoloLens 2 is more suitable; if efficiency and completeness are important, the iPhone 14 Pro is better.
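The completeness comparison described in this abstract can be illustrated with a simple point-coverage metric. This is a hypothetical sketch: the study's actual evaluation pipeline is not specified in the abstract, and the function name, tolerance value, and sample points are illustrative assumptions.

```python
import math

def completeness(reference_pts, acquired_pts, tol=0.05):
    """Fraction of reference points with an acquired point within
    `tol` metres: a simple point-based proxy for mesh completeness."""
    covered = sum(
        1 for r in reference_pts
        if any(math.dist(r, a) <= tol for a in acquired_pts)
    )
    return covered / len(reference_pts)

# Toy example: four reference corners of a 1 m wall section; one corner
# is missing from the scan (e.g. occluded by furniture).
ref = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
acq = [(0.0, 0.01), (1.0, 0.0), (0.02, 1.0)]
print(completeness(ref, acq))  # 0.75
```

In practice such a metric would be computed over dense point samples of the triangle meshes rather than a handful of corners, but the idea (coverage of a reference by the acquired data within a tolerance) is the same.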
3. The application of extended reality technology-assisted intraoperative navigation in orthopedic surgery. Front Surg 2024; 11:1336703. PMID: 38375409. PMCID: PMC10875025. DOI: 10.3389/fsurg.2024.1336703.
Abstract
Extended reality (XR) technology refers to any situation where real-world objects are enhanced with computer technology, including virtual reality, augmented reality, and mixed reality. Augmented reality and mixed reality technologies have been widely applied in orthopedic clinical practice, including in teaching, preoperative planning, intraoperative navigation, and surgical outcome evaluation. The primary goal of this narrative review is to summarize the effectiveness and superiority of XR-technology-assisted intraoperative navigation in the fields of trauma, joint, spine, and bone tumor surgery, as well as to discuss the current shortcomings in intraoperative navigation applications. We reviewed titles of more than 200 studies obtained from PubMed with the following search terms: extended reality, mixed reality, augmented reality, virtual reality, intraoperative navigation, and orthopedic surgery; of those 200 studies, 69 related papers were selected for abstract review. Finally, the full text of 55 studies was analyzed and reviewed. They were classified into four groups (trauma, joint, spine, and bone tumor surgery) according to their content. Most of the studies we reviewed showed that XR-technology-assisted intraoperative navigation can effectively improve the accuracy of implant placement, such as that of screws and prostheses, reduce postoperative complications caused by inaccurate implantation, facilitate the achievement of tumor-free surgical margins, shorten the surgical duration, reduce radiation exposure for patients and surgeons, minimize further damage caused by the need for visual exposure during surgery, and provide richer and more efficient intraoperative communication, thereby facilitating academic exchange, medical assistance, and the implementation of remote healthcare.
4. The Accuracy and Absolute Reliability of a Knee Surgery Assistance System Based on ArUco-Type Sensors. Sensors (Basel) 2023; 23:8091. PMID: 37836921. PMCID: PMC10575457. DOI: 10.3390/s23198091.
Abstract
Recent advances allow the use of Augmented Reality (AR) for many medical procedures. AR via optical navigators to aid various knee surgery techniques (e.g., femoral and tibial osteotomies, ligament reconstructions or menisci transplants) is becoming increasingly frequent. Accuracy in these procedures is essential, but evaluations of this technology still need to be made. Our study aimed to evaluate the system's accuracy using an in vitro protocol. We hypothesised that the system's accuracy was equal to or less than 1 mm and 1° for distance and angular measurements, respectively. Our study was an in vitro laboratory investigation using a 316L steel model. Absolute reliability was assessed according to the Hopkins criteria by seven independent evaluators. Each observer measured the thirty palpation points and the landmarks to acquire direct angular measurements on three occasions separated by at least two weeks. The system's accuracy in assessing distances had a mean error of 1.203 mm with an uncertainty of 2.062 mm; for the angular values, the mean error was 0.778° with an uncertainty of 1.438°. The intraclass correlation coefficients were almost perfect or perfect for all intra-observer and inter-observer comparisons. The mean error for the distance determination was statistically larger than 1 mm (1.203 mm), but with a trivial effect size. The mean error in assessing angular values was statistically less than 1°. Our results are similar to those published by other authors in accuracy analyses of AR systems.
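The kind of summary quoted above (a mean error together with an expanded uncertainty) can be reproduced for any set of repeated measurements with a short sketch. The coverage factor k = 2 and the sample readings below are assumptions for illustration, not the study's data.

```python
import statistics

def accuracy_summary(measured, true_value, k=2.0):
    """Mean absolute error plus an expanded uncertainty (k times the
    sample standard deviation of the errors)."""
    errors = [m - true_value for m in measured]
    mean_abs_error = statistics.fmean(abs(e) for e in errors)
    expanded_uncertainty = k * statistics.stdev(errors)
    return mean_abs_error, expanded_uncertainty

# Hypothetical repeated distance readings (mm) of a 100 mm reference.
readings = [101.2, 100.9, 101.5, 100.7, 101.3]
mean_err, u = accuracy_summary(readings, 100.0)
print(round(mean_err, 2), round(u, 2))  # 1.12 0.64
```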
5. Preoperative planning in reverse shoulder arthroplasty: plain radiographs vs. computed tomography scan vs. navigation vs. augmented reality. Ann Joint 2023; 8:37. PMID: 38529225. PMCID: PMC10929295. DOI: 10.21037/aoj-23-20.
Abstract
Reverse shoulder arthroplasty (RSA) has become a highly successful treatment option for various shoulder conditions, leading to a significant increase in its utilization since its approval in 2003. However, postoperative complications, including scapular notching, prosthetic instability, and component loosening, remain a concern. These complications can often be attributed to technical errors during component implantation, emphasizing the importance of proper preoperative planning and accurate positioning of prosthetic components. Improper baseplate and glenosphere positioning in RSA has been linked to impingement, reduced range of motion, and increased scapular notching. Additionally, the relationship between component positioning and the intrinsic stability of RSA has been established, with glenoid component retroversion exceeding 10° posing a risk to implant stability. Adequate initial glenoid baseplate fixation, achieved through optimal seating and the use of appropriate screws, is crucial for long-term success and prevention of early failure. Factors such as lateralization and distalization also influence outcomes and complications in RSA, yet standardized guidelines for preoperative planning of these parameters are still lacking. Despite the impact of component position on outcomes, glenoid component implantation remains challenging, with position errors being common even among experienced surgeons. Challenges arise due to factors such as deformity, bone defects, limited exposure, and the absence of reliable bony landmarks intraoperatively. With the evolving understanding of RSA biomechanics and the significance of implant configuration and positioning, advancements in preoperative planning and surgical aids have emerged.
This review article explores the current evidence on preoperative planning techniques in RSA, including plain radiographs, three-dimensional imaging, computer planning software, intraoperative navigation, and augmented reality (AR), highlighting their potential benefits and advancements in improving implant position accuracy.
6. A Neurophysiological Evaluation of Cognitive Load during Augmented Reality Interactions in Various Industrial Maintenance and Assembly Tasks. Sensors (Basel) 2023; 23:7698. PMID: 37765755. PMCID: PMC10536580. DOI: 10.3390/s23187698.
Abstract
Augmented reality (AR) has been shown to improve productivity in industry, but its adverse effects (e.g., headaches, eye strain, nausea, and mental workload) on users warrant further investigation. The objective of this study is to investigate the effects of different instruction methods (i.e., HoloLens AR-based and paper-based instructions) and task complexity (low- and high-demand tasks) on cognitive workload and performance. Twenty-eight healthy males with a mean age of 32.12 (SD 2.45) years were recruited in this study and were randomly divided into two groups. The first group performed the experiment using AR-based instruction, and the second group used paper-based instruction. Performance was measured using total task time (TTT). The cognitive workload was measured using the power of electroencephalograph (EEG) features and the NASA task load index (NASA TLX). The results showed that using AR instructions resulted in a reduction in maintenance times and an increase in mental workload compared to paper instructions, particularly for the more demanding tasks. With AR instruction, 0.45% and 14.94% less time was spent on low- and high-demand tasks, respectively, as compared to paper instructions. According to the EEG features, employing AR to guide employees during highly demanding maintenance tasks increased information processing, which could be linked with an increased germane cognitive load. Increased germane cognitive load means participants can better facilitate long-term knowledge and skill acquisition. These results suggest that AR is superior and recommended for highly demanding maintenance tasks, since it speeds up maintenance times and increases the possibility that information is stored in long-term memory and encoded for later recall.
7. Accessibility and inclusiveness of new information and communication technologies for disabled users and content creators in the Metaverse. Disabil Rehabil Assist Technol 2023:1-15. PMID: 37585705. DOI: 10.1080/17483107.2023.2241882.
Abstract
PURPOSE The study critically reassesses existing Metaverse concepts and proposes a novel framework for the inclusiveness of physically disabled artists. The purpose is to enable and inspire physically disabled users and content creators to participate in the evolving concept of the Metaverse. The article also highlights the need for standards and regulations governing the inclusion of people with disabilities in Metaverse projects. MATERIALS AND METHODS The study examines current information technologies and their relevance to the inclusion of physically disabled individuals in the Metaverse. We analyse existing Metaverse concepts, exploring emerging information technologies such as Virtual and Augmented Reality, and the Internet of Things. The emerging framework in the article is based on the active involvement of disabled creatives in the development of solutions for inclusivity. RESULTS The review reveals that despite the proliferation of Blockchain Metaverse projects, the inclusion of physically disabled individuals in the Metaverse remains distant, with limited standards and regulations in place. The article proposes a concept of the Metaverse that leverages emerging technologies to enable greater engagement of disabled creatives. This approach is designed to enhance inclusiveness in the Metaverse landscape. CONCLUSIONS Active involvement of physically disabled individuals in the design and development of Metaverse platforms is crucial for promoting inclusivity. The framework for accessibility and inclusiveness in decentralised Metaverses provides a basis for the meaningful participation of disabled creatives.
The article emphasises the importance of addressing the mechanisms for art production by individuals with disabilities in the emerging Metaverse landscape. IMPLICATIONS FOR REHABILITATION This article addresses a global challenge related to helping disabled people operate in modern society, targeting new and emerging technologies and enabling early understanding of the actions required for the inclusion of people with disabilities in the Metaverse. The increased use of advanced technologies (e.g., AI and IoT) in the Metaverse amplifies the importance of this research. The aggregate impact of this research for science and society is more inclusive and unbiased Metaverses that comply with regulations on anti-disability discrimination, followed by the secondary value of increased technological opportunities from a breakthrough in designing new, more inclusive, and autonomous devices. The study presents a new framework for integrating new technologies into existing Metaverses, resulting in stronger accessibility and inclusiveness of the Metaverse. This creates a new understanding of how new technologies can be used to prevent disability discrimination and to understand disability requirements early. We also highlight normative constraints and the need for further reflection and weighing to avoid dystopian futures for the physically disabled in relation to the Metaverse.
8. Technologic advances in robot-assisted nephron sparing surgery: a narrative review. Transl Androl Urol 2023; 12:1184-1198. PMID: 37554533. PMCID: PMC10406549. DOI: 10.21037/tau-23-107.
Abstract
BACKGROUND AND OBJECTIVE Nephron sparing surgery (NSS) is the preferred management for clinical stage T1 (cT1) renal masses. In recent years, indications have expanded to larger and more complex renal tumors. In an effort to provide optimal patient outcomes, urologists strive to achieve the pentafecta when performing partial nephrectomy. This has led to continuous technologic advancement and technique refinement, including the use of augmented reality, ultrasound techniques, changes in surgical approach and reconstruction, use of novel fluorescence-marker-guided imaging, and implementation of early recovery after surgery (ERAS) protocols. The aim of this narrative review is to provide an overview of the recent advances in pre-, intra-, and post-operative management and approaches to managing patients with renal masses undergoing NSS. METHODS We performed a non-systematic literature search of PubMed and MEDLINE for the most relevant articles pertaining to the outlined topics from 2010 to 2022 without limitation on study design. We included only full-text English articles published in peer-reviewed journals. KEY CONTENT AND FINDINGS Partial nephrectomy is currently prioritized for cT1a renal masses; however, indications have been expanding due to a greater understanding of anatomy and technologic advances. Recent studies have demonstrated that improvements in imaging techniques utilizing cross-sectional imaging with three-dimensional (3D) reconstruction, the use of color Doppler intraoperative ultrasound, and emerging work on contrast-enhanced ultrasound play important roles in certain subsets of patients. While indocyanine green administration is commonly used, novel fluorescence-guided imaging, including folate-receptor-targeting fluorescence molecules, is being investigated to better delineate tumor-parenchyma margins. Augmented reality has a developing role in patient and surgical trainee education.
While pre- and intra-operative imaging have shown promise, near-infrared-guided segmental and sub-segmental vessel clamping has yet to show significant benefit in patient outcomes. Studies regarding reconstructive techniques and the replacement of reconstruction with sealing agents have a promising future. Finally, ERAS protocols have allowed earlier discharge of patients without increasing complications while reducing the cost burden. CONCLUSIONS Advances in NSS have ranged from pre-operative imaging techniques to ERAS protocols. Further prospective investigations are required to determine the impact of novel imaging, in-vivo fluorescence biomarker use, and reconstructive techniques on achieving the pentafecta of NSS.
9. Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions. Sensors (Basel) 2023; 23:6202. PMID: 37448050. DOI: 10.3390/s23136202.
Abstract
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Current researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remain promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
10. Human-Robot Collaborations in Smart Manufacturing Environments: Review and Outlook. Sensors (Basel) 2023; 23:5663. PMID: 37420834. DOI: 10.3390/s23125663.
Abstract
The successful implementation of Human-Robot Collaboration (HRC) has become a prominent feature of smart manufacturing environments. Key industrial requirements, such as flexibility, efficiency, collaboration, consistency, and sustainability, present pressing HRC needs in the manufacturing sector. This paper provides a systemic review and an in-depth discussion of the key technologies currently being employed in smart manufacturing with HRC systems. The work presented here focuses on the design of HRC systems, with particular attention given to the various levels of Human-Robot Interaction (HRI) observed in the industry. The paper also examines the key technologies being implemented in smart manufacturing, including Artificial Intelligence (AI), Collaborative Robots (Cobots), Augmented Reality (AR), and Digital Twin (DT), and discusses their applications in HRC systems. The benefits and practical instances of deploying these technologies are showcased, emphasizing the substantial prospects for growth and improvement in sectors such as automotive and food. However, the paper also addresses the limitations of HRC utilization and implementation and provides some insights into how the design of these systems should be approached in future work and research. Overall, this paper provides new insights into the current state of HRC in smart manufacturing and serves as a useful resource for those interested in the ongoing development of HRC systems in the industry.
11. Integration of Square Fiducial Markers in Patient-Specific Instrumentation and Their Applicability in Knee Surgery. J Pers Med 2023; 13:jpm13050727. PMID: 37240897. DOI: 10.3390/jpm13050727.
Abstract
Computer technologies play a crucial role in orthopaedic surgery and are essential in personalising different treatments. Recent advances allow the usage of augmented reality (AR) for many orthopaedic procedures, which include different types of knee surgery. AR enables interaction between virtual environments and the physical world, allowing both to intermingle (AR superimposes information on real objects in real time) through an optical device, and allows different processes to be personalised for each patient. This article aims to describe the integration of fiducial markers in planning knee surgeries and to perform a narrative description of the latest publications on AR applications in knee surgery. Augmented reality-assisted knee surgery is an emerging set of techniques that can increase accuracy, efficiency, and safety and decrease radiation exposure (in some surgical procedures, such as osteotomies) compared with other conventional methods. Initial clinical experience with AR projection based on ArUco-type artificial marker sensors has shown promising results and received positive operator feedback. Once initial clinical safety and efficacy have been demonstrated, continued experience should be studied to validate this technology and generate further innovation in this rapidly evolving field.
12. The Role of Augmented Reality in the Advancement of Minimally Invasive Surgery Procedures: A Scoping Review. Bioengineering (Basel) 2023; 10:bioengineering10040501. PMID: 37106688. PMCID: PMC10136262. DOI: 10.3390/bioengineering10040501.
Abstract
The purpose of this review was to analyze the evidence on the role of augmented reality (AR) in the improvement of minimally invasive surgical (MIS) procedures. A scoping literature search of the PubMed and ScienceDirect databases was performed to identify articles published in the last five years that addressed the direct impact of AR technology on MIS procedures or that addressed an area of education or clinical care that could potentially be used for MIS development. A total of 359 studies were screened and 31 articles were reviewed in depth and categorized into three main groups: Navigation, education and training, and user-environment interfaces. A comparison of studies within the different application groups showed that AR technology can be useful in various disciplines to advance the development of MIS. Although AR-guided navigation systems do not yet offer a precision advantage, benefits include improved ergonomics and visualization, as well as reduced surgical time and blood loss. Benefits can also be seen in improved education and training conditions and improved user-environment interfaces that can indirectly influence MIS procedures. However, there are still technical challenges that need to be addressed to demonstrate added value to patient care and should be evaluated in clinical trials with sufficient patient numbers or even in systematic reviews or meta-analyses.
13. Intraoperative Augmented Reality in Microsurgery for Intracranial Arteriovenous Malformation: A Case Report and Literature Review. Brain Sci 2023; 13:brainsci13040653. PMID: 37190618. DOI: 10.3390/brainsci13040653.
Abstract
BACKGROUND Intracranial arteriovenous malformations (AVMs) are lesions containing complex vessels that lack a buffering capillary architecture, which can result in hemorrhagic cerebrovascular accidents (CVAs). Intraoperative navigation can improve resection rates and functional preservation in patients with lesions in eloquent areas, but current systems have limitations that can distract the operator. Augmented Reality (AR) surgical technology can reduce these distractions and provide real-time information regarding vascular morphology and location. METHODS In this case report, an adult patient was admitted to the emergency department after a fall, and diagnostic imaging revealed a Spetzler-Martin grade I AVM in the right parietal region with evidence of rupture. The patient underwent stereotactic microsurgical resection with assistance from augmented reality technology, which allowed a hologram of the angioarchitecture to be projected onto the cortical surface, aiding in the recognition of the angiographic anatomy during surgery. RESULTS The patient's postoperative recovery went smoothly. At the 6-month follow-up, the patient remained in stable condition, experiencing complete relief from his previous symptoms. The follow-up examination also revealed complete obliteration of the AVM without any remaining pathological vascular structure. CONCLUSIONS AR-assisted microsurgery makes both the dissection and resection steps safer and more delicate. As several innovations are occurring in AR technology today, it is likely that this novel technique will be increasingly adopted in both surgical applications and education. Although certain limitations exist, this technique may become more efficient and precise as the underlying technology continues to develop.
14. Accuracy of augmented reality-assisted vs template-guided apicoectomy - an ex vivo comparative study. Int J Comput Dent 2023; 26:11-18. PMID: 35072426. DOI: 10.3290/j.ijcd.b2599279.
Abstract
AIM The aim of the present ex vivo study was to examine the accuracy of augmented reality-assisted apicoectomies (AR-A) versus template-guided apicoectomies (TG-A). MATERIALS AND METHODS In total, 40 apicoectomies were performed in 10 cadaver pig mandibles. Every pig mandible underwent two AR-A and two TG-A in molar and premolar teeth. A crossed experimental design was applied. AR-A was performed using the Microsoft HoloLens 2, and TG-A using SMOP software. Postoperative CBCT scans were superimposed with the presurgical planning data. The deviation between the virtually planned apicoectomy and the surgically performed apicoectomy was measured. The primary (angular deviation [degrees]) and secondary (depth deviation [mm]) outcome parameters were measured. RESULTS Overall, 36 out of 40 apicoectomies could be included in the study. Regarding the primary outcome parameter (angular deviation), there was no significant difference between AR-A and TG-A. The mean values were 5.33 degrees (± 2.96 degrees) in the AR-A group and 5.23 degrees (± 2.48 degrees) in the TG-A group. The secondary outcome parameter (depth deviation) also showed no significant difference between the AR-A group (0.27 mm ± 2.32 mm) and the TG-A group (0.90 mm ± 1.84 mm). In this crossed experimental design, both techniques overshot the target depth in posterior sites, as opposed to not reaching the target depth in anterior sites (P < 0.001). CONCLUSION Augmented reality (AR) technology has the potential to be introduced into apicoectomy surgery, provided that further development is implemented.
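The primary outcome above, angular deviation between the planned and performed axes, is simply the angle between two direction vectors. A minimal sketch (the axis values below are hypothetical, not taken from the study):

```python
import math

def angular_deviation_deg(planned_axis, achieved_axis):
    """Angle in degrees between two 3D direction vectors, computed from
    the normalised dot product and clamped against rounding error."""
    dot = sum(p * a for p, a in zip(planned_axis, achieved_axis))
    norm = math.hypot(*planned_axis) * math.hypot(*achieved_axis)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical axes: the achieved axis tilted slightly off the plan.
planned = (0.0, 0.0, 1.0)
achieved = (0.1, 0.0, 1.0)
print(round(angular_deviation_deg(planned, achieved), 2))  # 5.71
```

In a real evaluation the two axes would be extracted from the superimposed planning and postoperative CBCT data; this function only covers the final angle computation.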
Collapse
|
15
|
Meniscus-Guided Micro-Printing of Prussian Blue for Smart Electrochromic Display. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2023; 10:e2205588. [PMID: 36442856 PMCID: PMC9875632 DOI: 10.1002/advs.202205588] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Revised: 11/03/2022] [Indexed: 06/16/2023]
Abstract
Using energy-saving electrochromic (EC) displays in smart devices for augmented reality makes cost-effective, easily producible, and efficiently operable devices for specific applications possible. Prussian blue (PB) is a metal-organic coordination compound with unique EC properties, but the difficulty of micro-patterning PB has limited its use in EC displays. This work presents a novel micro-printing strategy for PB patterns using localized crystallization of FeFe(CN)6 on a substrate confined by the acidic ferric-ferricyanide ink meniscus, followed by thermal reduction at 120 °C, thereby forming PB. Uniform PB patterns can be obtained by manipulating printing parameters, such as the concentration of FeCl3·K3Fe(CN)6, printing speed, and pipette inner diameter. Using a 0.1 M KCl (pH 4) electrolyte, the printed PB pattern is consistently and reversibly converted to Prussian white (CV potential range: −0.2 to 0.5 V) over 200 CV cycles. The PB-based EC display with a navigation function integrated into a smart contact lens is able to display directions to a destination to the user by receiving GPS coordinates in real time. This facile method for forming PB micro-patterns could be used for advanced EC displays and various functional devices.
Collapse
|
16
|
Three-Dimensional Engine-Based Geometric Model Optimization Algorithm for BIM Visualization with Augmented Reality. SENSORS (BASEL, SWITZERLAND) 2022; 22:7622. [PMID: 36236723 PMCID: PMC9572394 DOI: 10.3390/s22197622] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 10/03/2022] [Accepted: 10/03/2022] [Indexed: 06/16/2023]
Abstract
Building information modeling (BIM), a common technology contributing to information processing, is extensively applied in construction fields. BIM integration with augmented reality (AR) is flourishing in the construction industry, as it provides an effective solution for the lifecycle of a project. However, when applying BIM to AR data transfer, large and complicated models require large storage spaces, increase the model transfer time and data processing workload during rendering, and reduce visualization efficiency when using AR devices. The geometric optimization of the model using mesh reconstruction is a potential solution that can reduce the required storage while maintaining the shape of the components. In this study, a 3D engine-based mesh reconstruction algorithm that can pre-process BIM shape data and implement an AR-based full-size model is proposed, which is likely to increase the efficiency of decision making and project processing for construction management. As shown in the experimental validation, the proposed algorithm significantly reduces the number of vertices, triangles, and storage for geometric models while maintaining the overall shape. Moreover, the model elements and components of the optimized model have the same visual quality as the original model; thus, a high performance can be expected for BIM visualization in AR devices.
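The paper's engine-based reconstruction algorithm is not reproduced here, but the core idea it validates (cutting vertex and triangle counts while preserving overall shape) can be illustrated with classic grid-based vertex clustering. The function below is a hypothetical sketch for illustration, not the authors' implementation:

```python
def cluster_decimate(vertices, triangles, cell=1.0):
    """Grid-based vertex clustering: merge all vertices falling in the
    same cubic cell, then drop triangles that collapse to an edge or point.

    vertices: list of (x, y, z) tuples; triangles: list of vertex-index triples.
    """
    cell_of = {}       # cell key -> new vertex index
    remap = []         # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((x, y, z))  # keep first vertex as representative
        remap.append(cell_of[key])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:    # skip degenerate triangles
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

A larger `cell` trades shape fidelity for a smaller model, mirroring the storage/visual-quality trade-off the study evaluates.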
Collapse
|
17
|
Virtual and Augmented Reality-based Treatments for Phantom Limb Pain: A Systematic Review. INNOVATIONS IN CLINICAL NEUROSCIENCE 2022; 19:48-57. [PMID: 36591552 PMCID: PMC9776775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
Objective To evaluate the literature on the effectiveness of virtual reality (VR)- and augmented reality (AR)-based treatments for phantom limb pain (PLP) in postamputation or brachial plexus avulsion (BPA) populations. Methods Multiple databases were queried in July 2021 with the keywords "virtual reality," "augmented reality," and "phantom limb pain." Included studies utilized VR or AR to treat PLP with outcome measurement. Two independent reviewers assessed methodological quality using the Physiotherapy Evidence Database (PEDro) Scale and the Methodological Index for Nonrandomized Studies (MINORS) scoring. Studies were separated into immersive and nonimmersive AR/VR systems, with further categorization according to the specific methodologies used. Results Of 110 results from the database queries, 20 publications met the inclusion criteria. There was one unblinded, randomized controlled trial (RCT), one single-blinded, randomized, crossover trial (RCxT), three comparative case series, 13 noncomparative case series, and two case reports. Seven of the 20 studies were classified as nonimmersive. Six studies reported decreased PLP after AR/VR treatments, of which four reported significant reductions. One study reported a reduction in PLP with no significant difference from control conditions. Thirteen of the 20 studies were classified as immersive AR/VR. Twelve studies reported decreased PLP after AR/VR treatments, of which eight reported significant reductions. One study found no change in PLP compared to baseline. Conclusion The number of studies using AR/VR in PLP treatment has expanded since a 2017 review on the topic. The majority of these studies support the efficacy of AR/VR-based treatments for PLP. Research has expanded on the customization, outcome measurements, and statistical analysis of AR/VR treatments. While results are promising, most publications remain at the case series level, and clinical application should be approached with caution. With improvements in the quality of evidence, there remain avenues for further investigation, including increased sampling, randomization, optimization of treatment duration, and comparisons to alternative therapies.
Collapse
|
18
|
Editing reality in the brain. Neurosci Conscious 2022; 2022:niac009. [PMID: 35903411 PMCID: PMC9319104 DOI: 10.1093/nc/niac009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 05/30/2022] [Accepted: 06/17/2022] [Indexed: 11/21/2022] Open
Abstract
Recent information technologies such as virtual reality (VR) and augmented reality (AR) allow the creation of simulated sensory worlds with which we can interact. Using programming languages, digital details can be overlaid onto displays of our environment, confounding what is real and what has been artificially engineered. Natural language, particularly the use of direct verbal suggestion (DVS) in everyday and hypnotic contexts, can also manipulate the meaning and significance of objects and events in ourselves and others. In this review, we focus on how socially rewarding language can construct and influence reality. Language is symbolic, automatic and flexible and can be used to augment bodily sensations (e.g., feelings of heaviness in a limb) or suggest a colour that is not there. We introduce the term ‘suggested reality’ (SR) to refer to the important role that language, specifically DVS, plays in constructing, maintaining and manipulating our shared reality. We also propose the term edited reality to encompass the wider influence of information technology and linguistic techniques that results in altered subjective experience, and review its use in clinical settings while acknowledging its limitations. We develop a cognitive model indicating how the brain’s central executive structures use our personal and linguistic-based narrative in subjective awareness, arguing for a central role for language in DVS. A better understanding of the characteristics of VR, AR and SR and their applications in everyday life, research and clinical settings can help us to better understand our own reality and how it can be edited.
Collapse
|
19
|
The effect of stimulus number on the recognition accuracy and information transfer rate of SSVEP-BCI in augmented reality. J Neural Eng 2022; 19. [PMID: 35477130 DOI: 10.1088/1741-2552/ac6ae5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2021] [Accepted: 04/26/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE The biggest advantage of steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) lies in its large command set and high information transfer rate (ITR). Almost all current SSVEP-BCIs use a computer screen (CS) to present flickering visual stimuli, which limits its flexible use in actual scenes. Augmented reality (AR) technology provides the ability to superimpose visual stimuli on the real world, and it considerably expands the application scenarios of SSVEP-BCI. However, whether the advantages of SSVEP-BCI can be maintained when moving the visual stimuli to AR glasses is not known. This study investigated the effects of the stimulus number for SSVEP-BCI in an AR context. APPROACH We designed SSVEP flickering stimulation interfaces with four different numbers of stimulus targets and put them in AR glasses and a CS to display. Three common recognition algorithms were used to analyze the influence of the stimulus number and stimulation time on the recognition accuracy and ITR of AR-SSVEP and CS-SSVEP. MAIN RESULTS The amplitude spectrum and signal-to-noise ratio of AR-SSVEP were not significantly different from CS-SSVEP at the fundamental frequency but were significantly lower than CS-SSVEP at the second harmonic. SSVEP recognition accuracy decreased as the stimulus number increased in AR-SSVEP but not in CS-SSVEP. When the stimulus number increased, the maximum ITR of CS-SSVEP also increased, but not for AR-SSVEP. When the stimulus number was 25, the maximum ITR (142.05 bits/min) was reached at 400 ms. The importance of stimulation time in SSVEP was confirmed. When the stimulation time became longer, the recognition accuracy of both AR-SSVEP and CS-SSVEP increased. The peak value was reached at 3 s. The ITR increased first and then slowly decreased after reaching the peak value. SIGNIFICANCE Our study indicates that the conclusions based on CS-SSVEP cannot be simply applied to AR-SSVEP, and it is not advisable to set too many stimulus targets in the AR display device.
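ITR figures like the 142.05 bits/min quoted above are conventionally computed with the standard Wolpaw formula from the number of targets, classification accuracy, and time per selection. A minimal sketch (the function name and example parameters are illustrative, not taken from the paper):

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate for an n-target BCI.

    accuracy is the classification accuracy P; selection_time_s is the
    total time per selection (stimulation plus any gaze-shift interval).
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)   # perfect accuracy: log2(N) bits per selection
    elif p <= 1.0 / n:
        return 0.0            # at or below chance level, no information
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / selection_time_s
```

The formula makes the paper's trade-off concrete: adding targets raises log2(N), but any accompanying drop in accuracy (as observed for AR-SSVEP) can erase the gain.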
Collapse
|
20
|
Intra-operative wearable visualization in spine surgery: past, present, and future. JOURNAL OF SPINE SURGERY (HONG KONG) 2022; 8:132-138. [PMID: 35441103 PMCID: PMC8990397 DOI: 10.21037/jss-21-95] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Accepted: 01/27/2022] [Indexed: 04/15/2023]
Abstract
The history of modern surgery has run parallel to the invention and development of intra-operative visualization techniques. The first operating room, built in 1804 at Pennsylvania Hospital, demonstrates this principle: illumination of the surgical field by the Sun through an overhead skylight allowed surgeries to proceed even prior to the invention of anesthesia or sterile technique. Surgeries were restricted to begin around when the Sun was at its zenith; without adequate light from the Sun and skylight, surgeons were unable to achieve adequate visualization. In the years since, new visualization instruments have expanded the scope and success of surgical intervention. Spine surgery in particular has benefited greatly from improved visualization technologies, due to the closely intertwined nervous, vascular, and musculoskeletal structures that surgeons must manipulate. Over time, new technologies have also advanced to take up smaller footprints, leading to the rise of wearable tools that surgeons don intra-operatively to better visualize the surgical field. As surgical techniques shift to more minimally invasive methods, reliable, high-fidelity, and ergonomic wearables are of growing importance. Here, we discuss the past and present of wearable visualization tools, from the first surgical loupes to cutting-edge augmented reality (AR) goggles, and comment on how emerging innovations will continue to revolutionize spine surgery.
Collapse
|
21
|
P300 Brain-Computer Interface-Based Drone Control in Virtual and Augmented Reality. SENSORS 2021; 21:s21175765. [PMID: 34502655 PMCID: PMC8434009 DOI: 10.3390/s21175765] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Revised: 08/19/2021] [Accepted: 08/24/2021] [Indexed: 01/01/2023]
Abstract
Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) into brain–computer interface (BCI) studies. However, there is a lack of studies incorporating both AR and VR to compare performance across the two environments. Therefore, it is necessary to develop a BCI application that can be used in both VR and AR to allow BCI performance to be compared in the two environments. In this study, we developed an open-source drone control application using a P300-based BCI, which can be used in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in two environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of the P300 component) or user experience (satisfaction with time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations.
Collapse
|
22
|
Developing a virtual reality simulation system for preoperative planning of thoracoscopic thoracic surgery. J Thorac Dis 2021; 13:778-783. [PMID: 33717550 PMCID: PMC7947494 DOI: 10.21037/jtd-20-2197] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Background Video-assisted thoracoscopic surgery (VATS) has become a standard approach for the treatment of lung cancer. However, its minimally invasive nature limits the field of view and reduces tactile feedback. These limitations make it vital that surgeons thoroughly familiarize themselves with the patient’s anatomy preoperatively. We have developed a virtual reality (VR) surgical navigation system using head-mounted displays (HMD). The aim of this study was to investigate the potential utility of this VR simulation system in both preoperative planning and intraoperative assistance, including support during thoracoscopic sublobar resection. Methods Three-dimensional (3D) polygon data derived from preoperative computed tomography data was loaded into BananaVision software developed at Colorado State University and displayed on an HMD. An interactive 3D reconstruction image was created, in which all the pulmonary structures could be individually imaged. Preoperative resection simulations were performed with patient-individualized reconstructed 3D images. Results The 3D anatomic structure of pulmonary vessels and a clear vision into the space between the lesion and adjacent tissues were successfully appreciated during preoperative simulation. Surgeons could easily evaluate the real patient’s anatomy in preoperative simulations to improve the accuracy and safety of actual surgery. The VR software and HMD allowed surgeons to visualize and interact with real patient data in true 3D providing a unique perspective. Conclusions This initial experience suggests that a VR simulation with HMD facilitated preoperative simulation. Routine imaging modalities combined with VR systems could substantially improve preoperative planning and contribute to the safety and accuracy of anatomic resection.
Collapse
|
23
|
A Hybrid Approach to Industrial Augmented Reality Using Deep Learning-Based Facility Segmentation and Depth Prediction. SENSORS 2021; 21:s21010307. [PMID: 33466398 PMCID: PMC7796343 DOI: 10.3390/s21010307] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Revised: 12/27/2020] [Accepted: 12/30/2020] [Indexed: 12/04/2022]
Abstract
Typical AR methods have generic problems such as visual mismatching, incorrect occlusions, and limited augmentation due to the inability to estimate depth from AR images and the need to attach AR markers to physical objects, which prevents the industrial worker from conducting manufacturing tasks effectively. This paper proposes a hybrid approach to industrial AR for complementing existing AR methods using deep learning-based facility segmentation and depth prediction without AR markers or a depth camera. First, the outlines of physical objects are extracted by applying a deep learning-based instance segmentation method to the RGB image acquired from the AR camera. Simultaneously, a depth prediction method is applied to the AR image to estimate the depth map as a 3D point cloud for the detected object. Based on the segmented 3D point cloud data, 3D spatial relationships among the physical objects are calculated, which can assist in solving the visual mismatch and occlusion problems properly. In addition, it can deal with a dynamically operating or moving facility, such as a robot, which conventional AR cannot handle. For these reasons, the proposed approach can be utilized as a hybrid or complementary function to existing AR methods, since it can be activated whenever the industrial worker requires handling of visual mismatches or occlusions. Quantitative and qualitative analyses verify the advantage of the proposed approach compared with existing AR methods. Some case studies also prove that the proposed method can be applied not only to manufacturing but also to other fields. These studies confirm the scalability, effectiveness, and originality of this proposed approach.
Collapse
|
24
|
Current innovation in virtual and augmented reality in spine surgery. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:94. [PMID: 33553387 PMCID: PMC7859743 DOI: 10.21037/atm-20-1132] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
In spinal surgery, outcomes are directly related both to patient and procedure selection, as well as to the accuracy and precision of instrumentation placed. Poorly placed instrumentation can lead to spinal cord, nerve root or vascular injury. Traditionally, spine surgery was performed by open methods with placement of instrumentation under direct visualization. However, minimally invasive surgery (MIS) has seen substantial advances in spine surgery, with an ever-increasing range of indications and procedures. For these reasons, novel methods to visualize anatomy and precisely guide surgery, such as intraoperative navigation, are extremely useful in this field. In this review, we present the recent advances and innovations utilizing simulation methods in spine surgery. The application of these techniques is still relatively new, however they are quickly being integrated in and outside the operating room. These include virtual reality (VR) (where the entire simulation is virtual), mixed reality (MR) (a combination of virtual and physical components), and augmented reality (AR) (the superimposition of a virtual component onto physical reality). VR and MR have primarily found applications in a teaching and preparatory role, while AR is mainly applied in hands-on surgical settings. The present review attempts to provide an overview of the latest advances and applications of these methods in the neurosurgical spine setting.
Collapse
|
25
|
A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. [PMID: 33283563 DOI: 10.1080/17434440.2021.1860750] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Background: Research proves that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) in surgical simulators increases the fidelity, level of immersion and overall experience of these simulators. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR for distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Collapse
|
26
|
The Genetic Code Kit: An Open-Source Cell-Free Platform for Biochemical and Biotechnology Education. Front Bioeng Biotechnol 2020; 8:941. [PMID: 32974303 PMCID: PMC7466673 DOI: 10.3389/fbioe.2020.00941] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Accepted: 07/21/2020] [Indexed: 01/06/2023] Open
Abstract
Teaching the processes of transcription and translation is challenging due to the intangibility of these concepts and a lack of instructional, laboratory-based, active learning modules. Harnessing the genetic code in vitro with cell-free protein synthesis (CFPS) provides an open platform that allows for the direct manipulation of reaction conditions and biological machinery to enable inquiry-based learning. Here, we report our efforts to transform the research-based CFPS biotechnology into a hands-on module called the “Genetic Code Kit” for implementation into teaching laboratories. The Genetic Code Kit includes all reagents necessary for CFPS, as well as a laboratory manual, student worksheet, and augmented reality activity. This module allows students to actively explore transcription and translation while gaining exposure to an emerging research technology. In our testing of this module, undergraduate students who used the Genetic Code Kit in a teaching laboratory showed significant score increases on transcription and translation questions in a post-lab questionnaire compared with students who did not participate in the activity. Students also demonstrated an increase in self-reported confidence in laboratory methods and comfort with CFPS, indicating that this module helps prepare students for careers in laboratory research. Importantly, the Genetic Code Kit can accommodate a variety of learning objectives beyond transcription and translation and enables hypothesis-driven science. This opens the possibility of developing Course-Based Undergraduate Research Experiences (CUREs) based on the Genetic Code Kit, as well as supporting next-generation science standards in 8–12th grade science courses.
Collapse
|
27
|
Augmented Reality Guided Needle Biopsy of Soft Tissue: A Pilot Study. Front Robot AI 2020; 7:72. [PMID: 33501239 PMCID: PMC7806065 DOI: 10.3389/frobt.2020.00072] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Accepted: 04/30/2020] [Indexed: 11/24/2022] Open
Abstract
Percutaneous biopsies are popular for extracting suspicious tissue formations (primarily for cancer diagnosis purposes) due to their relatively low cost, minimal invasiveness, quick procedure times, and low risk for the patient. Despite the advantages provided by percutaneous biopsies, poor needle and tumor visualization is a problem that can result in clinicians classifying a tumor as benign when it was malignant (false negative). The system developed by the authors aims to address the concern of poor needle and tumor visualization through two virtualization setups. This system is designed to track and visualize the needle and tumor in three-dimensional space using an electromagnetic tracking system. User trials were conducted in which 10 participants, who were not medically trained, performed a total of 6 tests each, guiding the biopsy needle to the desired location. The users guided the biopsy needle to the desired point on an artificial spherical tumor (diameters of 30, 20, and 10 mm) using the 3D augmented reality (AR) overlay for three trials and a projection on a second monitor (TV) for the other three trials. From the randomized trials, it was found that the participants were able to guide the needle tip 6.5 ± 3.3 mm away from the desired position with an angle deviation of 1.96 ± 1.10° in the AR trials, compared to values of 4.5 ± 2.3 mm and 2.70 ± 1.67° in the TV trials. The results indicate that for simple stationary surgical procedures, an AR display is non-inferior to a TV display.
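The positional and angular deviations reported above can be computed from a planned and a tracked needle pose; a generic sketch of the two error metrics (not the authors' code, and the function name is illustrative):

```python
import math

def needle_errors(planned_tip, actual_tip, planned_dir, actual_dir):
    """Return (tip distance error, angular deviation in degrees) between
    a planned and a tracked needle trajectory.

    Tips are 3D points; directions are (not necessarily unit) 3D vectors.
    """
    dist = math.dist(planned_tip, actual_tip)
    dot = sum(p * a for p, a in zip(planned_dir, actual_dir))
    norm = math.hypot(*planned_dir) * math.hypot(*actual_dir)
    cos_t = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return dist, math.degrees(math.acos(cos_t))
```

Averaging these two values over repeated insertions gives mean ± SD figures of the same form as the 6.5 ± 3.3 mm and 1.96 ± 1.10° results above.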
Collapse
|
28
|
Digital Undergraduate Education in Dentistry: A Systematic Review. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17093269. [PMID: 32392877 PMCID: PMC7246576 DOI: 10.3390/ijerph17093269] [Citation(s) in RCA: 77] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 04/28/2020] [Accepted: 05/02/2020] [Indexed: 12/04/2022]
Abstract
The aim of this systematic review was to investigate current penetration and educational quality enhancements from digitalization in the dental curriculum. Using a modified PICO strategy, the literature was searched using PubMed supplemented with a manual search to identify English-language articles published between 1994 and 2020 that reported the use of digital techniques in dental education. A total of 211 articles were identified by electronic search, of which 55 articles were selected for inclusion and supplemented with 27 additional publications retrieved by manual search, resulting in 82 studies that were included in the review. Publications were categorized into five areas of digital dental education: Web-based knowledge transfer and e-learning, digital surface mapping, dental simulator motor skills (including intraoral optical scanning), digital radiography, and surveys related to the penetration and acceptance of digital education. This review demonstrates that digitalization offers great potential to revolutionize dental education to help prepare future dentists for their daily practice. More interactive and intuitive e-learning possibilities will arise to stimulate an enjoyable and meaningful educational experience with 24/7 facilities. Augmented and virtual reality technology will likely play a dominant role in the future of dental education.
Collapse
|
29
|
Design and Modeling of Light Emitting Nano-Pixel Structure (LENS) for High Resolution Display (HRD) in a Visible Range. NANOMATERIALS 2020; 10:nano10020214. [PMID: 32012673 PMCID: PMC7074958 DOI: 10.3390/nano10020214] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 01/22/2020] [Accepted: 01/24/2020] [Indexed: 11/16/2022]
Abstract
LENS (Light Emitting Nano-pixel Structure), a new nano-metric device, was designed, simulated, and modeled for feasibility analysis, with the challenge of combining high resolution and high brightness for display, essentially adapted for Augmented Reality (AR) and Virtual Reality. The device is made of two parts: The first is a reflective nano-cone Light Emitting Device (LED) structure to reduce Total Internal Reflection (TIR) effects and to enable improved light extraction efficiency. The second is a Compound Parabolic Concentrator (CPC) above the nano-LED to narrow the outgoing light angular distribution so that most of the light is "accepted" by an imaging system. This approach drastically limits unnecessary light loss. Our simulations show that the total light intensity gain generated by each part of the pixel is at least 3800% when compared to a typical flat LED. This means that, for the same electrical power consumption, battery life is increased by a factor of 38. Furthermore, this improvement significantly decreases the display thermal radiation by at least 300%. Since pixel resolution is critical to offer advanced applications, an extensive feasibility study was performed, using the LightTools software package for ray tracing optimization. In addition to the simulation results, an analytical model was developed. This new device holds the potential to change the efficiency for military, professional and consumer applications, and can serve as a game changer.
Collapse
|
30
|
Using an augmented reality game to teach three junior high school students with intellectual disabilities to improve ATM use. JOURNAL OF APPLIED RESEARCH IN INTELLECTUAL DISABILITIES 2019; 33:409-419. [PMID: 31713985 DOI: 10.1111/jar.12683] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2019] [Revised: 10/07/2019] [Accepted: 10/24/2019] [Indexed: 11/28/2022]
Abstract
BACKGROUND Individuals with intellectual disabilities (ID) may have difficulties in performing daily living tasks. Among these, independent automated teller machine (ATM) use is an essential life skill for people with intellectual disabilities. MATERIALS AND METHODS Three junior high school students in a special education class participated in the experiment. We employed augmented reality (AR) technology to gamify ATM skill training. Specifically, a multiple baseline design was adopted to demonstrate the relation between the game-based intervention and using an ATM independently. RESULTS Data showed that the percentage of correct task steps increased among all three participants. Social validity results showed that the teachers considered the AR game very useful and that it had helped their students learn the ATM skills effectively. CONCLUSIONS The proposed AR game can be used to effectively train students with intellectual disabilities to use an ATM independently.
|
31
|
Unique 4-DOF Relative Pose Estimation with Six Distances for UWB/V-SLAM-Based Devices. SENSORS 2019; 19:s19204366. [PMID: 31601000 PMCID: PMC6832560 DOI: 10.3390/s19204366] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Received: 08/26/2019] [Revised: 10/01/2019] [Accepted: 10/04/2019] [Indexed: 11/17/2022]
Abstract
In this work, we introduce a relative localization method that estimates the coordinate frame transformation between two devices from distance measurements. We present a linear algorithm that calculates the relative pose in 2D or 3D with four degrees of freedom (4-DOF). The algorithm needs a minimum of five or six distance measurements, respectively, to estimate the relative pose uniquely. We use the linear algorithm in conjunction with outlier detection algorithms and as a good initial estimate for iterative least-squares refinement. The proposed method outperforms related linear methods in both the number of distance measurements required and accuracy. Compared with a related linear algorithm in 2D, it reduces the translation error by 10%. In contrast to the more general 6-DOF linear algorithm, our 4-DOF method reduces the minimum number of distances needed from ten to six and the rotation error by a factor of four at the standard deviation of our ultra-wideband (UWB) transponders; when the same number of measurements is used, the orientation and translation errors are reduced by approximately a factor of ten. We validate the method with simulations and an experimental setup in which we integrate UWB technology into simultaneous localization and mapping (SLAM)-based devices. The presented relative pose estimation method is intended for cooperative localization with head-mounted displays in augmented reality applications. We foresee practical use in cooperative SLAM, where map merging can then be performed proactively.
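The refinement stage described above (a coarse estimate polished by iterative least squares over the measured inter-tag distances) can be sketched as a small nonlinear least-squares problem. This is an illustrative reconstruction, not the paper's linear algorithm: the tag layouts, the coarse initial guess standing in for the linear solver's output, and the use of `scipy.optimize.least_squares` are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(yaw):
    """Rotation about the gravity-aligned z-axis (the single rotational DOF)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def residuals(params, tags_a, tags_b, measured):
    """Distance residuals for a 4-DOF transform params = (tx, ty, tz, yaw).

    tags_a: UWB tag positions on device A, in A's frame, shape (n, 3)
    tags_b: UWB tag positions on device B, in B's frame, shape (m, 3)
    measured: dict {(i, j): distance between tag i on A and tag j on B}
    """
    t, yaw = params[:3], params[3]
    tags_b_in_a = tags_b @ rot_z(yaw).T + t  # express B's tags in A's frame
    return np.array([
        np.linalg.norm(tags_a[i] - tags_b_in_a[j]) - d
        for (i, j), d in measured.items()
    ])

# Simulated setup: three tags on device A, two on device B -> six distances,
# the minimum the abstract states for a unique 4-DOF estimate.
tags_a = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 0.2, 0.1]])
tags_b = np.array([[0.0, 0.0, 0.0], [0.25, 0.05, 0.0]])
true_t, true_yaw = np.array([1.0, 0.5, 0.2]), 0.3
tags_b_world = tags_b @ rot_z(true_yaw).T + true_t
measured = {(i, j): np.linalg.norm(tags_a[i] - tags_b_world[j])
            for i in range(3) for j in range(2)}

# Coarse initial guess, standing in for the output of the linear algorithm.
x0 = np.array([0.8, 0.4, 0.0, 0.0])
sol = least_squares(residuals, x0, args=(tags_a, tags_b, measured))
print(np.round(sol.x, 3))  # close to [1.0, 0.5, 0.2, 0.3]
```

With noiseless distances the refinement recovers the simulated transform; real UWB ranges would add noise and make the outlier-rejection step the paper describes necessary.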
|
32
|
The utility of virtual reality and augmented reality in spine surgery. ANNALS OF TRANSLATIONAL MEDICINE 2019; 7:S171. [PMID: 31624737 DOI: 10.21037/atm.2019.06.38] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Indexed: 12/11/2022]
Abstract
As surgical techniques continue to advance, it becomes increasingly important to assess and research technology for spine surgery in order to increase surgical accuracy, decrease overall operative time, and minimize radiation exposure. Augmented reality (AR) and virtual reality (VR) have shown promising results with respect to applicability beyond their current functions. At present, VR has generally been applied in a teaching and preparatory role, while AR has been utilized in surgical settings. The following review provides an overview of both virtual reality and augmented reality, followed by a discussion of their current applications and future directions.
|
33
|
Augmented reality and three-dimensional printing in percutaneous interventions on pulmonary arteries. Quant Imaging Med Surg 2019; 9:23-29. [PMID: 30788243 DOI: 10.21037/qims.2018.09.08] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Indexed: 12/26/2022]
Abstract
Background Percutaneous pulmonary interventions require extensive and accurate navigation planning and guidance, especially regarding the three-dimensional (3D) relationships between anatomical structures. In this study, we demonstrate the feasibility of two novel visualization techniques, 3D printing (3DP) and augmented reality (AR), in planning transcatheter pulmonary interventions. Methods Two patients were qualified for balloon pulmonary angioplasty (BPA) to treat chronic thromboembolic pulmonary hypertension (CTEPH) and for stent implantation for pulmonary artery stenosis, respectively. Computed tomography images of both patients were processed with segmentation algorithms and subsequently imported into 3D modelling software. Microsoft HoloLens® AR headsets with dedicated CarnaLife Holo® software were used to display surface and volume renderings of the pulmonary vessels as holograms. Results Personalized, life-sized models of the same structures were additionally 3D-printed for preoperative planning. Holograms were shown to the physicians throughout each procedure and used as a guidance and navigation tool. The operative team was able to manipulate the hologram, and multiple users of the AR system could share the same image in real time. Clinicians expressed satisfaction with the quality of the imaging and its potential clinical benefits. Conclusions This study indicates the potential value of AR in pulmonary interventions; however, prospective trials are needed to determine whether novel 3D visualization techniques affect perioperative treatment and outcomes.
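The segmentation step in the Methods can be sketched, under loose assumptions, as intensity thresholding followed by keeping the largest connected component; the HU threshold, the toy volume, and the use of `scipy.ndimage` are illustrative stand-ins, not the authors' actual pipeline.

```python
import numpy as np
from scipy import ndimage

def segment_vessels(ct_hu, threshold_hu=200):
    """Crude vessel segmentation: threshold contrast-enhanced voxels,
    then keep only the largest connected component (the vascular tree).

    ct_hu: 3D CT volume in Hounsfield units, shape (z, y, x)
    Returns a boolean mask of the same shape.
    """
    bright = ct_hu > threshold_hu                 # contrast-filled voxels
    labels, n = ndimage.label(bright)             # 3D connected components
    if n == 0:
        return np.zeros_like(bright)
    sizes = ndimage.sum(bright, labels, range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1           # label ids start at 1
    return labels == largest

# Toy volume: a bright "vessel" line plus a small disconnected speckle.
vol = np.full((20, 20, 20), -1000.0)              # air background
vol[5:15, 10, 10] = 400.0                         # contrast-enhanced vessel
vol[2, 2, 2] = 400.0                              # noise speckle
mask = segment_vessels(vol)
print(mask.sum())                                 # 10 (speckle discarded)
```

The resulting binary mask is what a 3D modelling package would then turn into a printable or holographic surface mesh.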
|
34
|
An independent shopping experience for wheelchair users through augmented reality and RFID. Assist Technol 2017. [PMID: 28644758 DOI: 10.1080/10400435.2017.1329240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/19/2022] Open
Abstract
People with physical and mobility impairments continue to struggle to attain independence in routine activities and tasks. For example, browsing in a store and interacting with products located beyond arm's length may be impossible without the intervention of a human assistant. This article describes a study undertaken to design, develop, and evaluate potential interaction methods for motor-impaired individuals, specifically wheelchair users. Our study follows a user-centered approach and categorizes wheelchair users according to the severity of their disability and their individual needs. We designed and developed access solutions that combine radio frequency identification (RFID), augmented reality (AR), and touchscreen technologies to help wheelchair users carry out certain tasks autonomously. In this way, they are empowered to go shopping independently, free from reliance on the assistance of others. A total of 18 wheelchair users participated in the study.
|