1. Pavanatto L, Lu F, North C, Bowman DA. Multiple Monitors or Single Canvas? Evaluating Window Management and Layout Strategies on Virtual Displays. IEEE Transactions on Visualization and Computer Graphics 2025;31:1713-1730. PMID: 38386585. DOI: 10.1109/tvcg.2024.3368930.
Abstract
Virtual displays enabled through head-worn augmented reality have unique characteristics that can yield extensive amounts of screen space. Existing research has shown that increasing the space on a computer screen can enhance usability. Since virtual displays offer the unique ability to present content without rigid physical space constraints, they provide various new design possibilities. Therefore, we must understand the trade-offs of layout choices when structuring that space. We propose a single Canvas approach that eliminates boundaries from traditional multi-monitor approaches and instead places windows in one large, unified space. Our user study compared this approach against a multi-monitor setup, and we considered both purely virtual systems and hybrid systems that included a physical monitor. We looked into usability factors such as performance, accuracy, and overall window management. Results show that Canvas displays can cause users to compact window layouts more than multiple monitors with snapping behavior, even though such optimizations may not lead to longer window management times. We did not find conclusive evidence of either setup providing a better user experience. Multi-Monitor displays offer quick window management with snapping and a structured layout through subdivisions. However, Canvas displays allow for more control in placement and size, lowering the amount of space used and, thus, head rotation. Multi-Monitor benefits were more prominent in the hybrid configuration, while the Canvas display was more beneficial in the purely virtual configuration.
2. Srinivasan A, Ellemose J, Butcher PWS, Ritsos PD, Elmqvist N. Attention-Aware Visualization: Tracking and Responding to User Perception Over Time. IEEE Transactions on Visualization and Computer Graphics 2025;31:1017-1027. PMID: 39250380. DOI: 10.1109/tvcg.2024.3456300.
Abstract
We propose the notion of attention-aware visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. Such context awareness is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D data-agnostic method for web-based visualizations that can use an embodied eyetracker to capture the user's gaze, and a 3D data-aware one that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a qualitative evaluation studying visual feedback and triggering mechanisms for capturing and revisualizing attention.
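A minimal sketch of the accumulate-and-decay idea behind components (2) and (3) above, assuming a simple exponential model; the function names, gain/decay constants, and the opacity mapping are illustrative, not the authors' implementation:

```python
import numpy as np

def update_attention(attention, gazed_mark, dt, gain=0.5, decay=0.05):
    """Accumulate attention for the currently gazed mark and let
    attention on all other marks decay slowly over time.

    attention  : array of per-mark attention values in [0, 1]
    gazed_mark : index of the mark under the gaze point, or None
    dt         : time since the last update, in seconds
    """
    attention = attention * np.exp(-decay * dt)  # gradual forgetting
    if gazed_mark is not None:
        attention[gazed_mark] = min(1.0, attention[gazed_mark] + gain * dt)
    return attention

def style_for(att):
    """Map accumulated attention to a de-emphasis style: marks the
    user has already seen are rendered with lower opacity."""
    return {"opacity": 1.0 - 0.7 * att}
```

Reactive behaviors such as "highlight data the user has not yet seen" then reduce to styling marks whose accumulated attention stays near zero.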
3. Manakhov P, Sidenmark L, Pfeuffer K, Gellersen H. Filtering on the Go: Effect of Filters on Gaze Pointing Accuracy During Physical Locomotion in Extended Reality. IEEE Transactions on Visualization and Computer Graphics 2024;30:7234-7244. PMID: 39255110. DOI: 10.1109/tvcg.2024.3456153.
Abstract
Eye tracking filters have been shown to improve accuracy of gaze estimation and input for stationary settings. However, their effectiveness during physical movement remains underexplored. In this work, we compare common online filters in the context of physical locomotion in extended reality and propose alterations to improve them for on-the-go settings. We conducted a computational experiment where we simulate performance of the online filters using data on participants attending visual targets located in world-, path-, and two head-based reference frames while standing, walking, and jogging. Our results provide insights into the filters' effectiveness and factors that affect it, such as the amount of noise caused by locomotion and differences in compensatory eye movements, and demonstrate that filters with saccade detection prove most useful for on-the-go settings. We discuss the implications of our findings and conclude with guidance on gaze data filtering for interaction in extended reality.
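For intuition, a minimal on-the-go filter in the spirit the abstract describes: smooth during fixations, but pass saccades through unfiltered. The class name, thresholds, and the plain exponential smoother are assumptions, not the specific filters evaluated in the paper:

```python
import numpy as np

class SaccadeAwareFilter:
    """Exponential smoothing of gaze samples that resets on saccades,
    stabilizing fixations without lagging behind rapid gaze shifts."""

    def __init__(self, alpha=0.1, saccade_deg_per_s=100.0):
        self.alpha = alpha                  # smoothing factor during fixations
        self.threshold = saccade_deg_per_s  # velocity threshold for saccades
        self.state = None

    def __call__(self, sample, dt):
        sample = np.asarray(sample, dtype=float)  # gaze direction in degrees
        if self.state is None:
            self.state = sample
            return self.state
        velocity = np.linalg.norm(sample - self.state) / max(dt, 1e-6)
        if velocity > self.threshold:
            self.state = sample  # saccade detected: follow the eye directly
        else:
            self.state = self.state + self.alpha * (sample - self.state)
        return self.state
```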
4. Wang M, Li YJ, Shi J, Steinicke F. SceneFusion: Room-Scale Environmental Fusion for Efficient Traveling Between Separate Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2024;30:4615-4630. PMID: 37126613. DOI: 10.1109/tvcg.2023.3271709.
Abstract
Traveling between scenes has become a major requirement for navigation in numerous virtual reality (VR) social platforms and game applications, allowing users to efficiently explore multiple virtual environments (VEs). To facilitate scene transition, prevalent techniques such as instant teleportation and virtual portals have been extensively adopted. However, these techniques exhibit limitations when there is a need for frequent travel between separate VEs, particularly within indoor environments, resulting in low efficiency. In this article, we first analyze the design rationale for a novel navigation method supporting efficient travel between virtual indoor scenes. Based on the analysis, we introduce the SceneFusion technique that fuses separate virtual rooms into an integrated environment. SceneFusion enables users to perceive rich visual information from both rooms simultaneously, achieving high visual continuity and spatial awareness. While existing teleportation techniques passively transport users, SceneFusion allows users to actively access the fused environment using short-range locomotion techniques. User experiments confirmed that SceneFusion outperforms instant teleportation and virtual portal techniques in terms of efficiency, workload, and preference for both single-user exploration and multi-user collaboration tasks in separate VEs. Thus, SceneFusion presents an effective solution for seamless traveling between virtual indoor scenes.
5. Gupta K, Zhang Y, Gunasekaran TS, Krishna N, Pai YS, Billinghurst M. CAEVR: Biosignals-Driven Context-Aware Empathy in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024;30:2671-2681. PMID: 38437090. DOI: 10.1109/tvcg.2024.3372130.
Abstract
There is little research on how Virtual Reality (VR) applications can identify and respond meaningfully to users' emotional changes. In this paper, we investigate the impact of Context-Aware Empathic VR (CAEVR) on the emotional and cognitive aspects of user experience in VR. We developed a real-time emotion prediction model using electroencephalography (EEG), electrodermal activity (EDA), and heart rate variability (HRV) and used this in personalized and generalized models for emotion recognition. We then explored the application of this model in a context-aware empathic (CAE) virtual agent and an emotion-adaptive (EA) VR environment. We found a significant increase in positive emotions, cognitive load, and empathy toward the CAE agent, suggesting the potential of CAEVR environments to refine user-agent interactions. We identify lessons learned from this study and directions for future work.
6. Chen Z, Chiappalupi D, Lin T, Yang Y, Beyer J, Pfister H. RL-LABEL: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios. IEEE Transactions on Visualization and Computer Graphics 2023;PP:1347-1357. PMID: 37871050. DOI: 10.1109/tvcg.2023.3326568.
Abstract
Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-LABEL, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-LABEL considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-LABEL in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-LABEL excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
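The trade-off RL-LABEL optimizes can be pictured as a per-step reward that penalizes occlusion, leader-line crossings, and label movement, with long-term behavior emerging from the usual discounted return. A hedged sketch; the weights and exact cost terms are illustrative, not the paper's reward function:

```python
def label_reward(occluded_area, n_line_crossings, move_dist,
                 w_occ=1.0, w_cross=0.5, w_move=0.1):
    """Negative layout cost at one time step: the agent is rewarded for
    avoiding occlusions and leader-line intersections while keeping
    labels stable (small movement between frames)."""
    return -(w_occ * occluded_area
             + w_cross * n_line_crossings
             + w_move * move_dist)
```

Because the agent maximizes the discounted sum of such rewards over future states, it can accept a slightly worse layout now to avoid large label jumps later, which is exactly where per-frame optimizers struggle.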
7. Pillette L, Moreau G, Normand JM, Perrier M, Lecuyer A, Cogne M. A Systematic Review of Navigation Assistance Systems for People With Dementia. IEEE Transactions on Visualization and Computer Graphics 2023;29:2146-2165. PMID: 35007194. DOI: 10.1109/tvcg.2022.3141383.
Abstract
Technological developments provide solutions to alleviate the tremendous impact that dementia-related navigation impairments have on health and autonomy. We systematically reviewed the literature on devices tested to assist people with dementia during indoor, outdoor and virtual navigation (PROSPERO ID: 215585). The Medline and Scopus databases were searched from inception. Our aim was to summarize the results from the literature to guide future developments. Twenty-three articles were included in our study. Three types of information were extracted from these studies. First, the types of navigation advice the devices provided were assessed through: (i) the sensory modality of presentation, e.g., visual and tactile stimuli; (ii) the navigation content, e.g., landmarks; and (iii) the timing of presentation, e.g., systematically at intersections. Second, we analyzed the technology the devices were based on, e.g., smartphones. Third, the experimental methodology used to assess the devices and the navigation outcomes was evaluated. We report and discuss the results from the literature based on these three main characteristics. Finally, based on these considerations, recommendations are drawn, challenges are identified and potential solutions are suggested. Augmented reality-based devices, intelligent tutoring systems and social support should be explored further.
8. Simulating Wearable Urban Augmented Reality Experiences in VR: Lessons Learnt from Designing Two Future Urban Interfaces. Multimodal Technologies and Interaction 2023. DOI: 10.3390/mti7020021.
Abstract
Augmented reality (AR) has the potential to fundamentally change how people engage with increasingly interactive urban environments. However, many challenges exist in designing and evaluating these new urban AR experiences, such as technical constraints and safety concerns associated with outdoor AR. We contribute to this domain by assessing the use of virtual reality (VR) for simulating wearable urban AR experiences, allowing participants to interact with future AR interfaces in a realistic, safe and controlled setting. This paper describes two wearable urban AR applications (pedestrian navigation and autonomous mobility) simulated in VR. Based on a thematic analysis of interview data collected across the two studies, we find that the VR simulation successfully elicited feedback on the functional benefits of AR concepts and the potential impact of urban contextual factors, such as safety concerns, attentional capacity, and social considerations. At the same time, we highlight the limitations of this approach in terms of assessing the AR interface’s visual quality and providing exhaustive contextual information. The paper concludes with recommendations for simulating wearable urban AR experiences in VR.
9. Marques B, Silva S, Alves J, Araujo T, Dias P, Santos BS. A Conceptual Model and Taxonomy for Collaborative Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2022;28:5113-5133. PMID: 34347599. DOI: 10.1109/tvcg.2021.3101545.
Abstract
To support the nuances of collaborative work, many researchers have been exploring the field of Augmented Reality (AR), aiming to assist co-located or remote scenarios. Solutions using AR take advantage of the seamless integration of virtual and real-world objects, thus providing collaborators with a shared understanding or common-ground environment. However, most research efforts so far have been devoted to experimenting with the technology and to maturing methods that support its design and development. It is therefore time to understand where the field stands and how well it can address collaborative work with AR, in order to better characterize and evaluate the collaboration process. In this article, we analyze the different dimensions that should be taken into account when analysing the contributions of AR to the collaborative work effort. We then bring these dimensions forward into a conceptual framework and propose an extended human-centered taxonomy for categorizing the main features of Collaborative AR. Our goal is to foster harmonization of perspectives for the field, which may help create a common ground for systematization and discussion. We hope to influence and improve how research in this field is reported by providing a structured list of defining characteristics. Finally, some examples of the use of the taxonomy are presented to show how it can serve to gather information for characterizing AR-supported collaborative work, and to illustrate its potential as the grounds for further studies.
10. Maio R, Marques B, Alves J, Santos BS, Dias P, Lau N. An Augmented Reality Serious Game for Learning Intelligent Wheelchair Control: Comparing Configuration and Tracking Methods. Sensors 2022;22:7788. PMID: 36298139. PMCID: PMC9610184. DOI: 10.3390/s22207788.
Abstract
This work proposes an augmented reality serious game (ARSG) for supporting individuals with motor disabilities while controlling robotic wheelchairs. A racing track was used as the game narrative; this included restriction areas, static and dynamic virtual objects, as well as obstacles and signs. To experience the game, a prior configuration of the environment, made through a smartphone or a computer, was required. Furthermore, a visualization tool was developed to exhibit user performance while using the ARSG. Two user studies were conducted with 10 and 20 participants, respectively, to compare (1) how different devices enable configuring the ARSG, and (2) different tracking capabilities, i.e., methods used to place virtual content on the real-world environment while the user interacts with the game and controls the wheelchair in the physical space: C1-motion tracking using cloud anchors; C2-offline motion tracking. Results suggest that configuring the environment with the computer is more efficient and accurate, in contrast to the smartphone, which is characterized as more engaging. In addition, condition C1 stood out as more accurate and robust, while condition C2 appeared to be easier to use.
Affiliation(s)
- Rafael Maio: IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Bernardo Marques: IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal; DigiMedia, DeCA, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- João Alves: IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Beatriz Sousa Santos: IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Paulo Dias: IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
- Nuno Lau: IEETA, DETI, Campus Universitário de Santiago, University of Aveiro, 3810-193 Aveiro, Portugal
11. Raeburn G, Welton M, Tokarchuk L. Developing a play-anywhere handheld AR storytelling app using remote data collection. Frontiers in Computer Science 2022. DOI: 10.3389/fcomp.2022.927177.
Abstract
Immersive story experiences like immersive theater productions and escape rooms have grown in popularity in recent years, offering the audience a more active role in the events portrayed. However, many of these activities were forced to close at the start of the COVID-19 pandemic due to restrictions placed on group activities and travel. This created an opportunity for a story experience that users could take part in around their local neighborhoods. Five mobile applications (apps) were developed toward this goal, aiming to make effective use of available local map data, alongside virtual content overlaid on users' surroundings through Augmented Reality (AR), to offer additional story features not present in the real environment. The first two apps investigated the feasibility of such an approach, including remote field testing in which participants used their own devices across a variety of locations. Two follow-up apps aimed to further improve the user experience and adopted a more standardized testing procedure, to better ensure each app was completed in the intended manner by those participating remotely. Participants rated their experience through immersion and engagement questionnaire factors, which were also tested for their appropriateness for rating such experiences, in addition to providing open feedback. A final app applied the same AR story implementation to a curated, site-specific study once pandemic restrictions had eased. This combination of remote studies and a subsequent curated study reverses the methodology of much previous research in this field, but it offered advantages in corroborating the results of the remote studies and yielded new insights for further improving an AR story app designed to be used at an outdoor location of the user's choosing. Such an app benefits those who may prefer to take part in such an activity solo or close to home; it also lets storytellers develop an outdoor story for use at a variety of locations, making it available to a larger audience without the challenges and costs of migrating it to different locations.
12. Xia G, Xue P, Sun H, Sun Y, Zhang D, Liu Q. Local Self-Expression Subspace Learning Network for Motion Capture Data. IEEE Transactions on Image Processing 2022;31:4869-4883. PMID: 35839181. DOI: 10.1109/tip.2022.3189822.
Abstract
Deep subspace learning is an important branch of self-supervised learning and has been a hot research topic in recent years, but current methods do not fully consider the individual characteristics of temporal data and their related tasks. In this paper, by turning the individual characteristics of motion capture data and of the segmentation task into supervision, we propose the local self-expression subspace learning network. Specifically, considering the temporality of motion data, we use a temporal convolution module to extract temporal features. To enforce the local validity of self-expression in temporal tasks, we design a local self-expression layer that only maintains representation relations with temporally adjacent motion frames. To model the interpolatability of motion data in the feature space, we impose a group-sparsity constraint on the local self-expression layer so that representations are built from selected keyframes only. In addition, based on the subspace assumption, we propose a subspace projection loss, induced from the distances of each frame projected onto the fitted subspaces, to penalize potential clustering errors. The superior performance of the proposed model on the segmentation task for synthetic data and on three tasks for real motion capture data demonstrates the feature-learning ability of our model.
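A plausible formalization of the locally constrained self-expression objective the abstract describes, assuming F denotes the learned frame features, C the self-expression coefficients, w the temporal window width, and λ the sparsity weight; the paper's exact formulation may differ:

```latex
\min_{C}\;
  \underbrace{\lVert F - FC \rVert_F^2}_{\text{local self-expression}}
  \;+\;
  \lambda \underbrace{\sum_{i} \lVert C_{i,:} \rVert_2}_{\text{group sparsity (keyframe selection)}}
\quad \text{s.t.}\quad
  C_{ij} = 0 \;\;\text{for}\;\; |i-j| > w,
  \qquad \operatorname{diag}(C) = 0
```

The band constraint restricts each frame to being expressed by its temporal neighbors, and the row-wise group sparsity drives whole rows of C to zero so that only a few keyframes participate in the reconstruction.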
13. Asokan DR, Huq FA, Smith CM, Stevenson M. Socially responsible operations in the Industry 4.0 era: post-COVID-19 technology adoption and perspectives on future research. International Journal of Operations & Production Management 2022. DOI: 10.1108/ijopm-01-2022-0069.
Abstract
Purpose: As focal firms in supply networks reflect on their experiences of the pandemic and begin to rethink their operations and supply chains, there is a significant opportunity to leverage digital technological advances to enhance socially responsible operations performance (SROP). This paper develops a novel framework for exploring the adoption of Industry 4.0 technologies for improving SROP. It highlights current best-practice examples and presents future research pathways.
Design/methodology/approach: This viewpoint paper argues how Industry 4.0 technology adoption can enable effective SROP in the post-COVID-19 era. Academic articles, relevant grey literature, and insights from industry experts are used to support the development of the framework.
Findings: Seven technologies are identified that bring transformational capabilities to SROP, i.e. big data analytics, digital twins, augmented reality, blockchain, 3D printing, artificial intelligence, and the Internet of Things. It is demonstrated how these technologies can help to improve three sub-themes of organisational social performance (employment practices, health and safety, and business practices) and three sub-themes of community social performance (quality of life and social welfare, social governance, and economic welfare and growth).
Research limitations/implications: A research agenda is outlined at the intersection of Industry 4.0 and SROP through the six sub-themes of organisational and community social performance. Further, these are connected through three overarching research agendas: “Trust through Technology”, “Responsible Relationships” and “Freedom through Flexibility”.
Practical implications: Organisational agendas for Industry 4.0 and social responsibility can be complementary. The framework provides insights into how Industry 4.0 technologies can help firms achieve long-term post-COVID-19 recovery, with an emphasis on SROP. This can offer firms competitive advantage in the “new normal” by helping them build back better.
Social implications: People and communities should be at the heart of decisions about rethinking operations and supply chains. This paper expresses a view on what it entails for organisations to be responsible for the supply chain-wide social wellbeing of employees and the wider community they operate in, and how they can use technology to embed social responsibility in their operations and supply chains.
Originality/value: Contributes to the limited understanding of how Industry 4.0 technologies can lead to socially responsible transformations. A novel framework integrating SROP and Industry 4.0 is presented.
14. Lorenzo G, Gilabert Cerdá A, Lorenzo-Lledó A, Lledó A. The application of augmented reality in the learning of autistic students: a systematic and thematic review in 1996–2020. Journal of Enabling Technologies 2022. DOI: 10.1108/jet-12-2021-0068.
Abstract
Purpose: More and more diversity is present in our classrooms. As teachers, we must be able to respond to the different levels of learning presented by our students; it is therefore necessary to use new emerging technologies as elements of response. Thus, the purpose of this paper is to develop a systematic and thematic review of the application of augmented reality (AR) in the learning of autistic students in the educational setting during the period 1996–2020, using the Web of Science and Scopus databases.
Design/methodology/approach: For this purpose, a bibliometric technique called systematic and thematic review has been used. This technique is supported by the Preferred Reporting Items for Systematic Reviews (PRISMA) methodology and uses a quantitative and qualitative approach. The thematic analysis was carried out on 28 documents based on a series of indicators, including sample size, hardware devices, the way information is stored, and the findings obtained in the research.
Findings: The results indicate that the average sample size is three participants and that the most studied area has been social skills, using tablets. In addition, markers are often used as the information-storage element in AR.
Originality/value: The main contribution of this work is the establishment of a series of thematic variables that will serve for the later development of an action protocol for the creation of AR activities for autistic students.
15. Tran TTM, Parker C, Wang Y, Tomitsch M. Designing Wearable Augmented Reality Concepts to Support Scalability in Autonomous Vehicle-Pedestrian Interaction. Frontiers in Computer Science 2022. DOI: 10.3389/fcomp.2022.866516.
Abstract
Wearable augmented reality (AR) offers new ways for supporting the interaction between autonomous vehicles (AVs) and pedestrians due to its ability to integrate timely and contextually relevant data into the user's field of view. This article presents novel wearable AR concepts that assist crossing pedestrians in multi-vehicle scenarios where several AVs frequent the road from both directions. Three concepts with different communication approaches for signaling responses from multiple AVs to a crossing request, as well as a conventional pedestrian push button, were simulated and tested within a virtual reality environment. The results showed that wearable AR is a promising way to reduce crossing pedestrians' cognitive load when the design offers both individual AV responses and a clear signal to cross. The willingness of pedestrians to adopt a wearable AR solution, however, is subject to different factors, including costs, data privacy, technical defects, liability risks, maintenance duties, and form factors. We further found that all participants favored sending a crossing request to AVs rather than waiting for the vehicles to detect their intentions—pointing to an important gap and opportunity in the current AV-pedestrian interaction literature.
16. Comparing Desktop vs. Mobile Interaction for the Creation of Pervasive Augmented Reality Experiences. Journal of Imaging 2022;8(3):79. PMID: 35324634. PMCID: PMC8949857. DOI: 10.3390/jimaging8030079.
Abstract
This paper presents an evaluation and comparison of interaction methods for the configuration and visualization of pervasive Augmented Reality (AR) experiences using two different platforms: desktop and mobile. AR experiences consist of the enhancement of real-world environments by superimposing additional layers of information, real-time interaction, and accurate 3D registration of virtual and real objects. Pervasive AR extends this concept through experiences that are continuous in space, being aware of and responsive to the user’s context and pose. Currently, the time and technical expertise required to create such applications are the main reasons preventing their widespread use. As such, authoring tools that facilitate the development and configuration of pervasive AR experiences have become progressively more relevant. Their operation often involves navigating the real-world scene and using the AR equipment itself to add the augmented information within the environment. The proposed experimental tool makes use of 3D scans of physical environments to provide a reconstructed digital replica of such spaces for a desktop-based method, and to enable positional tracking for a mobile-based one. While the desktop platform represents a non-immersive setting, the mobile one provides continuous AR in the physical environment. Both versions can be used to place virtual content and ultimately configure an AR experience. The authoring capabilities of the two platforms were compared in a user study focused on evaluating their usability. Although the AR interface was generally considered more intuitive, the desktop platform shows promise in several aspects, such as remote configuration, lower required effort, and overall better scalability.
17. Roopa D, Bose S. A Rapid Dual Feature Tracking Method for Medical Equipments Assembly and Disassembly in Markerless Augmented Reality. Journal of Medical Imaging and Health Informatics 2022. DOI: 10.1166/jmihi.2022.3944.
Abstract
Markerless Augmented Reality (MAR) is a technology currently used by medical device assemblers to aid design, assembly, disassembly and maintenance operations. The medical assembler assembles the equipment based on the doctor's requirements and also maintains the quality and sanitation of the equipment. The major research challenges in MAR are establishing automatic registration of parts, finding and tracking the orientation of parts, and the lack of depth and visual features. This work proposes a rapid dual feature tracking method, i.e., a combination of Visual Simultaneous Localization and Mapping (SLAM) and Matched Pairs Selection (MAPSEL). The main idea of this work is to attain high tracking accuracy using the combined method. To obtain a good depth image map, a Graph-Based Joint Bilateral with Sharpening Filter (GRB-JBF with SF) is proposed, since depth images are noisy due to dynamic changes in environmental factors that affect tracking accuracy. The best feature points for matching are then obtained using Oriented FAST and Rotated BRIEF (ORB) as the feature detector, Fast Retina Keypoint with Histogram of Gradients (FREAK-HoG) as the feature descriptor, and feature matching using Rajsk's distance. Finally, the virtual object is rendered based on 3D affine and projection transformations. This work evaluates performance in terms of tracking accuracy, tracking time, and rotation error for different distances using MATLAB R2017b. From the observed results, the proposed method attained the lowest position error, about 0.1 cm to 0.3 cm. The rotation error remained between 2.4° and 3.1°, with an average of 2.714°. Further, the proposed combination consumes less time per frame than other combinations and achieved a higher tracking accuracy of about 95.14% for 180 tracked points. The observed outcomes show superior performance compared with existing methods.
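As a point of reference, the stock detect-describe-match pipeline that such feature-tracking methods build on can be sketched with OpenCV's ORB; the file names are placeholders and this is a generic baseline, not the authors' FREAK-HoG/Rajsk's-distance implementation:

```python
import cv2

# Detect and describe features in two consecutive frames of the tracked scene.
orb = cv2.ORB_create(nfeatures=1000)
img1 = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
img2 = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary);
# cross-checking keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
good = matches[:50]  # keep the strongest correspondences for pose estimation
```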
Affiliation(s)
- D. Roopa: Department of Computer Science and Engineering, Sri Sai Ram Institute of Technology, Anna University, Chennai, 600044, Tamil Nadu, India
- S. Bose: Department of Computer Science and Engineering, Anna University, Chennai, 600025, Tamil Nadu, India
18. Arowoiya VA, Oke AE, Akanni PO, Kwofie TE, Enih PI. Augmented reality for construction revolution – analysis of critical success factors. International Journal of Construction Management 2021. DOI: 10.1080/15623599.2021.2017542.
Affiliation(s)
- Ayodeji Emmanuel Oke: Department of Quantity Surveying, Federal University of Technology Akure, Akure, Nigeria
- Titus Ebenezer Kwofie: Department of Architecture, Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana
19. Chen L, Chen P, Zhao S, Luo Z, Chen W, Pei Y, Zhao H, Jiang J, Xu M, Yan Y, Yin E. Adaptive asynchronous control system of robotic arm based on augmented reality-assisted brain-computer interface. Journal of Neural Engineering 2021;18. PMID: 34654000. DOI: 10.1088/1741-2552/ac3044.
Abstract
Objective. Brain-controlled robotic arms have shown broad application prospects with developments in robotics and information decoding. However, disadvantages such as poor flexibility restrict their wide application. Approach. To alleviate these drawbacks, this study proposed an asynchronous robotic arm control system based on steady-state visual evoked potentials (SSVEP) in an augmented reality (AR) environment. In the AR environment, participants were able to see the robotic arm and the visual stimulation interface concurrently through the AR device, so there was no need to switch attention frequently between the two. This study proposed a multi-template algorithm based on canonical correlation analysis and task-related component analysis to identify 12 targets, and an optimization strategy based on a dynamic window was adopted to adaptively adjust the duration of visual stimulation. Main results. The experimental results of this study found that the high-frequency SSVEP-based brain-computer interface (BCI) realized the switching of the system state, which controlled the robotic arm asynchronously. The average accuracy of the offline experiment was 94.97%, whereas the average information transfer rate was 67.37 ± 14.27 bits·min-1. The online results from ten healthy subjects showed that the average selection time for a single online command was 2.04 s, which effectively reduced the visual fatigue of the subjects. Each subject could quickly complete the puzzle task. Significance. The experimental results demonstrated the feasibility and potential of this human-computer interaction strategy and provide new ideas for BCI-controlled robots.
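The standard CCA-based SSVEP target identification that multi-template algorithms build on can be sketched as follows; the function names, harmonic count, and segment shapes are assumptions, and the paper's multi-template extension with task-related component analysis is not shown:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_references(freq, fs, n_samples, n_harmonics=2):
    """Sine-cosine reference signals for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def identify_target(eeg, freqs, fs):
    """Pick the stimulation frequency whose reference signals correlate
    most strongly with a multi-channel EEG segment (samples x channels)."""
    scores = []
    for f in freqs:
        Y = ssvep_references(f, fs, eeg.shape[0])
        u, v = CCA(n_components=1).fit_transform(eeg, Y)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores)), scores
```

With a dynamic window, the same score computation is simply repeated on a growing segment until one frequency's correlation is confidently above the rest, which is what shortens the average selection time.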
Affiliation(s)
- Lingling Chen: School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China; Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China
- Pengfei Chen: School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China; Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Shaokai Zhao: Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Zhiguo Luo: Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Wei Chen: National Research Center for Rehabilitation Technical Aids, Beijing 100176, People's Republic of China
- Yu Pei: Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Hongyu Zhao: Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China; East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Jing Jiang: National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing 100094, People's Republic of China
- Minpeng Xu: Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China; Tianjin University, Tianjin 300072, People's Republic of China
- Ye Yan: Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Erwei Yin: Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
20. Xia G, Xue P, Zhang D, Liu Q. Likelihood-constrained coupled space learning for motion synthesis. Information Sciences 2021. DOI: 10.1016/j.ins.2021.08.001.
21. CultReal - A Rapid Development Platform for AR Cultural Spaces, with Fused Localization. Sensors 2021;21(19):6618. PMID: 34640937. PMCID: PMC8513013. DOI: 10.3390/s21196618.
Abstract
Virtual and augmented reality technologies have seen impressive market growth due to their potential to provide immersive experiences. However, they still face significant difficulties in enabling fully fledged, consumer-ready applications that can handle complex tasks such as multi-user collaboration or time-persistent experiences. In this context, CultReal is a rapid creation and deployment platform for augmented reality (AR), aiming to revitalize cultural spaces. The platform’s content management system stores a representation of the environment, together with a database of multimedia objects that can be associated with a location. The localization component fuses data from beacons and from video cameras, providing an accurate estimate of the position and orientation of the visitor’s smartphone. A mobile application running the localization component displays the augmented content, seamlessly integrated with the real world. The paper focuses on the series of steps required to compute the position and orientation of the user’s mobile device, providing a comprehensive evaluation with both virtual and real data. Pilot implementations of the system are also described, revealing the platform’s potential for rapid deployment in new cultural spaces. Offering these functionalities, CultReal allows for the fast development of AR solutions in any location.
22. Smith M, Gabbard JL, Burnett G, Hare C, Singh H, Skrypchuk L. Determining the impact of augmented reality graphic spatial location and motion on driver behaviors. Applied Ergonomics 2021;96:103510. PMID: 34161853. DOI: 10.1016/j.apergo.2021.103510.
Abstract
While researchers have explored the benefits of adding augmented reality graphics to vehicle displays, the impact of graphic characteristics has not been well researched. In this paper, we consider the impact of augmented reality graphic spatial location and motion, as well as turn direction, traffic presence, and gender, on participants' driving and glance behavior and preferences. Twenty-two participants navigated through a simulated environment while using four different graphics. We employed a novel glance allocation analysis to differentiate, with more granularity, the information likely gathered with each glance. Fixed graphics generally resulted in less visual attention and more time scanning for hazards than animated graphics. Finally, the screen-fixed graphic was preferred by participants over all world-relative graphics, suggesting that spatially integrating graphics into the world may not always be necessary in visually complex urban environments like those considered in this study.
23. von Terzi P, Tretter S, Uhde A, Hassenzahl M, Diefenbach S. Technology-Mediated Experiences and Social Context: Relevant Needs in Private Vs. Public Interaction and the Importance of Others for Positive Affect. Frontiers in Psychology 2021;12:718315. PMID: 34539519. PMCID: PMC8440849. DOI: 10.3389/fpsyg.2021.718315.
Abstract
Technologies such as smartphones or wearables take a central role in our daily lives. Making their use meaningful and enjoyable requires a better understanding of the prerequisites and underpinnings of positive experiences with such technologies. So far, the focus has been on the users themselves, that is, their individual goals, desires, feelings, and acceptance. However, technology is often used in a social context, observed by others or even used in interaction with others, and thus shapes social dynamics considerably. In the present paper, we start from the notion that meaningful and/or enjoyable experiences (i.e., wellbeing) are a major outcome of technology use. We investigate how these experiences are further shaped by social context, such as potential spectators. More specifically, we gathered private (while being alone) and public (while other people are present) positive experiences with technology and compared need fulfillment and affective experience. In addition, we asked participants to imagine a change in context (from private to public or public to private) and to report the impact of this change on their experience. Results support the idea of particular social needs, such as relatedness and popularity, which are especially relevant and better fulfilled in public than in private contexts. Moreover, our findings show that participants experience less positive affect when imaginatively removing the present others from a formerly public interaction, i.e., when they imagine performing the same interaction without the other people present. Overall, this underlines the importance of social context for Human-Computer Interaction practice and research. Practical implications relate to product development, e.g., designing interactive technologies that can adapt to context (changes) or allow for context-sensitive interaction sets. We discuss limitations related to the experimental exploration of social context, such as the method of data collection, as well as potential alternatives to address those limitations, such as diary studies.
Affiliation(s)
- Pia von Terzi: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Stefan Tretter: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Alarith Uhde: Ubiquitous Design Experience and Interaction, Universität Siegen, Siegen, Germany
- Marc Hassenzahl: Ubiquitous Design Experience and Interaction, Universität Siegen, Siegen, Germany
- Sarah Diefenbach: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
24. Zollmann S, Langlotz T, Grasset R, Lo WH, Mori S, Regenbrecht H. Visualization Techniques in Augmented Reality: A Taxonomy, Methods and Patterns. IEEE Transactions on Visualization and Computer Graphics 2021;27:3808-3825. PMID: 32275601. DOI: 10.1109/tvcg.2020.2986247.
Abstract
In recent years, the development of Augmented Reality (AR) frameworks has made AR application development widely accessible to developers without an AR expert background. With this development, new application fields for AR are on the rise, and with them an increased need for visualization techniques that suit a wide range of application areas. It is becoming more important for a wider audience to gain a better understanding of existing AR visualization techniques. In this article, we provide a taxonomy of existing work on visualization techniques in AR. The taxonomy aims to give researchers and developers without an in-depth AR background the information needed to successfully apply visualization techniques in Augmented Reality environments. We also describe the required components and methods and analyze common patterns.
25. Zhang H, Sun Q, Liu Z. Augmented reality display of neurosurgery craniotomy lesions based on feature contour matching. Cognitive Computation and Systems 2021. DOI: 10.1049/ccs2.12021.
Affiliation(s)
- Hao Zhang: Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China; National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
- Qi-Yuan Sun: Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China; National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
- Zhen-Zhong Liu: Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China; National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
26. Integration of BIM and Immersive Technologies for AEC: A Scientometric-SWOT Analysis and Critical Content Review. Buildings 2021. DOI: 10.3390/buildings11030126.
Abstract
With the onset of Industrial Revolution 4.0 (IR 4.0), every sector is striving to benefit from it, whether research- or industry-oriented. The Architecture, Engineering and Construction (AEC) industry lags somewhat in adopting it because of its multi-faceted dependencies and the unique nature of its work. Despite this, a recent trend toward honing IR 4.0 capabilities in the AEC industry can be observed, with an upsurge in the usage of Immersive Technologies (ImTs) as one of the disruptive techniques. This paper studies the literature on ImTs, namely Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), integrating with Building Information Modelling (BIM) in the AEC sector. A total of 444 articles were selected from Scopus following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. Among the selected database, 64 papers were identified by following the protocol, and the articles are divided into eight domains relevant to the AEC industry: client/stakeholder, design exploration, design analysis, construction planning, construction monitoring, construction health/safety, facility management, and education/training. This study adopts both a scientometric analysis for bibliometric visualization and a critical review using Strength-Weakness-Opportunity-Threat (SWOT) analysis to identify gaps and the state of play. The novelty of this paper lies in the analysis techniques used to provide insight into the literature, and it provides directions for the future with an emphasis on the Sustainable Development Goals (SDGs). In addition, research directions for future growth in the adoption of ImTs are identified and presented based on a categorization into immersive devices, graphical/non-graphical data, and responsive/integrative processes, with five subcategories listed for each direction, citing limitations and future needs. This study presents a roadmap for the successful adoption of ImTs for industry practitioners and stakeholders in the AEC industry across various domains. The paper shows that there are studies on ImTs with or without BIM; future studies should, however, focus on the usage of ImTs in sectors such as modular integrated construction (MiC) and on emerging needs such as the SDGs.
27. Itoh Y, Langlotz T, Zollmann S, Iwai D, Kiyokawa K, Amano T. Computational Phase-Modulated Eyeglasses. IEEE Transactions on Visualization and Computer Graphics 2021;27:1916-1928. PMID: 31613772. DOI: 10.1109/tvcg.2019.2947038.
Abstract
We present computational phase-modulated eyeglasses, a see-through optical system that modulates the user's view using phase-only spatial light modulators (PSLMs). A PSLM is a programmable reflective device that can selectively retard, or delay, incoming light rays; as a result, it works as a computational dynamic lens. We demonstrate our computational phase-modulated eyeglasses with either a single PSLM or dual PSLMs and show that the concept can realize various optical operations including focus correction, bi-focus, image shift, and field-of-view manipulation, namely optical zoom. Compared to other programmable optics, computational phase-modulated eyeglasses have the advantage of versatility. In addition, we present prototypical focus-loop applications where the lens is dynamically optimized based on the distances of objects observed by a scene camera. We further discuss the implementation and applications, as well as the limitations of the current prototypes and remaining issues that need to be addressed in future research.
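The "computational dynamic lens" behavior rests on programming the PSLM with the wrapped quadratic phase profile of a thin lens. A sketch under the paraxial approximation; the resolution, pixel pitch, and wavelength below are illustrative values, not the prototype's specifications:

```python
import numpy as np

def lens_phase(width_px, height_px, pixel_pitch, focal_length, wavelength):
    """Wrapped phase pattern that makes a phase-only SLM act as a thin
    lens of the given focal length (paraxial approximation)."""
    y, x = np.indices((height_px, width_px)).astype(float)
    x = (x - width_px / 2) * pixel_pitch   # metric coordinates on the panel
    y = (y - height_px / 2) * pixel_pitch
    phase = -np.pi * (x**2 + y**2) / (wavelength * focal_length)
    return np.mod(phase, 2 * np.pi)  # wrap into the SLM's 0..2*pi range

# e.g. an 8 um pitch panel programmed as a +0.5 m lens at 532 nm:
pattern = lens_phase(1920, 1080, 8e-6, 0.5, 532e-9)
```

Because the pattern is just an image uploaded to the device, focus correction, bi-focus, or zoom amount to recomputing and redisplaying this phase map per frame, which is what makes a focus loop driven by a scene camera feasible.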
28. Aromaa S, Väätänen A, Aaltonen I, Goriachev V, Helin K, Karjalainen J. Awareness of the real-world environment when using augmented reality head-mounted display. Applied Ergonomics 2020;88:103145. PMID: 32421637. DOI: 10.1016/j.apergo.2020.103145.
Abstract
Augmented reality (AR) systems are becoming common tools in industrial workplaces. However, factory workers are still concerned about whether head-mounted display (HMD)-based AR systems distract their awareness of the environment and therefore pose safety risks. The purpose of this study was to assess users' experience of real-world awareness when using an AR system. Nineteen study participants played a wooden block logic game in a laboratory under three different setups: real, AR and virtual reality (VR). Based on this study, it can be concluded that HMD-based AR systems do not decrease users' awareness of their surroundings if the virtual content is minimal and the task is done while seated. However, more research in this area with more interactive virtual content is required. This study is an important step in understanding how AR may affect future work in industrial and safety-critical environments.
Affiliation(s)
- Susanna Aromaa: VTT Technical Research Centre of Finland Ltd, P.O. Box 1300, Visiokatu 4, 33101, Tampere, Finland
- Antti Väätänen: VTT Technical Research Centre of Finland Ltd, P.O. Box 1300, Visiokatu 4, 33101, Tampere, Finland
- Iina Aaltonen: VTT Technical Research Centre of Finland Ltd, P.O. Box 1300, Visiokatu 4, 33101, Tampere, Finland
- Vladimir Goriachev: VTT Technical Research Centre of Finland Ltd, P.O. Box 1300, Visiokatu 4, 33101, Tampere, Finland
- Kaj Helin: VTT Technical Research Centre of Finland Ltd, P.O. Box 1300, Visiokatu 4, 33101, Tampere, Finland
- Jaakko Karjalainen: VTT Technical Research Centre of Finland Ltd, P.O. Box 1300, Visiokatu 4, 33101, Tampere, Finland
29. A systematic design method of adaptive augmented reality work instruction for complex industrial operations. Computers in Industry 2020. DOI: 10.1016/j.compind.2020.103229.
30. Towards Next Generation Technical Documentation in Augmented Reality Using a Context-Aware Information Manager. Applied Sciences 2020. DOI: 10.3390/app10030780.
Abstract
Technical documentation is evolving from static content presented on paper or via digital publishing to real-time, on-demand content displayed via virtual and augmented reality (AR) devices. However, how best to provide personalized and context-relevant presentation of technical information is still an open field of research. In particular, the systems described in the literature can manage only a limited number of modalities to convey technical information, and they do not consider the ‘people’ factor. In this work, we therefore present a Context-Aware Technical Information Management (CATIM) system that dynamically manages (1) what information is presented and (2) how it is presented in an augmented reality interface. The system was successfully implemented, and we performed a first evaluation in the real industrial scenario of the maintenance of a hydraulic valve. We also measured the time performance of the system, and results revealed that CATIM performs fast enough to support interactive AR.
31. BIM-based and AR Application Combined with Location-Based Management System for the Improvement of the Construction Performance. Buildings 2019. DOI: 10.3390/buildings9050118.
Abstract
The information and communication technologies (ICT) utilization ratio in the construction industry is relatively low. This industry is characterized by low productivity and by time and cost overruns in projects due to inefficient management processes, poor communication and low process automation. To improve construction performance, a BIM-based (Building Information Modelling) and augmented reality (AR) application (referred to as AR4C: Augmented Reality for Construction) is proposed, which integrates a location-based management system (LBMS). The application provides context-specific information on construction projects and tasks, as well as key performance indicators on the progress and performance of construction tasks. The construction projects are superimposed onto the real world while the site manager walks through the construction site. This paper describes the most important methods and technologies needed to develop the AR4C application. In particular, the data exchange between BIM software and the Unity environment is discussed, as well as the integration of LBMS into BIM software and the AR4C application. Finally, the implemented and planned functionalities are discussed. The AR4C application prototype was tested in a laboratory environment and produced positive feedback. Since the application addresses construction sites, validation in semi-real scenarios with end users is recommended.
32. Aguilar J, Jerez M, Rodríguez T. CAMeOnto: Context awareness meta ontology modeling. Applied Computing and Informatics 2018. DOI: 10.1016/j.aci.2017.08.001.
33. Langlotz T, Cook M, Regenbrecht H. Real-Time Radiometric Compensation for Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2016;22:2385-2394. PMID: 27479973. DOI: 10.1109/tvcg.2016.2593781.
Abstract
Optical see-through head-mounted displays are currently seeing a transition out of research labs towards the consumer-oriented market. However, whilst availability has improved and prices have decreased, the technology has not matured much. Most commercially available optical see-through head-mounted displays follow a similar principle and use an optical combiner blending the physical environment with digital information. This approach yields problems, as the colors of the overlaid digital information cannot be correctly reproduced: the perceived pixel colors are always a result of the displayed pixel color and the color of the current physical environment seen through the head-mounted display. In this paper we present an initial approach for mitigating the effect of color blending in optical see-through head-mounted displays by introducing real-time radiometric compensation. Our approach is based on a novel prototype for an optical see-through head-mounted display that allows capturing the current environment as seen by the user's eye. We present three different algorithms using this prototype to compensate color blending in real time and with pixel accuracy. We demonstrate the benefits and performance as well as the results of a user study. We see applications in all common Augmented Reality scenarios, but also in other areas such as Diminished Reality or support for color-blind people.
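The color-blending problem and its compensation can be pictured per pixel: if the eye perceives the displayed color plus the environment light transmitted through the combiner, compensation subtracts the registered environment image from the target color and clips what the display cannot remove (it can only add light). A sketch of that additive model; the transmissivity constant is an assumed parameter, and this ignores the nonlinear display response that practical algorithms must handle:

```python
import numpy as np

def compensate(target, background, transparency=0.7):
    """Per-pixel radiometric compensation for an optical see-through HMD.

    target, background : float images in [0, 1], registered to the display
    transparency       : fraction of environment light passing the combiner
    """
    compensated = target - transparency * background
    # Negative values mark pixels where the target is darker than the
    # environment; no displayed color can fix those, so they are clipped.
    return np.clip(compensated, 0.0, 1.0)
```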