1
Tong W, Shigyo K, Yuan LP, Fan M, Pong TC, Qu H, Xia M. VisTellAR: Embedding Data Visualization to Short-Form Videos Using Mobile Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1862-1874. PMID: 38427541. DOI: 10.1109/tvcg.2024.3372104.
Abstract
With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging, since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. Through a formative study (N=10) using three design probes, we collected motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool that helps amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with in-situ data visualizations and confirmed the tool's ease of use and learning. AR pre-stage authoring helped people set up data visualizations in the real scene, and participants produced richer designs involving camera movements and interactions with gestures and physical objects in their storytelling.
2
Zhu Q, Lu T, Guo S, Ma X, Yang Y. CompositingVis: Exploring Interactions for Creating Composite Visualizations in Immersive Environments. IEEE Transactions on Visualization and Computer Graphics 2025; 31:591-601. PMID: 39250414. DOI: 10.1109/tvcg.2024.3456210.
Abstract
Composite visualization is a widely embraced design that combines multiple visual representations into an integrated view. However, composite visualizations for immersive environments are traditionally created asynchronously, outside the immersive space, by experienced experts. In this work, we aim to empower users to create composite visualizations within immersive environments through embodied interactions. This can provide a flexible and fluid experience with immersive visualization and has the potential to facilitate understanding of the relationships between visualization views. We begin by developing a design space of embodied interactions for creating various types of composite visualizations, with consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions based on combinations of 3D manipulations in immersive environments. Building upon the design space, we present a series of case studies showcasing these interactions for creating different kinds of composite visualizations in virtual reality. Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and the user experience of creating composite visualizations through embodied interactions. We find that empowering users to compose visualizations through embodied interactions enables them to flexibly leverage different visualization views for understanding and communicating the relationships between those views, which underscores the potential of several future application scenarios.
3
In S, Lin T, North C, Pfister H, Yang Y. This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5635-5650. PMID: 37506003. DOI: 10.1109/tvcg.2023.3299602.
Abstract
Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user-interface-based tools. Given the rapid development of interaction techniques and computing environments, we report empirical findings on how interaction techniques and environments affect data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation, comparing gesture interaction with a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found that time performance was similar between desktop and VR. Meanwhile, VR shows preliminary evidence of better supporting provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.
4
Butcher PWS, Batch A, Saffo D, MacIntyre B, Elmqvist N, Ritsos PD, Rhyne TM. Is Native Naïve? Comparing Native Game Engines and WebXR as Immersive Analytics Development Platforms. IEEE Computer Graphics and Applications 2024; 44:91-98. PMID: 38905026. DOI: 10.1109/mcg.2024.3367422.
Abstract
Native game engines have long been the 3-D development platform of choice for research in mixed and augmented reality. For this reason, they have also been adopted in many immersive visualization and immersive analytics systems and toolkits. However, with the rapid improvements of WebXR and related open technologies, this choice may not always be optimal for future visualization research. In this article, we investigate common assumptions about native game engines versus WebXR and find that while native engines still have an advantage in many areas, WebXR is rapidly catching up and is superior for many immersive analytics applications.
5
Lehman SM, Elezovikj S, Ling H, Tan CC. ARCHIE++: A Cloud-Enabled Framework for Conducting AR System Testing in the Wild. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2102-2116. PMID: 34990364. DOI: 10.1109/tvcg.2022.3141029.
Abstract
In this paper, we present ARCHIE++, a testing framework for conducting AR system testing and collecting user feedback in the wild. Our system addresses challenges in AR testing practices by aggregating usability feedback data (collected in situ) with system performance data from that same time period. These data packets can then be leveraged to identify edge cases encountered by testers during unconstrained usage scenarios. We begin by presenting a set of current trends in performing human testing of AR systems, identified by reviewing a selection of recent work from leading conferences in mixed reality, human factors, and mobile and pervasive systems. From the trends, we identify a set of challenges to be faced when attempting to adopt these practices to testing in the wild. These challenges are used to inform the design of our framework, which provides a cloud-enabled and device-agnostic way for AR systems developers to improve their knowledge of environmental conditions and to support scalability and reproducibility when testing in the wild. We then present a series of case studies demonstrating how ARCHIE++ can be used to support a range of AR testing scenarios, and demonstrate the limited overhead of the framework through a series of evaluations. We close with additional discussion on the design and utility of ARCHIE++ under various edge conditions.
6
Li W, Xue Z, Li J, Wang H. The interior environment design for entrepreneurship education under the virtual reality and artificial intelligence-based learning environment. Front Psychol 2022; 13:944060. PMID: 36438308. PMCID: PMC9683108. DOI: 10.3389/fpsyg.2022.944060.
Abstract
Nowadays, with the rapid growth of artificial intelligence (AI), entrepreneurship education has attracted increasing attention from society. The traditional teaching mode therefore needs to be gradually transformed into one that is more innovative, practical, and inclusive, and better suited to entrepreneurship education; the focus of this change is the optimization of the teaching environment. To this end, a method based on distributed virtual reality (DVR) technology is designed, in which multiple users join through a computer network and participate in a shared virtual space at the same time. On this basis, a distributed 3D interior design approach is proposed, whose novelty lies mainly in the application of VR technology, in contrast to traditional software design. According to the functions and needs of the entrepreneurship teaching environment, distributed feature information is first collected; a corresponding color image model is then constructed by a fusion method, and edge contour detection and feature data extraction are carried out on the distributed images. A Red, Green, and Blue (RGB) color decomposition method performs pixel-level feature decomposition of the spatially distributed image colors, and feature reorganization of the 3D point cloud is combined with it to optimize the color space and color features of the design. On this basis, the distributed 3D interior design system is built with VR and visual simulation technology. Finally, Three-Dimensional Studio Max (3ds Max) is used for 3D modeling, and the modeling software Multigen Creator is adopted for the hierarchical structural design.
The test results show that the Normalized Root Mean Square Error (RMSE) and information saturation of the distributed 3D interior design are reduced by 0.2 compared with the traditional design, the time overhead is shortened to one-sixth of the original, and the effect better meets the design requirements. It is hoped that this design method can provide new ideas and new perspectives for the optimization of the entrepreneurship teaching environment.
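The RGB colour-decomposition step described in this abstract can be illustrated with a minimal sketch. This is not the paper's implementation; `decompose_rgb` and the toy image are hypothetical, showing only the general idea of splitting an image into per-channel planes before further feature processing.

```python
# Illustrative sketch of RGB colour decomposition (hypothetical helper;
# the paper's actual pipeline and data are not reproduced here).

def decompose_rgb(image):
    """Split an H x W image of (R, G, B) tuples into three channel planes."""
    r = [[px[0] for px in row] for row in image]
    g = [[px[1] for px in row] for row in image]
    b = [[px[2] for px in row] for row in image]
    return r, g, b

# Tiny 2x2 synthetic image: one red pixel, one blue pixel, two greys.
img = [
    [(255, 0, 0), (0, 0, 255)],
    [(128, 128, 128), (64, 64, 64)],
]

r, g, b = decompose_rgb(img)
print(r[0][0], b[0][1])  # 255 255
```

Each returned plane can then be processed independently (e.g., edge contour detection per channel) before the features are recombined.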
Affiliation(s)
- Wangting Li: Academy of Arts, Shandong University of Science and Technology, Qingdao, China
- Zhijing Xue: College of Art and Design, Zhengzhou University of Industry Technology, Zhengzhou, China
- Jiayi Li: College of Business, Gachon University, Seongnam, South Korea
- Hongkai Wang: Academy of Arts and Design, Tsinghua University, Beijing, China; College of Journalism and Communications, Shih Hsin University, Taipei, China

7
Active Learning Activities in a Collaborative Teacher Setting in Colours, Design and Visualisation. Computers 2022. DOI: 10.3390/computers11050068.
Abstract
We present our experience with developing active learning activities in a collaborative teacher setting, along with guidelines for teachers to create them. We focus on developing learner skills in colours, design, and visualisation. Typically, teachers create content before considering learning tasks. In contrast, we develop them concurrently. In addition, teaching in a collaborative setting (where many teachers deliver or produce content) brings its own set of challenges. We developed and used a set of processes to help guide teachers to deliver appropriate learning activities within a theme that appear similarly structured and can be categorised and searched in a consistent way. Our presentation and experience of using these guidelines can act as a blueprint for others to follow and apply. We describe many of the learning activities we created and discuss how we delivered them in a bilingual (English, Welsh) setting. Delivering the learning activities within a theme (in our case, colours) means that it is possible to integrate a range of learning outcomes. Lessons can focus on, for instance, skill development in mathematics, physics, computer graphics, art, design, computer programming, and critical thought. Furthermore, colour is a topic that can motivate: it sparks curiosity and creativity, and people can learn to create their own colourful pictures, while learning and developing computing skills.
8
One View Is Not Enough: Review of and Encouragement for Multiple and Alternative Representations in 3D and Immersive Visualisation. Computers 2022. DOI: 10.3390/computers11020020.
Abstract
The opportunities for 3D visualisations are huge. People can be immersed inside their data, interface with it in natural ways, and see it in ways that are not possible on a traditional desktop screen. Indeed, 3D visualisations, especially those viewed inside head-mounted displays, are becoming popular. Much of this growth is driven by the availability, popularity and falling cost of head-mounted displays and other immersive technologies. However, there are also challenges. For example, data visualisation objects can be obscured, important facets missed (perhaps behind the viewer), and the interfaces may be unfamiliar. Some of these challenges are not unique to 3D immersive technologies. Indeed, developers of traditional 2D exploratory visualisation tools use alternative views across a multiple coordinated view (MCV) system. Coordinated view interfaces help users explore the richness of the data. For instance, an alphabetical list of people in one view shows everyone in the database, while a map view depicts where they live. Each view serves a different task or purpose. While it is possible to translate some desktop interface techniques into the 3D immersive world, it is not always clear what the equivalences would be. In this paper, using several case studies, we discuss the challenges and opportunities for using multiple views in immersive visualisation. Our aim is to provide a set of concepts that will enable developers to think critically and creatively, and to push the boundaries of what is possible with 3D and immersive visualisation. In summary, developers should consider how to integrate many views, techniques and presentation styles: one view is not enough when using 3D and immersive visualisations.
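The coordinated-view linking this abstract describes (a list view and a map view showing the same selection differently) can be sketched with a simple shared-selection observer. This is an illustrative sketch only; `SelectionModel` and the sample data are hypothetical and not from the paper.

```python
# Minimal sketch of multiple-coordinated-view (MCV) linking: two views
# subscribe to one shared selection and each renders it differently.

class SelectionModel:
    """Shared selection state that coordinated views observe."""
    def __init__(self):
        self._observers = []
        self.selected = set()

    def subscribe(self, callback):
        self._observers.append(callback)

    def select(self, ids):
        self.selected = set(ids)
        for cb in self._observers:
            cb(self.selected)

people = {1: ("Ann", "Cardiff"), 2: ("Bob", "Bangor"), 3: ("Cai", "Swansea")}

model = SelectionModel()
list_view, map_view = [], []

def render_list(sel):
    # The list view shows names of the selected people.
    list_view[:] = sorted(people[i][0] for i in sel)

def render_map(sel):
    # The map view shows where the same people live.
    map_view[:] = sorted(people[i][1] for i in sel)

model.subscribe(render_list)
model.subscribe(render_map)

model.select([1, 3])
print(list_view)  # ['Ann', 'Cai']
print(map_view)   # ['Cardiff', 'Swansea']
```

The same pattern carries over to immersive settings: the "views" become 3D or embedded representations, but the coordination mechanism is unchanged.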
9
Experimental Performance Evaluation of Enhanced User Interaction Components for Web-Based Collaborative Extended Reality. Applied Sciences (Basel) 2021. DOI: 10.3390/app11093811.
Abstract
COVID-19-related quarantine measures resulted in a significant increase of interest in online collaboration tools. This includes virtual reality (VR) or, more generally, extended reality (XR) solutions. Shared XR allows activities such as presentations, personnel training or therapy to take place in a virtual space instead of a real one. To make online XR as accessible as possible, significant effort has been put into the development of solutions that run directly in web browsers. One of the most recognized is the A-Frame software framework, created by the Mozilla VR team and supporting most contemporary XR hardware. In addition, an extension called Networked-Aframe allows multiple users to share virtual environments created using A-Frame in real time. In this article, we introduce and experimentally evaluate three components that extend the functionality of A-Frame and Networked-Aframe. The first extends Networked-Aframe with the ability to monitor and control users in a shared virtual scene. The second implements six-degrees-of-freedom motion tracking for smartphone-based VR headsets. The third brings hand gesture support to the Microsoft HoloLens holographic computer. The evaluation was performed in a dedicated local network environment with 5, 10, 15 and 20 client computers, each representing one user in a shared virtual scene. Since the experiments were carried out with and without the introduced components, the results presented here can also be regarded as a performance evaluation of A-Frame and Networked-Aframe themselves.