1
Lin T, Yang Y, Beyer J, Pfister H. Labeling Out-of-View Objects in Immersive Analytics to Support Situated Visual Searching. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2023; 29:1831-1844. [PMID: 34882554 DOI: 10.1109/tvcg.2021.3133511]
Abstract
Augmented Reality (AR) embeds digital information into objects of the physical world. Data can be shown in-situ, thereby enabling real-time visual comparisons and object search in real-life user tasks, such as comparing products and looking up scores in a sports game. While there have been studies on designing AR interfaces for situated information retrieval, there has only been limited research on AR object labeling for visual search tasks in the spatial environment. In this article, we identify and categorize different design aspects in AR label design and report on a formal user study on labels for out-of-view objects to support visual search tasks in AR. We design three visualization techniques for out-of-view object labeling in AR, which respectively encode the relative physical position (height-encoded), the rotational direction (angle-encoded), and the label values (value-encoded) of the objects. We further implement two traditional in-view object labeling techniques, where labels are placed either next to the respective objects (situated) or at the edge of the AR FoV (boundary). We evaluate these five label conditions in three visual search tasks for static objects. Our study shows that out-of-view object labels are beneficial when searching for objects outside the FoV, when orienting in space, and when comparing multiple spatially sparse objects. Angle-encoded labels with directional cues of the surrounding objects have the overall best performance with the highest user satisfaction. We discuss the implications of our findings for future immersive AR interface design.
2
Wen Z, Zeng W, Weng L, Liu Y, Xu M, Chen W. Effects of View Layout on Situated Analytics for Multiple-View Representations in Immersive Visualization. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2023; 29:440-450. [PMID: 36170396 DOI: 10.1109/tvcg.2022.3209475]
Abstract
Multiple-view (MV) representations, which enable multi-perspective exploration of large and complex data, are often employed on 2D displays. The technique also shows great potential for addressing complex analytic tasks in immersive visualization. However, the design space of MV representations in immersive visualization has not been deeply explored. In this paper, we propose a new perspective on this line of research by examining the effects of view layout for MV representations on situated analytics. Specifically, we disentangle situated analytics into situatedness, concerning the spatial relationship between visual representations and physical referents, and analytics, concerning cross-view data analysis including filtering, refocusing, and connecting tasks. Through an in-depth analysis of existing layout paradigms, we summarize design trade-offs for achieving high situatedness and effective analytics simultaneously. We then distill a list of design requirements for a layout that balances situatedness and analytics, and develop a prototype system with an automatic layout adaptation method that fulfills the requirements. The method mainly includes a cylindrical paradigm for an egocentric reference frame, and a force-directed method for proper view-view, view-user, and view-referent proximities and high view visibility. We conducted a formal user study that compares layouts produced by our method with linked and embedded layouts. Quantitative results show that participants finished filtering- and connecting-centered tasks significantly faster with our layouts, and user feedback confirms the high usability of the prototype system.
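The abstract names a force-directed method for view placement but gives no implementation details. As a rough, hypothetical sketch of the general idea (all function names, parameters, and constants below are our own, not the paper's): views arranged on a cylinder around the user are pulled toward the azimuth of their physical referents and pushed apart when they would overlap.

```python
import math

def layout_views(referent_angles, min_sep=0.35, k_attract=0.1, k_repel=0.05, iters=200):
    """Force-directed placement of views on a cylinder around the user.

    Each view is attracted to the azimuth of its physical referent and
    repelled by views closer than `min_sep`, so views stay situated near
    their referents without overlapping. Angles are in radians.
    """
    angles = list(referent_angles)  # start each view at its referent
    for _ in range(iters):
        forces = [0.0] * len(angles)
        for i, (a, ref) in enumerate(zip(angles, referent_angles)):
            # spring toward the referent keeps the view "situated";
            # atan2 gives the signed shortest angular difference
            forces[i] += k_attract * math.atan2(math.sin(ref - a), math.cos(ref - a))
            for j, b in enumerate(angles):
                if i == j:
                    continue
                d = math.atan2(math.sin(a - b), math.cos(a - b))
                if abs(d) < min_sep:  # too close: push apart
                    sign = 1.0 if d > 0 else -1.0 if d < 0 else (1.0 if i > j else -1.0)
                    forces[i] += k_repel * sign * (min_sep - abs(d))
        # synchronous update: all forces computed before any view moves
        angles = [(a + f) % (2 * math.pi) for a, f in zip(angles, forces)]
    return angles
```

At equilibrium the separation settles where the repulsion balances the springs, trading a little situatedness (distance to the referent) for visibility (non-overlapping views), which mirrors the trade-off the paper describes.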
3
Lin T, Chen Z, Beyer J, Wu Y, Pfister H, Yang Y, Tory M, Keefe DF. The Ball is in Our Court: Conducting Visualization Research With Sports Experts. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2023; 43:84-90. [PMID: 37022362 DOI: 10.1109/mcg.2022.3222042]
Abstract
Most sports visualizations rely on a combination of spatial, highly temporal, and user-centric data, making sports a challenging target for visualization. Emerging technologies, such as augmented and mixed reality (AR/XR), have brought exciting opportunities along with new challenges for sports visualization. We share our experience working with sports domain experts and present lessons learned from conducting visualization research in SportsXR. In our previous work, we have targeted different types of users in sports, including athletes, game analysts, and fans. Each user group has unique design constraints and requirements, such as obtaining real-time visual feedback in training, automating the low-level video analysis workflow, or personalizing embedded visualizations for live game data analysis. In this article, we synthesize the best practices and pitfalls we identified while working on SportsXR. We highlight lessons learned in working with sports domain experts in designing and evaluating sports visualizations and in working with emerging AR/XR technologies. We envision that sports visualization research will benefit the larger visualization community through its unique challenges and opportunities for immersive and situated analytics.
4
Dimara E, Zhang H, Tory M, Franconeri S. The Unmet Data Visualization Needs of Decision Makers Within Organizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:4101-4112. [PMID: 33872153 DOI: 10.1109/tvcg.2021.3074023]
Abstract
When an organization chooses one course of action over alternatives, this task typically falls on a decision maker with relevant knowledge, experience, and understanding of context. Decision makers rely on data analysis, which is either delegated to analysts, or done on their own. Often the decision maker combines data, likely uncertain or incomplete, with non-formalized knowledge within a multi-objective problem space, weighing the recommendations of analysts within broader contexts and goals. As most past research in visual analytics has focused on understanding the needs and challenges of data analysts, less is known about the tasks and challenges of organizational decision makers, and how visualization support tools might help. Here we characterize the decision maker as a domain expert, review relevant literature in management theories, and report the results of an empirical survey and interviews with people who make organizational decisions. We identify challenges and opportunities for novel visualization tools, including trade-off overviews, scenario-based analysis, interrogation tools, flexible data input and collaboration support. Our findings stress the need to expand visualization design beyond data analysis into tools for information management.
5
Yao L, Bezerianos A, Vuillemot R, Isenberg P. Visualization in Motion: A Research Agenda and Two Evaluations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:3546-3562. [PMID: 35727779 DOI: 10.1109/tvcg.2022.3184993]
Abstract
We contribute a research agenda for visualization in motion and two experiments to understand how well viewers can read data from moving visualizations. We define visualizations in motion as visual data representations that are used in contexts that exhibit relative motion between a viewer and an entire visualization. Sports analytics, video games, wearable devices, or data physicalizations are example contexts that involve different types of relative motion between a viewer and a visualization. To analyze the opportunities and challenges for designing visualization in motion, we show example scenarios and outline a first research agenda. Motivated primarily by the prevalence of and opportunities for visualizations in sports and video games, we started to investigate a small aspect of our research agenda: the impact of two important characteristics of motion, speed and trajectory, on a stationary viewer's ability to read data from moving donut and bar charts. We found that increasing speed and trajectory complexity did negatively affect the accuracy of reading values from the charts and that bar charts were more negatively impacted. In practice, however, this impact was small: both charts were still read fairly accurately.
6
Lin T, Chen Z, Yang Y, Chiappalupi D, Beyer J, Pfister H. The Quest for Omnioculars: Embedded Visualization for Augmenting Basketball Game Viewing Experiences. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; PP:962-971. [PMID: 36155468 DOI: 10.1109/tvcg.2022.3209353]
Abstract
Sports game data is becoming increasingly complex, often consisting of multivariate data such as player performance stats, historical team records, and athletes' positional tracking information. While numerous visual analytics systems have been developed for sports analysts to derive insights, few tools target fans to improve their understanding of and engagement with sports data during live games. By presenting extra data in the actual game views, embedded visualization has the potential to enhance fans' game-viewing experience. However, little is known about how to design such visualizations embedded in live games. In this work, we present a user-centered design study of developing interactive embedded visualizations for basketball fans to improve their live game-watching experiences. We first conducted a formative study to characterize basketball fans' in-game analysis behaviors and tasks. Based on our findings, we propose a design framework to inform the design of embedded visualizations based on specific data-seeking contexts. Following the design framework, we present five novel embedded visualization designs targeting five representative contexts identified by the fans, including shooting, offense, defense, player evaluation, and team comparison. We then developed Omnioculars, an interactive basketball game-viewing prototype that features the proposed embedded visualizations for fans' in-game data analysis. We evaluated Omnioculars in a simulated basketball game with basketball fans. The study results suggest that our design supports personalized in-game data analysis and enhances game understanding and engagement.
7
The Hitchhiker’s Guide to Fused Twins: A Review of Access to Digital Twins In Situ in Smart Cities. REMOTE SENSING 2022. [DOI: 10.3390/rs14133095]
Abstract
Smart Cities already surround us, and yet they are still incomprehensibly far from directly impacting everyday life. While current Smart Cities are often inaccessible, the experience of everyday citizens may be enhanced with a combination of the emerging technologies Digital Twins (DTs) and Situated Analytics. DTs represent their Physical Twin (PT) in the real world via models, simulations, (remotely) sensed data, context awareness, and interactions. However, interaction requires appropriate interfaces to address the complexity of the city. Ultimately, leveraging the potential of Smart Cities requires going beyond assembling the DT to be comprehensive and accessible. Situated Analytics allows for the anchoring of city information in its spatial context. We advance the concept of embedding the DT into the PT through Situated Analytics to form Fused Twins (FTs). This fusion allows access to data in the location where it is generated, in an embodied context that can make the data more understandable. Prototypes of FTs are rapidly emerging from different domains, but Smart Cities represent the context with the most potential for FTs in the future. This paper reviews DTs, Situated Analytics, and Smart Cities as the foundations of FTs. Regarding DTs, we define five components (physical, data, analytical, virtual, and Connection Environments) that we relate to several cognates (i.e., similar but different terms) from existing literature. Regarding Situated Analytics, we review the effects of user embodiment on cognition and cognitive load. Finally, we classify existing partial examples of FTs from the literature, address their construction from Augmented Reality, Geographic Information Systems, Building/City Information Models, and DTs, and provide an overview of future directions.
8
Holländer K, Hoggenmüller M, Gruber R, Völkel ST, Butz A. Take It to the Curb: Scalable Communication Between Autonomous Cars and Vulnerable Road Users Through Curbstone Displays. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.844245]
Abstract
Automated driving will require new approaches to the communication between vehicles and vulnerable road users (VRUs) such as pedestrians, e.g., through external human–machine interfaces (eHMIs). However, the majority of eHMI concepts are neither scalable (i.e., take into account complex traffic scenarios with multiple vehicles and VRUs), nor do they optimize traffic flow. Speculating on the upgrade of traffic infrastructure in the automated city, we propose Smart Curbs, a scalable communication concept integrated into the curbstone. Using a combination of immersive and non-immersive prototypes, we evaluated the suitability of our concept for complex urban environments in a user study (N = 18). Comparing the approach to a projection-based eHMI, our findings reveal that Smart Curbs are safer to use, as our participants spent less time on the road when crossing. Based on our findings, we discuss the potential of Smart Curbs to mitigate the scalability problem in AV-pedestrian communication and simultaneously enhance traffic flow.
9
Hedayati H, Suzuki R, Rees W, Leithinger D, Szafir D. Designing Expandable-Structure Robots for Human-Robot Interaction. Front Robot AI 2022; 9:719639. [PMID: 35480087 PMCID: PMC9035676 DOI: 10.3389/frobt.2022.719639]
Abstract
In this paper, we survey the emerging design space of expandable structures in robotics, with a focus on how such structures may improve human-robot interactions. We detail various implementation considerations for researchers seeking to integrate such structures in their own work and describe how expandable structures may lead to novel forms of interaction for a variety of different robots and applications, including structures that enable robots to alter their form to augment or gain entirely new capabilities, such as enhancing manipulation or navigation, structures that improve robot safety, structures that enable new forms of communication, and structures for robot swarms that enable the swarm to change shape both individually and collectively. To illustrate how these considerations may be operationalized, we also present three case studies from our own research in expandable structure robots, sharing our design process and our findings regarding how such structures enable robots to produce novel behaviors that may capture human attention, convey information, mimic emotion, and provide new types of dynamic affordances.
Affiliation(s)
- Hooman Hedayati: Department of Computer Science, University of Colorado, Boulder, CO, United States
- Ryo Suzuki: Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Wyatt Rees: Department of Computer Science, University of Colorado, Boulder, CO, United States
- Daniel Leithinger: Department of Computer Science, University of Colorado, Boulder, CO, United States; ATLAS Institute, University of Colorado, Boulder, CO, United States
- Daniel Szafir: Department of Computer Science, University of Colorado, Boulder, CO, United States; ATLAS Institute, University of Colorado, Boulder, CO, United States; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- *Correspondence: Daniel Szafir,
10
SmartShots: An Optimization Approach for Generating Videos with Data Visualizations Embedded. ACM T INTERACT INTEL 2022. [DOI: 10.1145/3484506]
Abstract
Videos are a well-received medium for storytellers to communicate various narratives. To further engage viewers, we introduce a novel visual medium where data visualizations are embedded into videos to present data insights. However, creating such data-driven videos requires professional video editing skills, data visualization knowledge, and even design talents. To ease this difficulty, we propose an optimization method and develop SmartShots, which facilitates the automatic integration of in-video visualizations. For its development, we first collaborated with experts from different backgrounds, including information visualization, design, and video production. Our discussions led to a design space that summarizes crucial design considerations along three dimensions: visualization, embedded layout, and rhythm. Based on that, we formulated an optimization problem that aims to address two challenges: (1) embedding visualizations while considering both contextual relevance and aesthetic principles and (2) generating videos by assembling multi-media materials. We show how SmartShots solves this optimization problem and demonstrate its usage in three cases. Finally, we report the results of semi-structured interviews with experts and amateur users on the usability of SmartShots.
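The abstract does not spell out the optimization itself. As a toy illustration of its first stated challenge, scoring candidate placements by contextual relevance against an aesthetic penalty (the function name, weights, and inputs below are hypothetical, not SmartShots' actual formulation):

```python
def best_placement(candidates, relevance, saliency, w_rel=1.0, w_aes=0.7):
    """Pick the best (frame, region) slot for an embedded chart.

    `relevance[frame]` ~ how well the chart matches that frame's content;
    `saliency[(frame, region)]` ~ fraction of visually important pixels the
    chart would cover there (an aesthetic penalty). Both lie in [0, 1].
    """
    def score(candidate):
        frame, region = candidate
        # reward contextual fit, penalize occluding salient video content
        return w_rel * relevance[frame] - w_aes * saliency[(frame, region)]
    return max(candidates, key=score)
```

A real system would optimize jointly over many clips and visualizations (plus the rhythm dimension), but the same relevance-versus-aesthetics trade-off drives each placement decision.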
11
Morais L, Jansen Y, Andrade N, Dragicevic P. Showing Data About People: A Design Space of Anthropographics. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:1661-1679. [PMID: 32903184 DOI: 10.1109/tvcg.2020.3023013]
Abstract
When showing data about people, visualization designers and data journalists often use design strategies that presumably help the audience relate to those people. The term anthropographics has been recently coined to refer to this practice and the resulting visualizations. Anthropographics is a rich and growing area, but the work so far has remained scattered. Despite preliminary empirical work and a few web essays written by practitioners, there is a lack of clear language for thinking about and communicating about anthropographics. We address this gap by introducing a conceptual framework and a design space for anthropographics. Our design space consists of seven elementary design dimensions that can be reasonably hypothesized to have some effect on prosocial feelings or behavior. It extends a previous design space and is informed by an analysis of 105 visualizations collected from newspapers, websites, and research articles. We use our conceptual framework and design space to discuss trade-offs, common design strategies, as well as future opportunities for design and research in the area of anthropographics.
12
One View Is Not Enough: Review of and Encouragement for Multiple and Alternative Representations in 3D and Immersive Visualisation. COMPUTERS 2022. [DOI: 10.3390/computers11020020]
Abstract
The opportunities for 3D visualisations are huge. People can be immersed inside their data, interface with it in natural ways, and see it in ways that are not possible on a traditional desktop screen. Indeed, 3D visualisations, especially those experienced inside head-mounted displays, are becoming popular. Much of this growth is driven by the availability, popularity and falling cost of head-mounted displays and other immersive technologies. However, there are also challenges. For example, data visualisation objects can be obscured, important facets missed (perhaps behind the viewer), and the interfaces may be unfamiliar. Some of these challenges are not unique to 3D immersive technologies. Indeed, developers of traditional 2D exploratory visualisation tools would use alternative views, across a multiple coordinated view (MCV) system. Coordinated view interfaces help users explore the richness of the data. For instance, an alphabetical list of people in one view shows everyone in the database, while a map view depicts where they live. Each view serves a different task or purpose. While it is possible to translate some desktop interface techniques into the 3D immersive world, it is not always clear what the equivalents would be. In this paper, using several case studies, we discuss the challenges and opportunities for using multiple views in immersive visualisation. Our aim is to provide a set of concepts that will enable developers to perform critical thinking, creative thinking and push the boundaries of what is possible with 3D and immersive visualisation. In summary, developers should consider how to integrate many views, techniques and presentation styles; one view is not enough when using 3D and immersive visualisations.
13
Bressa N, Korsgaard H, Tabard A, Houben S, Vermeulen J. What's the Situation with Situated Visualization? A Survey and Perspectives on Situatedness. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:107-117. [PMID: 34587065 DOI: 10.1109/tvcg.2021.3114835]
Abstract
Situated visualization is an emerging concept within visualization, in which data is visualized in situ, where it is relevant to people. The concept has gained interest from multiple research communities, including visualization, human-computer interaction (HCI) and augmented reality. This has led to a range of explorations and applications of the concept; however, this early work has focused on the operational aspect of situatedness, leading to inconsistent adoption of the concept and terminology. First, we contribute a literature survey in which we analyze 44 papers that explicitly use the term "situated visualization" to provide an overview of the research area: how it defines situated visualization, common application areas and technologies used, and the types of data and visualization involved. Our survey shows that research on situated visualization has focused on technology-centric approaches that foreground a spatial understanding of situatedness. Second, we contribute five perspectives on situatedness (space, time, place, activity, and community) that together expand on the prevalent notion of situatedness in the corpus. We draw from six case studies and prior theoretical developments in HCI. Each perspective develops a generative way of looking at and working with situatedness in design and research. We outline future directions, including considering technology, material and aesthetics, leveraging the perspectives for design, and methods for stronger engagement with target audiences. We conclude with opportunities to consolidate situated visualization research.
14
An Efficient, Platform-Independent Map Rendering Framework for Mobile Augmented Reality. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 2021. [DOI: 10.3390/ijgi10090593]
Abstract
With the extensive application of big spatial data and the emergence of spatial computing, augmented reality (AR) map rendering has attracted significant attention. A common issue in existing solutions is that AR-GIS systems rely on different platform-specific graphics libraries on different operating systems, so rendering implementations vary across platforms. This causes performance degradation and rendering styles that are inconsistent across environments. However, high-performance rendering consistency across devices is critical in AR-GIS, especially for edge collaborative computing. In this paper, we present a high-performance, platform-independent AR-GIS rendering engine: the augmented reality universal graphics library (AUGL) engine. A unified cross-platform interface is proposed to preserve AR-GIS rendering style consistency across platforms. High-performance AR-GIS map symbol drawing models are defined and implemented based on a unified algorithm interface. We also develop a pre-caching strategy, optimized spatial-index querying, and a GPU-accelerated vector drawing algorithm that minimizes IO latency throughout the rendering process. Comparisons to existing AR-GIS visualization engines indicate that the performance of the AUGL engine is two times higher than that of the AR-GIS rendering engine on the Android, iOS, and Vuforia platforms. The drawing efficiency for vector polygons is improved significantly. The rendering performance is more than three times better than the average performance of existing Android and iOS systems.
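The abstract mentions a pre-caching strategy and optimized spatial-index querying without giving implementation details. As a simplified, hypothetical illustration of how those two pieces can combine (the class, method names, and parameters are ours, not AUGL's): a uniform grid index answers viewport queries, and expanding the query box by a margin pre-fetches features just beyond the visible area before the camera pans onto them.

```python
from collections import defaultdict

class GridIndex:
    """Uniform-grid spatial index for 2D map features: an illustrative
    stand-in for the optimized spatial-index querying the paper describes."""

    def __init__(self, cell=100.0):
        self.cell = cell
        self.cells = defaultdict(list)  # (cx, cy) -> feature ids

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, feature_id, x, y):
        self.cells[self._key(x, y)].append(feature_id)

    def query(self, xmin, ymin, xmax, ymax, margin=0.0):
        """Return features in all cells intersecting the (expanded) box.

        `margin > 0` models pre-caching: features just outside the viewport
        are loaded early. The result is coarse (cell granularity), so a
        renderer would clip precisely afterwards.
        """
        xmin, ymin, xmax, ymax = xmin - margin, ymin - margin, xmax + margin, ymax + margin
        hits = []
        for cx in range(int(xmin // self.cell), int(xmax // self.cell) + 1):
            for cy in range(int(ymin // self.cell), int(ymax // self.cell) + 1):
                hits.extend(self.cells.get((cx, cy), ()))
        return hits
```

In a real engine the index would hold tiles or symbol batches and feed an asynchronous loader, but the pattern of querying a padded viewport to hide IO latency is the same.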
15
Zollmann S, Langlotz T, Grasset R, Lo WH, Mori S, Regenbrecht H. Visualization Techniques in Augmented Reality: A Taxonomy, Methods and Patterns. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:3808-3825. [PMID: 32275601 DOI: 10.1109/tvcg.2020.2986247]
Abstract
In recent years, the development of Augmented Reality (AR) frameworks has made AR application development widely accessible to developers without an AR expert background. With this development, new application fields for AR are on the rise. This comes with an increased need for visualization techniques that are suitable for a wide range of application areas, and it becomes more important for a wider audience to gain a better understanding of existing AR visualization techniques. In this article we provide a taxonomy of existing work on visualization techniques in AR. The taxonomy aims to give researchers and developers without an in-depth background in Augmented Reality the information needed to successfully apply visualization techniques in Augmented Reality environments. We also describe required components and methods and analyze common patterns.
16
Butcher PWS, John NW, Ritsos PD. VRIA: A Web-Based Framework for Creating Immersive Analytics Experiences. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:3213-3225. [PMID: 31944959 DOI: 10.1109/tvcg.2020.2965109]
Abstract
We present VRIA, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. VRIA is built upon WebVR, A-Frame, React and D3.js, and offers a visualization creation workflow that enables users of different levels of expertise to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries through the HTML Document Object Model (DOM). This makes VRIA ubiquitous and platform-independent. Moreover, by using WebVR's progressive enhancement, the experiences VRIA creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the VRIA creation workflow and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss scalability implications of our framework, and present a series of use case applications to demonstrate the various features of VRIA. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
17
Guarese R, Andreasson P, Nilsson E, Maciel A. Augmented situated visualization methods towards electromagnetic compatibility testing. COMPUTERS & GRAPHICS 2021; 94:1-10. [PMID: 33082609 PMCID: PMC7560504 DOI: 10.1016/j.cag.2020.10.001]
Abstract
In electrical engineering, hardware experts often need to analyze electromagnetic radiation data to detect any external interference or anomaly. The field that studies this sort of assessment is called electromagnetic compatibility (EMC). To support EMC analysis, we propose the use of Augmented Situated Visualization (ASV) to supply professionals with visual and interactive information that helps them comprehend that data by situating it where it is most relevant in its spatial context. Users are able to interact with the visualization by changing the attributes being displayed, comparing the overlaps of multiple fields, and extracting data, as a way to refine their search. The solutions proposed in this work were tested against each other in comparable 2D and 3D interactive visualizations of the same data in a series of data-extraction assessments with users, as a means to validate the approaches. Results exposed a correctness-time trade-off between the interaction methods. The hand-based techniques (Hand Slider and Touch Lens) were the least error-prone, inducing roughly half as many errors as the gaze-based method. Touch Lens was also the least time-consuming method, taking on average less than half of the time required by the others. For the visualization methods tested, the 2D ray casts presented a higher usability score and a lower workload index than the 3D topology view, but over twice the error ratio. Ultimately, this work shows how AR can help users perform better in a decision-making context, particularly in EMC-related tasks, while also furthering research in the ASV field.
Affiliation(s)
- Renan Guarese
- Federal University of Rio Grande do Sul (UFRGS), Institute of Informatics (INF), Porto Alegre 91501-970, Brazil
- Royal Melbourne Institute of Technology (RMIT), Melbourne 3001, Australia
- Pererik Andreasson
- Halmstad University (HH), School of Information Technology, Halmstad 302-50, Sweden
- Emil Nilsson
- Halmstad University (HH), School of Information Technology, Halmstad 302-50, Sweden
- Anderson Maciel
- Federal University of Rio Grande do Sul (UFRGS), Institute of Informatics (INF), Porto Alegre 91501-970, Brazil
|
18
|
Ens B, Goodwin S, Prouzeau A, Anderson F, Wang FY, Gratzl S, Lucarelli Z, Moyle B, Smiley J, Dwyer T. Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1193-1203. [PMID: 33074810 DOI: 10.1109/tvcg.2020.3030334] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that would benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative' scenarios require engaging features to draw users' attention, with intuitive, 'walk-up and use' interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed that relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.
|
19
|
Reipschlager P, Flemisch T, Dachselt R. Personal Augmented Reality for Information Visualization on Large Interactive Displays. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1182-1192. [PMID: 33052863 DOI: 10.1109/tvcg.2020.3030460] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In this work we propose the combination of large interactive displays with personal head-mounted Augmented Reality (AR) for information visualization to facilitate data exploration and analysis. Even though large displays provide more display space, they are challenging with regard to perception, effective multi-user support, and managing data density and complexity. To address these issues and illustrate our proposed setup, we contribute an extensive design space comprising first, the spatial alignment of display, visualizations, and objects in AR space. Next, we discuss which parts of a visualization can be augmented. Finally, we analyze how AR can be used to display personal views in order to show additional information and to minimize the mutual disturbance of data analysts. Based on this conceptual foundation, we present a number of exemplary techniques for extending visualizations with AR and discuss their relation to our design space. We further describe how these techniques address typical visualization problems that we have identified during our literature research. To examine our concepts, we introduce a generic AR visualization framework as well as a prototype implementing several example techniques. In order to demonstrate their potential, we further present a use case walkthrough in which we analyze a movie data set. From these experiences, we conclude that the contributed techniques can be useful in exploring and understanding multivariate data. We are convinced that the extension of large displays with AR for information visualization has a great potential for data analysis and sense-making.
|
20
|
Perovich LJ, Wylie SA, Bongiovanni R. Chemicals in the Creek: designing a situated data physicalization of open government data with the community. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:913-923. [PMID: 33079668 DOI: 10.1109/tvcg.2020.3030472] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Over the last decade, growing amounts of government data have been made available in an attempt to increase transparency and civic participation, but it is unclear whether this data serves non-expert communities due to gaps in access and the technical knowledge needed to interpret this "open" data. We conducted a two-year design study focused on the creation of a community-based data display using United States Environmental Protection Agency data on water permit violations by oil storage facilities on the Chelsea Creek in Massachusetts, to explore whether situated data physicalization and Participatory Action Research could support meaningful engagement with open data. We selected this data as it is of interest to local groups and available online, yet remains largely invisible and inaccessible to the Chelsea community. The resulting installation, Chemicals in the Creek, responds to the call for community-engaged visualization processes and provides an application of situated methods of data representation. It proposes event-centered and power-aware modes of engagement using contextual and embodied data representations. The design of Chemicals in the Creek is grounded in interactive workshops, and we analyze it through event observation, interviews, and community outcomes. We reflect on the role of community-engaged research in the Information Visualization community relative to recent conversations on new approaches to design studies and evaluation.
|
21
|
Perovich LJ, Cai P, Guo A, Zimmerman K, Paseman K, Espinoza Silva D, Brody JG. Data Clothing and BigBarChart: Designing Physical Data Reports on Indoor Pollutants for Individuals and Communities. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2021; 41:87-98. [PMID: 32956039 PMCID: PMC7959249 DOI: 10.1109/mcg.2020.3025322] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In response to participant preferences and new ethics guidelines, researchers are increasingly sharing data with health study participants, including data on their own household chemical exposures. Data physicalization may be a useful tool for these communications, because it is thought to be accessible to a general audience and emotionally engaging. However, there are limited studies of data physicalization in the wild with diverse communities. Our application of this method in the Green Housing Study is an early example of using data physicalization in environmental health report-back. We gathered feedback through community meetings, prototype testing, and semistructured interviews, leading to the development of data t-shirts and other garments and person-sized bar charts. We found that participants were enthusiastic about the data physicalizations, that the physicalizations connected them to their previous experience, and that they had varying desires to share their data. Our findings suggest that researchers can enhance environmental communications by further developing the human experience of physicalizations and engaging diverse communities.
|
22
|
Panagiotidou G, Gorucu S, Vande Moere A. Data Badges: Making an Academic Profile Through a DIY Wearable Physicalization. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2020; 40:51-60. [PMID: 32956041 DOI: 10.1109/mcg.2020.3025504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In this pictorial, we present the design and making process of Data Badges as they were deployed during a one-week academic seminar. Data Badges are customizable physical conference badges that invite participants to make their own independent and personalized expressions of their academic profile by choosing and assembling a collection of predefined physical tokens on a flat wearable canvas. As our modular and intuitive design approach allows the construction to occur as a shared, collective activity, Data Badges take advantage of the creative, affective, and social values that underlie physicalization and its construction to engage participants in reflecting on personal data. Among other unexpected phenomena, we noticed how the freedom of assembly and interpretation encouraged a variety of appropriations, which expanded its intended representational space from fully representative to more resistive and provocative forms of data expression.
|
23
|
Chen Z, Su Y, Wang Y, Wang Q, Qu H, Wu Y. MARVisT: Authoring Glyph-Based Visualization in Mobile Augmented Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2645-2658. [PMID: 30640614 DOI: 10.1109/tvcg.2019.2892415] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Recent advances in mobile augmented reality (AR) techniques have shed new light on personal visualization, given their advantages of fitting visualization within personal routines, situating visualization in a real-world context, and arousing users' interests. However, enabling non-experts to create data visualizations in mobile AR environments is challenging given the lack of tools that allow in-situ design while supporting the binding of data to AR content. Most existing AR authoring tools require working on personal computers or manually creating each virtual object and modifying its visual attributes. We systematically study this issue by identifying the specific requirements of an AR glyph-based visualization authoring tool and distilling four design considerations. Following these design considerations, we design and implement MARVisT, a mobile authoring tool that leverages information from reality to assist non-experts in addressing relationships between data and virtual glyphs, real objects and virtual glyphs, and real objects and data. With MARVisT, users without visualization expertise can bind data to real-world objects to create expressive AR glyph-based visualizations rapidly and effortlessly, reshaping the representation of the real world with data. We use several examples to demonstrate the expressiveness of MARVisT. A user study with non-experts is also conducted to evaluate the authoring experience of MARVisT.
|
24
|
Whitlock M, Wu K, Szafir DA. Designing for Mobile and Immersive Visual Analytics in the Field. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:503-513. [PMID: 31425088 DOI: 10.1109/tvcg.2019.2934282] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Data collection and analysis in the field is critical for operations in domains such as environmental science and public safety. However, field workers currently face data- and platform-oriented issues in efficient data collection and analysis in the field, such as limited connectivity, screen space, and attentional resources. In this paper, we explore how visual analytics tools might transform field practices by more deeply integrating data into these operations. We use a design probe coupling mobile, cloud, and immersive analytics components to guide interviews with ten experts from five domains to explore how visual analytics could support data collection and analysis needs in the field. The results identify shortcomings of current approaches and target scenarios and design considerations for future field analysis systems. We embody these findings in FieldView, an extensible, open-source prototype designed to support critical use cases for situated field analysis. Our findings suggest the potential for integrating mobile and immersive technologies to enhance data's utility for various field operations and new directions for visual analytics tools to transform fieldwork.
|
25
|
Dimara E, Perin C. What is Interaction for Data Visualization? IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:119-129. [PMID: 31425089 DOI: 10.1109/tvcg.2019.2934283] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Interaction is fundamental to data visualization, but what "interaction" means in the context of visualization is ambiguous and confusing. We argue that this confusion is due to a lack of consensual definition. To tackle this problem, we start by synthesizing an inclusive view of interaction in the visualization community - including insights from information visualization, visual analytics and scientific visualization, as well as the input of both senior and junior visualization researchers. Once this view takes shape, we look at how interaction is defined in the field of human-computer interaction (HCI). By extracting commonalities and differences between the views of interaction in visualization and in HCI, we synthesize a definition of interaction for visualization. Our definition is meant to be a thinking tool and inspire novel and bolder interaction design practices. We hope that by better understanding what interaction in visualization is and what it can be, we will enrich the quality of interaction in visualization systems and empower those who use them.
|
26
|
Offenhuber D. Data by Proxy - Material Traces as Autographic Visualizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:98-108. [PMID: 31443017 DOI: 10.1109/tvcg.2019.2934788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Information visualization limits itself, per definition, to the domain of symbolic information. This paper discusses arguments why the field should also consider forms of data that are not symbolically encoded, including physical traces and material indicators. Continuing a provocation presented by Pat Hanrahan in his 2004 IEEE Vis capstone address, this paper compares physical traces to visualizations and describes the techniques and visual practices for producing, revealing, and interpreting them. By contrasting information visualization with a speculative counter model of autographic visualization, this paper examines the design principles for material data. Autographic visualization addresses limitations of information visualization, such as the inability to directly reflect the material circumstances of data generation. The comparison between the two models allows probing the epistemic assumptions behind information visualization and uncovers linkages with the rich history of scientific visualization and trace reading. The paper begins by discussing the gap between data visualizations and their corresponding phenomena and proceeds by investigating how material visualizations can bridge this gap. It contextualizes autographic visualization with paradigms such as data physicalization and indexical visualization and grounds it in the broader theoretical literature of semiotics, science and technology studies (STS), and the history of scientific representation. The main section of the paper proposes a foundational design vocabulary for autographic visualization and offers examples of how citizen scientists already use autographic principles in their displays, which seem to violate the canonical principles of information visualization but succeed at fulfilling other rhetorical purposes in evidence construction. The paper concludes with a discussion of the limitations of autographic visualization, a roadmap for the empirical investigation of trace perception, and thoughts about how information visualization and autographic visualization techniques can contribute to each other.
|
27
|
Rowen A, Grabowski M, Rancy JP, Crane A. Impacts of Wearable Augmented Reality Displays on operator performance, Situation Awareness, and communication in safety-critical systems. APPLIED ERGONOMICS 2019; 80:17-27. [PMID: 31280802 DOI: 10.1016/j.apergo.2019.04.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Revised: 03/22/2019] [Accepted: 04/26/2019] [Indexed: 06/09/2023]
Abstract
Wearable Augmented Reality Displays (WARDs) present situated, real-time information visually, providing immediate access to information to support decision making. The impacts of WARD use on operator performance, Situation Awareness (SA), and communication in one safety-critical system, marine transportation, were examined in a real-time physical simulator. WARD use improved operator trackkeeping performance, the practice of good seamanship, and SA, although operator responsiveness decreased. WARD users who used more closed-loop communication and information sharing showed improved threat avoidance, suggesting that operators can avoid accidents and failure through WARD use that promotes sharing and confirming information. WARD use also promoted information source diversity, a means of developing requisite variety. These operational impacts are important in safety-critical settings where failures can be catastrophic.
Affiliation(s)
- Aaron Rowen
- Industrial and Systems Engineering, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY, 12180-3590, United States.
- Martha Grabowski
- Industrial and Systems Engineering, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY, 12180-3590, United States; Information Systems, Madden School of Business, Le Moyne College, 1419 Salt Springs Road, Syracuse, NY, 132214, United States.
- Jean-Philippe Rancy
- School of Information Studies, Syracuse University, 343 Hinds Hall, Syracuse, NY, 13244, United States.
- Alyssa Crane
- Information Systems, Madden School of Business, Le Moyne College, 1419 Salt Springs Road, Syracuse, NY, 132214, United States.
|
28
|
Sicat R, Li J, Choi J, Cordeil M, Jeong WK, Bach B, Pfister H. DXR: A Toolkit for Building Immersive Data Visualizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:715-725. [PMID: 30136991 DOI: 10.1109/tvcg.2018.2865152] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging and often requires complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. These can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for easy and quick edits and previews of visualization designs in-situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.
|
29
|
“It’s like holding a human heart”: the design of Vital + Morph, a shape-changing interface for remote monitoring. AI & SOCIETY 2018. [DOI: 10.1007/s00146-017-0752-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
30
|
Hull CH, Willett W. Data Tectonics: A Framework for Building Physical and Immersive Data Representations. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2018; 38:11-17. [PMID: 30273123 DOI: 10.1109/mcg.2018.053491726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This paper introduces the concept of data tectonics: a unifying principle that structures the physical and conceptual relationships between six elements (context, data, representation, materiality, fabrication method, and interactions) to create meaningful data experiences.
|
31
|