1. Yu P, Nordman A, Koc-Januchta M, Schonborn K, Besancon L, Vrotsou K. Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment. IEEE Transactions on Visualization and Computer Graphics 2025; 31:831-841. [PMID: 39255130; DOI: 10.1109/tvcg.2024.3456187]
Abstract
We present a visual analytics approach for multi-level visual exploration of users' interaction strategies in an interactive digital environment. Interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom's taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analyzing such logs is challenging, especially in terms of coupling interactions with cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level as well as multi-user comparison. The approach uses algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. We implement our approach in an interactive visual analytics prototype system, named VISID, and, together with domain experts, discover a set of task-solving exploration strategies, such as "cascading" and "nested-loop", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.
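The sequence-similarity step described above can be grounded with a minimal sketch: Levenshtein edit distance is one standard way to compare event sequences extracted from interaction logs. This is an illustrative choice; the abstract does not name the exact algorithm VISID uses, and the event names below are hypothetical.

```python
def edit_distance(a, b):
    """Levenshtein distance between two event sequences, computed with a
    rolling one-row dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute (free if equal)
        prev = cur
    return prev[-1]

# Two users' interaction sequences differing by one "pan" event:
s1 = ["zoom", "pan", "select", "zoom"]
s2 = ["zoom", "select", "zoom"]
print(edit_distance(s1, s2))  # → 1
```

Pairwise distances of this kind can then feed clustering or ranking to surface groups of users with similar exploration strategies.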
2. Nam JW, Isenberg T, Keefe DF. V-Mail: 3D-Enabled Correspondence About Spatial Data on (Almost) All Your Devices. IEEE Transactions on Visualization and Computer Graphics 2024; 30:1853-1867. [PMID: 37015540; DOI: 10.1109/tvcg.2022.3229017]
Abstract
We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail "data stories" are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.
3. Kouril D, Strnad O, Mindek P, Halladjian S, Isenberg T, Groller ME, Viola I. Molecumentary: Adaptable Narrated Documentaries Using Molecular Visualization. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1733-1747. [PMID: 34822330; DOI: 10.1109/tvcg.2021.3130670]
Abstract
We present a method for producing documentary-style content using real-time scientific visualization. We introduce molecumentaries, i.e., molecular documentaries featuring structural models from molecular biology, created through adaptable methods instead of the rigid traditional production pipeline. Our work is motivated by the rapid evolution of scientific visualization and its potential in science dissemination. Without some form of explanation or guidance, however, novices and lay-persons often find it difficult to gain insights from the visualization itself. We integrate such knowledge using the verbal channel and provide it alongside an engaging visual presentation. To realize the synthesis of a molecumentary, we provide technical solutions along two major production steps: (1) preparing a story structure and (2) turning the story into a concrete narrative. In the first step, we compile information about the model from heterogeneous sources into a story graph. We combine local knowledge with external sources to complete the story graph and enrich the final result. In the second step, we synthesize a narrative, i.e., story elements presented in sequence, using the story graph. We then traverse the story graph and generate a virtual tour, using automated camera and visualization transitions. We turn texts written by domain experts into verbal representations using text-to-speech functionality and provide them as commentary. Using the described framework, we synthesize fly-throughs with descriptions: automatic ones that mimic a manually authored documentary, or semi-automatic ones that guide the documentary narrative solely through curated textual input.
4. Latif S, Tarner H, Beck F. Talking Realities: Audio Guides in Virtual Reality Visualizations. IEEE Computer Graphics and Applications 2022; 42:73-83. [PMID: 33560980; DOI: 10.1109/mcg.2021.3058129]
Abstract
Building upon the ideas of storytelling and explorable explanations, we introduce Talking Realities, a concept for producing data-driven interactive narratives in virtual reality. It combines an audio narrative with an immersive visualization to communicate analysis results. The narrative is automatically produced using template-based natural language generation and adapts to data and user interactions. The synchronized animation of visual elements in accordance with the audio connects the two representations. In addition, we discuss various modes of explanation ranging from fully guided tours to free exploration of the data. We demonstrate the applicability of our concept by developing a virtual reality visualization for air traffic data. Furthermore, generalizability is exhibited by sketching mock-ups for two more application scenarios in the context of information and scientific visualization.
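The template-based generation step can be sketched as follows. The air-traffic example echoes the paper's application domain, but the function name, template wording, and thresholds are hypothetical, not taken from the paper:

```python
def narrate(airport: str, flights: int, delayed: int) -> str:
    """Fill a fixed narrative template with data values (template-based NLG)."""
    share = delayed / flights
    # Map the numeric value to a qualitative phrase for the spoken narrative.
    level = "most" if share > 0.5 else "some" if share > 0.1 else "few"
    return (f"At {airport}, {flights} flights were recorded; "
            f"{level} of them ({share:.0%}) departed late.")

print(narrate("LHR", 1200, 180))
# → At LHR, 1200 flights were recorded; some of them (15%) departed late.
```

In the paper's setting, a string like this would be passed to text-to-speech and synchronized with animated highlights of the corresponding visual elements.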
5. Making time/breaking time: critical literacy and politics of time in data visualisation. Journal of Documentation 2021. [DOI: 10.1108/jd-12-2020-0210]
Abstract
Purpose: Representations of time are commonly used to construct narratives in visualisations of data. However, since time is a value-laden concept, and no representation can provide a full, objective account of "temporal reality", such representations are also biased and political: reproducing and reinforcing certain views and values at the expense of alternative ones. This conceptual paper aims to explore expressions of temporal bias and politics in data visualisation, along with possibly mitigating user approaches and design strategies.
Design/methodology/approach: This study presents a theoretical framework rooted in a sociotechnical view of representations as biased and political, combined with perspectives from critical literacy, radical literacy and critical design. The framework provides a basis for discussion of various types and effects of temporal bias in visualisation. Empirical examples from previous research and public resources illustrate the arguments.
Findings: Four types of political effects of temporal bias in visualisations are presented: limitation of view, disregard of variation, oppression of social groups, and misrepresentation of topic. Appropriate critical and radical literacy approaches require users and designers to critique, contextualise, counter and cross beyond expressions of these effects. Supporting critical design strategies involve the inclusion of multiple datasets and representations; broad access to flexible tools; and inclusive participation of marginalised groups.
Originality/value: The paper draws attention to a vital, yet little researched problem of temporal representation in visualisations of data. It offers a pioneering bridging of critical literacy, radical literacy and critical design and emphasises mutual rather than contradictory interests of the empirical sciences and humanities.
6. Costa J, Bock A, Emmart C, Hansen C, Ynnerman A, Silva C. Interactive Visualization of Atmospheric Effects for Celestial Bodies. IEEE Transactions on Visualization and Computer Graphics 2021; 27:785-795. [PMID: 33048680; DOI: 10.1109/tvcg.2020.3030333]
Abstract
We present an atmospheric model tailored for the interactive visualization of planetary surfaces. As the exploration of the solar system is progressing with increasingly accurate missions and instruments, the faithful visualization of planetary environments is gaining increasing interest in space research, mission planning, and science communication and education. Atmospheric effects are crucial in data analysis and provide contextual information for planetary data. Our model correctly accounts for the non-linear path of the light inside the atmosphere (in Earth's case), the light absorption effects by molecules and dust particles, such as the ozone layer and the Martian dust, and a wavelength-dependent phase function for Mie scattering. The model focuses on interactivity, versatility, and customization, and a comprehensive set of interactive controls makes it possible to adapt its appearance dynamically. We demonstrate our results using Earth and Mars as examples. However, it can be readily adapted for the exploration of other atmospheres found, for example, on exoplanets. For Earth's atmosphere, we visually compare our results with pictures taken from the International Space Station and against the CIE clear sky model. The Martian atmosphere is reproduced based on available scientific data and feedback from domain experts, and is compared to images taken by the Curiosity rover. The work presented here has been implemented in the OpenSpace system, which enables interactive parameter setting and real-time feedback visualization targeting presentations in a wide range of environments, from immersive dome theaters to virtual reality headsets.
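The scattering terms mentioned above rest on standard phase functions. A minimal sketch using the Rayleigh phase function (molecular scattering) and the common Henyey-Greenstein approximation of Mie scattering follows; the paper's exact wavelength-dependent phase function may differ:

```python
import math

def rayleigh_phase(cos_theta: float) -> float:
    # Rayleigh scattering phase function for small molecules,
    # normalized so it integrates to 1 over the unit sphere.
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta ** 2)

def hg_phase(cos_theta: float, g: float) -> float:
    # Henyey-Greenstein approximation of the Mie phase function;
    # g is the asymmetry parameter (g > 0 means forward scattering,
    # as for Martian dust; g = 0 is isotropic).
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)
```

A renderer would evaluate these per wavelength band along each view ray when accumulating in-scattered light, with `g` chosen per particle type (aerosols, dust).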
7. Host G, Palmerius K, Schonborn K. Nano for the Public: An Exploranation Perspective. IEEE Computer Graphics and Applications 2020; 40:32-42. [PMID: 32070944; DOI: 10.1109/mcg.2020.2973120]
Abstract
Public understanding of contemporary scientific issues is critical for the future of society. Public spaces, such as science centers, can impact the communication of science by providing active knowledge-building experiences of scientific phenomena. In contributing to this vision, we have previously developed an interactive visualization as part of a public exhibition about nano. We reflect on how the immersive design and features of the exhibit contribute as a tool for science communication in light of the emerging paradigm of exploranation, and offer some forward-looking perspectives about what this notion has to offer the domain.
8. Murchie KJ, Diomede D. Fundamentals of graphic design—essential tools for effective visual science communication. Facets 2020. [DOI: 10.1139/facets-2018-0049]
Abstract
Guidance on improving the visual aspects of science communication ranges from "recipe"-style instructions to hyper-focused aspects of data visualization. Currently lacking in the peer-reviewed literature is a graphic design primer offering a high-level overview of basic design principles and associated jargon related to layout, imagery, typeface, and colour. We illustrate why these aspects are important to effective communication. Further, we provide considerations on when to solicit professional assistance and what to expect when working with graphic designers. Having the fundamental principles of good design in your toolbox facilitates the production of effective visual communication related to your research and fruitful scientist–designer collaborations.
Affiliation(s)
- Karen J. Murchie: Daniel P. Haerther Center for Conservation and Research, John G. Shedd Aquarium, 1200 South Lake Shore Drive, Chicago, IL 60605, USA
- Dylan Diomede: Diomedesign, 563 Sunset Ave., West Chicago, IL 60185, USA

9. Bock A, Axelsson E, Costa J, Payne G, Acinapura M, Trakinski V, Emmart C, Silva C, Hansen C, Ynnerman A. OpenSpace: A System for Astrographics. IEEE Transactions on Visualization and Computer Graphics 2020; 26:633-642. [PMID: 31425082; DOI: 10.1109/tvcg.2019.2934259]
Abstract
Human knowledge about the cosmos is rapidly increasing as instruments and simulations are generating new data supporting the formation of theory and understanding of the vastness and complexity of the universe. OpenSpace is a software system that takes on the mission of providing an integrated view of all these sources of data and supports interactive exploration of the known universe, from the millimeter scale showing instruments on spacecraft to billions of light years when visualizing the early universe. The ambition is to support research in astronomy and space exploration, science communication in museums and planetariums, as well as bringing exploratory astrographics to the classroom. There is a multitude of challenges that need to be met in reaching this goal, such as the data variety, multiple spatio-temporal scales, and collaboration capabilities. Furthermore, the system has to be flexible and modular to enable rapid prototyping and inclusion of new research results or space mission data, and thereby shorten the time from discovery to dissemination. To support the different use cases, the system has to be hardware agnostic and support a range of platforms and interaction paradigms. In this paper we describe how OpenSpace meets these challenges in an open source effort that is paving the path for the next generation of interactive astrographics.
10. Mumtaz H, Latif S, Beck F. Exploranative Code Quality Documents. IEEE Transactions on Visualization and Computer Graphics 2020; 26:1129-1139. [PMID: 31443011; DOI: 10.1109/tvcg.2019.2934669]
Abstract
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
11. What Biological Visualizations Do Science Center Visitors Prefer in an Interactive Touch Table? Education Sciences 2018. [DOI: 10.3390/educsci8040166]
Abstract
Hands-on digital interactivity in science centers provides new communicative opportunities. The Microcosmos multi-touch table allows visitors to interact with 64 image “cards” of (sub)microscopic biological structures and processes embedded across seven theme categories. This study presents the integration of biological content, interactive features and logging capabilities into the table, and analyses visitors’ usage and preferences. Data logging recorded 2,070,350 events including activated category, selected card, and various finger-based gestures. Visitors interacted with all cards during 858 sessions (96 s on average). Finger movements covered an average accumulated distance of 4.6 m per session, and about 56% of card interactions involved two fingers. Visitors made 5.53 category switches per session on average, and the virus category was most activated (average 0.96 per session). An overall ranking score related to card attractive power and holding power revealed that six of the most highly used cards depicted viruses and four were colourful instrument output images. The large finger traversal distance and proportion of two-finger card interaction may indicate the intuitiveness of the gestures. Observed trends in visitor engagement with the biological visualizations are considered in terms of construal level theory. Future work will examine how interactions are related to potential learning of biological content.
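The attracting-power and holding-power ranking can be sketched from such an event log. The combined score below (product of the two measures) and the event-tuple layout are illustrative assumptions, not necessarily the study's exact formula:

```python
from collections import defaultdict

def rank_cards(events):
    """Rank cards from (session_id, card_id, duration_s) log tuples.
    Attracting power: share of sessions in which a card was touched at all.
    Holding power: mean interaction time per session that used the card.
    Cards are returned sorted by the product of the two (highest first)."""
    sessions = {s for s, _, _ in events}
    touched = defaultdict(set)    # card -> sessions that used it
    time_on = defaultdict(float)  # card -> total interaction time
    for session, card, duration in events:
        touched[card].add(session)
        time_on[card] += duration
    scores = {}
    for card in touched:
        attract = len(touched[card]) / len(sessions)
        hold = time_on[card] / len(touched[card])
        scores[card] = attract * hold
    return sorted(scores, key=scores.get, reverse=True)

log = [(1, "virus", 30.0), (1, "cell", 5.0),
       (2, "virus", 20.0), (2, "dna", 10.0),
       (3, "cell", 2.0)]
print(rank_cards(log))  # → ['virus', 'dna', 'cell']
```

Applied to the 858 logged sessions, a ranking of this kind is what surfaces the virus cards and instrument-output images as the most engaging content.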