1. Offenwanger A, Tsandilas T, Chevalier F. DataGarden: Formalizing Personal Sketches into Structured Visualization Templates. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2025; 31:1268-1278. [PMID: 39255138] [DOI: 10.1109/tvcg.2024.3456336]
Abstract
Sketching is a common practice among visualization designers and serves as an approachable entry point to data visualization for non-experts. However, moving from a sketch to a full-fledged data visualization often requires discarding the original sketch and recreating the visualization from scratch. Our goal is to formalize these sketches so they can support iteration and systematic data mapping through a visual-first templating workflow. In this workflow, authors sketch a representative visualization and structure it into an expressive template for an envisioned or partial dataset, capturing implicit style as well as explicit data mappings. To demonstrate the proposed workflow, we implement DataGarden and evaluate it through a reproduction study and a freeform study. We investigate how DataGarden supports personal expression, examine the variety of visualizations that authors can produce with it, identify cases that reveal the limitations of our approach, and discuss avenues for future work.
2. L'Yi S, van den Brandt A, Adams E, Nguyen HN, Gehlenborg N. Learnable and Expressive Visualization Authoring Through Blended Interfaces. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2025; 31:459-469. [PMID: 39255109] [PMCID: PMC11875996] [DOI: 10.1109/tvcg.2024.3456598]
Abstract
A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experience with user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressiveness in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces to support authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces (template-based, shelf configuration, natural language, and code editor) that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce unfamiliar visualization examples without a guided tutorial. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. Reflecting on our study results and the design of our system, we discuss the interaction patterns that we identified and design implications for blending visualization authoring interfaces.
3. van den Brandt A, L'Yi S, Nguyen HN, Vilanova A, Gehlenborg N. Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2025; 31:1180-1190. [PMID: 39288066] [PMCID: PMC11875953] [DOI: 10.1109/tvcg.2024.3456298]
Abstract
Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users' intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques. All supplemental materials are available at https://osf.io/bdj4v/.
4. Lin H, Akbaba D, Meyer M, Lex A. Data Hunches: Incorporating Personal Knowledge into Visualizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2023; 29:504-514. [PMID: 36155455] [DOI: 10.1109/tvcg.2022.3209451]
Abstract
The trouble with data is that it frequently provides only an imperfect representation of a phenomenon of interest. Experts who are familiar with their datasets will often make implicit, mental corrections when analyzing a dataset, or will be cautious not to be overly confident about their findings if caveats are present. However, personal knowledge about the caveats of a dataset is typically not incorporated in a structured way, which is problematic if others who lack that knowledge interpret the data. In this work, we define such analysts' knowledge about datasets as data hunches. We differentiate data hunches from uncertainty and discuss types of hunches. We then explore ways of recording data hunches, and, based on a prototypical design, develop recommendations for designing visualizations that support data hunches. We conclude by discussing various challenges associated with data hunches, including the potential for harm and challenges for trust and privacy. We envision that data hunches will empower analysts to externalize their knowledge, facilitate collaboration and communication, and support the ability to learn from others' data hunches.
5. Quadri GJ, Rosen P. A Survey of Perception-Based Visualization Studies by Task. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:5026-5048. [PMID: 34283717] [DOI: 10.1109/tvcg.2021.3098240]
Abstract
Knowledge of human perception has long been incorporated into visualizations to enhance their quality and effectiveness. The last decade, in particular, has seen an increase in perception-based visualization research. Despite this recent progress, the visualization community lacks a comprehensive guide for contextualizing these results. In this report, we provide a systematic and comprehensive review of research studies on perception related to visualization. The survey covers perception-focused visualization studies since 1980 and summarizes their developments around low-level tasks, further breaking techniques down by visual encoding and visualization type. In particular, we focus on how perception is used to evaluate the effectiveness of visualizations, to help readers understand and apply the principles of perception in their visualization designs through a task-optimized approach. We conclude the report with a summary of weaknesses and open research questions in the area.
6. Park J, Han S, Lee SM. Restored Action Generative Adversarial Imitation Learning from observation for robot manipulator. ISA TRANSACTIONS 2022; 129:684-690. [PMID: 35292172] [DOI: 10.1016/j.isatra.2022.02.041]
Abstract
In this paper, a new imitation learning algorithm, Restored Action Generative Adversarial Imitation Learning (RAGAIL) from observation, is proposed. An action policy is trained to move a robot manipulator in a manner similar to a demonstrator's behavior by using actions restored from state-only demonstrations. To imitate the demonstrator, a trajectory is generated by Recurrent Generative Adversarial Networks (RGAN), and the action is restored from the output of a tracking controller constructed from the state and the generated target trajectory. The proposed imitation learning algorithm does not require access to the demonstrator's actions (internal control signals such as force/torque commands) and provides better learning performance. The effectiveness of the proposed method is validated through experimental results on a robot manipulator.
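Editor's note: the core idea of "restoring" an action from a state-only demonstration can be illustrated with a simple tracking controller. The sketch below is not the paper's RGAN-based method; it assumes a point-mass system and a PD law, and the gains, time step, and trajectories are invented for illustration.

```python
# Illustrative sketch (not the authors' code): recovering ("restoring") actions
# from a state-only target trajectory with a simple PD tracking controller.
import numpy as np

def restore_actions(states, target_trajectory, kp=10.0, kd=2.0, dt=0.01):
    """Return the control signal a PD tracker would apply to follow the targets."""
    actions = []
    prev_error = np.zeros_like(states[0])
    for state, target in zip(states, target_trajectory):
        error = target - state                      # position error w.r.t. generated target
        d_error = (error - prev_error) / dt         # finite-difference error derivative
        actions.append(kp * error + kd * d_error)   # PD law stands in for the tracking controller
        prev_error = error
    return np.array(actions)

# Usage: a state-only demonstration (e.g., produced by a trajectory generator)
states = np.zeros((5, 2))
targets = np.linspace([0.0, 0.0], [1.0, 0.5], 5)
print(restore_actions(states, targets).shape)      # (5, 2) restored action sequence
```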
Affiliation(s)
- Jongcheon Park, Seungyong Han, S M Lee: Cyber Physical Systems & Control Laboratory, School of Electronic and Electrical Engineering, Kyungpook National University, Daehak-ro 80, Republic of Korea
7. Fujiwara T, Wei X, Zhao J, Ma KL. Interactive Dimensionality Reduction for Comparative Analysis. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:758-768. [PMID: 34591765] [DOI: 10.1109/tvcg.2021.3114807]
Abstract
Finding the similarities and differences between groups of datasets is a fundamental analysis task. For high-dimensional data, dimensionality reduction (DR) methods are often used to find the characteristics of each group. However, existing DR methods provide limited capability and flexibility for such comparative analysis as each method is designed only for a narrow analysis target, such as identifying factors that most differentiate groups. This paper presents an interactive DR framework where we integrate our new DR method, called ULCA (unified linear comparative analysis), with an interactive visual interface. ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks. To provide flexibility for comparative analysis, we develop an optimization algorithm that enables analysts to interactively refine ULCA results. Additionally, the interactive visualization interface facilitates interpretation and refinement of the ULCA results. We evaluate ULCA and the optimization algorithm to show their efficiency as well as present multiple case studies using real-world datasets to demonstrate the usefulness of this framework.
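Editor's note: a linear comparative analysis of this kind can be sketched with a contrastive-PCA-style eigenproblem, which finds directions with high target-group variance and low background variance. This is not the paper's exact ULCA formulation; the weighting parameter alpha and the toy data are assumptions standing in for what an analyst would refine interactively.

```python
# Minimal sketch of a linear comparative analysis (contrastive-PCA-style, not ULCA itself).
import numpy as np

def comparative_axes(target, background, alpha=1.0, n_components=2):
    """Directions with high target-group variance and low background variance."""
    c_t = np.cov(target, rowvar=False)
    c_b = np.cov(background, rowvar=False)
    vals, vecs = np.linalg.eigh(c_t - alpha * c_b)   # symmetric matrix, so eigh is safe
    order = np.argsort(vals)[::-1]                   # largest contrast first
    return vecs[:, order[:n_components]]

rng = np.random.default_rng(0)
target = rng.normal(size=(100, 5)) * [3, 1, 1, 1, 1]    # extra variance on dimension 0
background = rng.normal(size=(200, 5))
axes = comparative_axes(target, background)
projected = target @ axes                                # 2D embedding for a scatterplot
print(projected.shape)
```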
8. Das S, Saket B, Kwon BC, Endert A. Geono-Cluster: Interactive Visual Cluster Analysis for Biologists. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:4401-4412. [PMID: 32746262] [DOI: 10.1109/tvcg.2020.3002166]
Abstract
Biologists often perform clustering analysis to derive meaningful patterns, relationships, and structures from data instances and attributes. Although clustering plays a pivotal role in biologists' data exploration, it takes non-trivial effort for biologists to find the best grouping in their data using existing tools. Visual cluster analysis is currently performed either programmatically or through menus and dialogues in many tools, which require parameter adjustments over several steps of trial and error. In this article, we introduce Geono-Cluster, a novel visual analysis tool designed to support cluster analysis for biologists who do not have formal data science training. Geono-Cluster enables biologists to apply their domain expertise to clustering results by visually demonstrating what their expected clustering outputs should look like on a small sample of data instances. The system then predicts users' intentions and generates potential clustering results. Our study follows the design study protocol to derive biologists' tasks and requirements, design the system, and evaluate the system with experts on their own datasets. Results of our study with six biologists provide initial evidence that Geono-Cluster enables biologists to create, refine, and evaluate clustering results to effectively analyze their data and gain data-driven insights. Finally, we discuss lessons learned and the implications of our study.
9. An evolutional model for operation-driven visualization design. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-021-00784-w]
10. Liu R, Wang H, Zhang C, Chen X, Wang L, Ji G, Zhao B, Mao Z, Yang D. Narrative Scientific Data Visualization in an Immersive Environment. Bioinformatics 2021; 37:2033-2041. [PMID: 33538809] [DOI: 10.1093/bioinformatics/btab052]
Abstract
MOTIVATION: Narrative visualization for scientific data exploration can help users better understand domain knowledge, because narrative visualizations often present a sequence of facts and observations linked together by a unifying theme or argument. Narrative visualization in immersive environments can give users an intuitive experience when interactively exploring scientific data, because immersive environments provide a brand new strategy for interactive scientific data visualization and exploration. However, it is challenging to develop narrative scientific visualizations in immersive environments. In this paper, we propose an immersive narrative visualization tool for creating and customizing scientific data explorations, aimed at ordinary users with little knowledge of programming for scientific visualization. They can conveniently define POIs (points of interest) with the handler of an immersive device. RESULTS: Automatic exploration animations with narrative annotations are generated by gradual transitions between consecutive POI pairs. In addition, interactive slicing can also be controlled with the device handler. Evaluations, including a user study and a case study, are designed and conducted to show the usability and effectiveness of the proposed tool. AVAILABILITY: Related information can be accessed at: https://dabigtou.github.io/richenliu/.
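Editor's note: the "gradual transition between consecutive POI pairs" can be sketched as simple keyframe interpolation of a camera position between user-defined POIs. This is not the paper's implementation; the 3D POI coordinates and frame count below are assumptions.

```python
# Illustrative sketch: generating a camera path by interpolating between consecutive POIs.
import numpy as np

def camera_path(pois, frames_per_transition=30):
    """Linear keyframe interpolation between each consecutive pair of POIs."""
    path = []
    for start, end in zip(pois[:-1], pois[1:]):
        for t in np.linspace(0.0, 1.0, frames_per_transition, endpoint=False):
            path.append((1 - t) * np.asarray(start) + t * np.asarray(end))
    path.append(np.asarray(pois[-1]))
    return np.array(path)

pois = [(0, 0, 5), (2, 1, 3), (4, 0, 1)]        # points of interest picked with the device handler
print(camera_path(pois).shape)                  # (61, 3) frames for the exploration animation
```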
Affiliation(s)
- Richen Liu, Hailong Wang, Chuyu Zhang, Xiaojian Chen, Lijun Wang, Genlin Ji, Bin Zhao, Zhiwei Mao, Dan Yang: School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
11. Srinivasan A, Lee B, Stasko J. Interweaving Multimodal Interaction With Flexible Unit Visualizations for Data Exploration. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:3519-3533. [PMID: 32149639] [DOI: 10.1109/tvcg.2020.2978050]
Abstract
Multimodal interfaces that combine direct manipulation and natural language have shown great promise for data visualization. Such multimodal interfaces allow people to stay in the flow of their visual exploration by leveraging the strengths of one modality to complement the weaknesses of others. In this article, we introduce an approach that interweaves multimodal interaction combining direct manipulation and natural language with flexible unit visualizations. We employ the proposed approach in a proof-of-concept system, DataBreeze. Coupling pen, touch, and speech-based multimodal interaction with flexible unit visualizations, DataBreeze allows people to create and interact with both systematically bound (e.g., scatterplots, unit column charts) and manually customized views, enabling a novel visual data exploration experience. We describe our design process along with DataBreeze's interface and interactions, delineating specific aspects of the design that empower the synergistic use of multiple modalities. We also present a preliminary user study with DataBreeze, highlighting the data exploration patterns that participants employed. Finally, reflecting on our design process and preliminary user study, we discuss future research directions.
12. Tsandilas T. StructGraphics: Flexible Visualization Design through Data-Agnostic and Reusable Graphical Structures. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:315-325. [PMID: 33048753] [DOI: 10.1109/tvcg.2020.3030476]
Abstract
Information visualization research has developed powerful systems that enable users to author custom data visualizations without textual programming. These systems can support graphics-driven practices by bridging lazy data-binding mechanisms with vector-graphics editing tools. Yet, despite their expressive power, visualization authoring systems often assume that users want to generate visual representations that they already have in mind rather than explore designs. They also impose a data-to-graphics workflow, where binding data dimensions to graphical properties is a necessary step for generating visualization layouts. In this paper, we introduce StructGraphics, an approach for creating data-agnostic and fully reusable visualization designs. StructGraphics enables designers to construct visualization designs by drawing graphics on a canvas and then structuring their visual properties without relying on a concrete dataset or data schema. In StructGraphics, tabular data structures are derived directly from the structure of the graphics. Later, designers can link these structures with real datasets through a spreadsheet user interface. StructGraphics supports the design and reuse of complex data visualizations by combining graphical property sharing, by-example design specification, and persistent layout constraints. We demonstrate the power of the approach through a gallery of visualization examples and reflect on its strengths and limitations in interaction with graphic designers and data visualization experts.
13. Zong J, Barnwal D, Neogy R, Satyanarayan A. Lyra 2: Designing Interactive Visualizations by Demonstration. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:304-314. [PMID: 33048697] [DOI: 10.1109/tvcg.2020.3030367]
Abstract
Recent graphical interfaces offer direct manipulation mechanisms for authoring visualizations, but are largely restricted to static output. To author interactive visualizations, users must instead turn to textual specification, but such approaches impose a higher technical burden. To bridge this gap, we introduce Lyra 2, a system that extends a prior visualization design environment with novel methods for authoring interaction techniques by demonstration. Users perform an interaction (e.g., button clicks, drags, or key presses) directly on the visualization they are editing. The system interprets this performance using a set of heuristics and enumerates suggestions of possible interaction designs. These heuristics account for the properties of the interaction (e.g., target and event type) as well as the visualization (e.g., mark and scale types, and multiple views). Interaction design suggestions are displayed as thumbnails; users can preview and test these suggestions, iteratively refine them through additional demonstrations, and finally apply and customize them via property inspectors. We evaluate our approach through a gallery of diverse examples, and evaluate its usability through a first-use study and via an analysis of its cognitive dimensions. We find that, in Lyra 2, interaction design by demonstration enables users to rapidly express a wide range of interactive visualizations.
14. Rubab S, Tang J, Wu Y. Examining interaction techniques in data visualization authoring tools from the perspective of goals and human cognition: a survey. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-020-00705-3]
15. Chen Z, Su Y, Wang Y, Wang Q, Qu H, Wu Y. MARVisT: Authoring Glyph-Based Visualization in Mobile Augmented Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2645-2658. [PMID: 30640614] [DOI: 10.1109/tvcg.2019.2892415]
Abstract
Recent advances in mobile augmented reality (AR) techniques have shed new light on personal visualization, given their advantages of fitting visualization into personal routines, situating visualization in a real-world context, and arousing users' interest. However, enabling non-experts to create data visualizations in mobile AR environments is challenging given the lack of tools that allow in-situ design while supporting the binding of data to AR content. Most existing AR authoring tools require working on personal computers or manually creating each virtual object and modifying its visual attributes. We systematically study this issue by identifying the specificities of AR glyph-based visualization authoring tools and distill four design considerations. Following these design considerations, we design and implement MARVisT, a mobile authoring tool that leverages information from reality to assist non-experts in addressing relationships between data and virtual glyphs, real objects and virtual glyphs, and real objects and data. With MARVisT, users without visualization expertise can bind data to real-world objects to create expressive AR glyph-based visualizations rapidly and effortlessly, reshaping the representation of the real world with data. We use several examples to demonstrate the expressiveness of MARVisT. A user study with non-experts is also conducted to evaluate the authoring experience with MARVisT.
16. Hoque E, Agrawala M. Searching the Visual Style and Structure of D3 Visualizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1236-1245. [PMID: 31442980] [DOI: 10.1109/tvcg.2019.2934431]
Abstract
We present a search engine for D3 visualizations that allows queries based on their visual style and underlying structure. To build the engine we crawl a collection of 7860 D3 visualizations from the Web and deconstruct each one to recover its data, its data-encoding marks and the encodings describing how the data is mapped to visual attributes of the marks. We also extract axes and other non-data-encoding attributes of marks (e.g., typeface, background color). Our search engine indexes this style and structure information as well as metadata about the webpage containing the chart. We show how visualization developers can search the collection to find visualizations that exhibit specific design characteristics and thereby explore the space of possible designs. We also demonstrate how researchers can use the search engine to identify commonly used visual design patterns and we perform such a demographic design analysis across our collection of D3 charts. A user study reveals that visualization developers found our style and structure based search engine to be significantly more useful and satisfying for finding different designs of D3 charts, than a baseline search engine that only allows keyword search over the webpage containing a chart.
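Editor's note: a rough sketch of the kind of mark/style extraction a deconstruction step like this requires is shown below. It is not the authors' pipeline; the tag list and attribute names are assumptions about typical D3/SVG output.

```python
# Collect visual attributes of SVG marks so they can be indexed by a style/structure search.
import xml.etree.ElementTree as ET
from collections import Counter

MARK_TAGS = {"rect", "circle", "path", "line", "text"}

def extract_mark_styles(svg_string):
    root = ET.fromstring(svg_string)
    styles = []
    for el in root.iter():
        tag = el.tag.split("}")[-1]                      # drop the SVG namespace prefix
        if tag in MARK_TAGS:
            styles.append((tag, el.get("fill"), el.get("stroke")))
    return Counter(styles)                               # style "signature" to index per chart

svg = '<svg xmlns="http://www.w3.org/2000/svg"><rect fill="steelblue"/><rect fill="steelblue"/><circle fill="orange"/></svg>'
print(extract_mark_styles(svg))
```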
17. Saket B, Huron S, Perin C, Endert A. Investigating Direct Manipulation of Graphical Encodings as a Method for User Interaction. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:482-491. [PMID: 31442983] [DOI: 10.1109/tvcg.2019.2934534]
Abstract
We investigate direct manipulation of graphical encodings as a method for interacting with visualizations. There is an increasing interest in developing visualization tools that enable users to perform operations by directly manipulating graphical encodings rather than external widgets such as checkboxes and sliders. Designers of such tools must decide which direct manipulation operations should be supported, and identify how each operation can be invoked. However, we lack empirical guidelines for how people convey their intended operations using direct manipulation of graphical encodings. We address this issue by conducting a qualitative study that examines how participants perform 15 operations using direct manipulation of standard graphical encodings. From this study, we 1) identify a list of strategies people employ to perform each operation, 2) observe commonalities in strategies across operations, and 3) derive implications to help designers leverage direct manipulation of graphical encoding as a method for user interaction.
18. Dimara E, Perin C. What is Interaction for Data Visualization? IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:119-129. [PMID: 31425089] [DOI: 10.1109/tvcg.2019.2934283]
Abstract
Interaction is fundamental to data visualization, but what "interaction" means in the context of visualization is ambiguous and confusing. We argue that this confusion is due to a lack of consensual definition. To tackle this problem, we start by synthesizing an inclusive view of interaction in the visualization community - including insights from information visualization, visual analytics and scientific visualization, as well as the input of both senior and junior visualization researchers. Once this view takes shape, we look at how interaction is defined in the field of human-computer interaction (HCI). By extracting commonalities and differences between the views of interaction in visualization and in HCI, we synthesize a definition of interaction for visualization. Our definition is meant to be a thinking tool and inspire novel and bolder interaction design practices. We hope that by better understanding what interaction in visualization is and what it can be, we will enrich the quality of interaction in visualization systems and empower those who use them.
19. Saket B, Endert A, Demiralp C. Task-Based Effectiveness of Basic Visualizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:2505-2512. [PMID: 29994001] [DOI: 10.1109/tvcg.2018.2829750]
Abstract
Visualizations of tabular data are widely used; understanding their effectiveness in different task and data contexts is fundamental to scaling their impact. However, little is known about how basic tabular data visualizations perform across varying data analysis tasks. In this paper, we report results from a crowdsourced experiment evaluating the effectiveness of five small-scale (5-34 data points) two-dimensional visualization types (Table, Line Chart, Bar Chart, Scatterplot, and Pie Chart) across ten common data analysis tasks using two datasets. We find that the effectiveness of these visualization types varies significantly across tasks, suggesting that visualization design would benefit from considering context-dependent effectiveness. Based on our findings, we derive recommendations on which visualizations to choose for different tasks. Finally, we train a decision tree on the data we collected to drive a recommender, showcasing how experimental user data can be engineered into practical visualization systems.
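Editor's note: the final recommender step the abstract describes can be sketched with a small decision tree. The toy training rows, feature encoding, and labels below are invented for illustration and are not the study's data.

```python
# Hedged sketch: a decision tree mapping (task, dataset size) to a recommended chart type.
from sklearn.tree import DecisionTreeClassifier

# features: [task_id, n_data_points]; labels: best-performing chart in that condition (made up)
X = [[0, 10], [0, 30], [1, 10], [1, 30], [2, 10], [2, 30]]
y = ["Bar Chart", "Line Chart", "Pie Chart", "Bar Chart", "Scatterplot", "Scatterplot"]

recommender = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(recommender.predict([[0, 25]]))   # recommend a chart for task 0 with 25 data points
```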
20. Saket B, Endert A, Rhyne TM. Demonstrational Interaction for Data Visualization. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2019; 39:67-72. [PMID: 31034400] [DOI: 10.1109/mcg.2019.2903711]
Abstract
Recently, there has been an increasing trend to extend the demonstrational interaction paradigm to visualization tools. As more analytic operations can be performed by demonstration, new user tasks can be supported. In this paper, we discuss the properties of tasks where the by-demonstration paradigm can be effective and describe the main components needed to implement the demonstrational paradigm in visualization tools.
21. Orban D, Keefe DF, Biswas A, Ahrens J, Rogers D. Drag and Track: A Direct Manipulation Interface for Contextualizing Data Instances within a Continuous Parameter Space. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 25:256-266. [PMID: 30136980] [DOI: 10.1109/tvcg.2018.2865051]
Abstract
We present a direct manipulation technique that allows material scientists to interactively highlight relevant parameterized simulation instances located in dimensionally reduced spaces, enabling a user-defined understanding of a continuous parameter space. Our goals are two-fold: first, to build a user-directed intuition of dimensionally reduced data, and second, to provide a mechanism for creatively exploring parameter relationships in parameterized simulation sets, called ensembles. We start by visualizing ensemble data instances in dimensionally reduced scatter plots. To understand these abstract views, we employ user-defined virtual data instances that, through direct manipulation, search an ensemble for similar instances. Users can create multiple of these direct manipulation queries to visually annotate the spaces with sets of highlighted ensemble data instances. User-defined goals are therefore translated into custom illustrations that are projected onto the dimensionally reduced spaces. Combined forward and inverse searches of the parameter space follow naturally allowing for continuous parameter space prediction and visual query comparison in the context of an ensemble. The potential for this visualization technique is confirmed via expert user feedback for a shock physics application and synthetic model analysis.
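Editor's note: the core query idea (a dragged "virtual instance" highlighting similar ensemble members in the reduced space) can be sketched as a nearest-neighbor lookup. This is not the paper's system; the projection, k value, and data are assumptions.

```python
# Minimal sketch: highlight the ensemble instances closest to a user-positioned virtual instance.
import numpy as np

def similar_instances(reduced_points, virtual_point, k=5):
    """Indices of the k ensemble instances closest to the dragged virtual instance."""
    dists = np.linalg.norm(reduced_points - np.asarray(virtual_point), axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(1)
reduced = rng.normal(size=(200, 2))          # e.g., a PCA/MDS projection of the ensemble
print(similar_instances(reduced, [0.2, -0.1]))
```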
22. Sarvghad A, Saket B, Endert A, Weibel N. Embedded Merge & Split: Visual Adjustment of Data Grouping. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 25:800-809. [PMID: 30138910] [DOI: 10.1109/tvcg.2018.2865075]
Abstract
Data grouping is among the most frequently used operations in data visualization. It is the process through which relevant information is gathered, simplified, and expressed in summary form. Many popular visualization tools support automatic grouping of data (e.g., dividing a numerical variable into bins). Although grouping plays a pivotal role in supporting data exploration, further adjustment and customization of auto-generated grouping criteria is non-trivial. Such adjustments are currently performed either programmatically or through menus and dialogues, which require specific parameter adjustments over several steps. In response, we introduce Embedded Merge & Split (EMS), a new interaction technique for direct adjustment of data grouping criteria. We demonstrate how the EMS technique can be designed to directly manipulate bar width and position in bar charts and histograms as a means of adjusting data grouping criteria. We also offer a set of design guidelines for supporting EMS. Finally, we present the results of two user studies, providing initial evidence that EMS can significantly reduce interaction time compared to a WIMP-based technique and was subjectively preferred by participants.
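Editor's note: underneath an interaction like EMS, merging or splitting histogram bars is just an edit to the bin-edge array the chart is recomputed from. The sketch below illustrates that underlying adjustment, not the EMS interface itself; the edge values and data are examples.

```python
# Merging/splitting histogram bins by directly editing the bin edges.
import numpy as np

def merge_bins(edges, i):
    """Merge bin i with bin i+1 by deleting the edge between them."""
    return np.delete(edges, i + 1)

def split_bin(edges, i):
    """Split bin i at its midpoint by inserting a new edge."""
    mid = (edges[i] + edges[i + 1]) / 2.0
    return np.insert(edges, i + 1, mid)

data = np.random.default_rng(2).normal(size=1000)
edges = np.linspace(-3, 3, 7)                               # the auto-generated grouping (6 bins)
counts, _ = np.histogram(data, bins=merge_bins(edges, 2))   # user drags two bars together
print(len(counts))                                          # one fewer bin after the merge
```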
23. Saket B, Srinivasan A, Ragan ED, Endert A. Evaluating Interactive Graphical Encodings for Data Visualization. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:1316-1330. [PMID: 28362588] [DOI: 10.1109/tvcg.2017.2680452]
Abstract
User interfaces for data visualization often consist of two main components: control panels for user interaction and visual representation. A recent trend in visualization is directly embedding user interaction into the visual representations. For example, instead of using control panels to adjust visualization parameters, users can directly adjust basic graphical encodings (e.g., changing distances between points in a scatterplot) to perform similar parameterizations. However, enabling embedded interactions for data visualization requires a strong understanding of how user interactions influence the ability to accurately control and perceive graphical encodings. In this paper, we study the effectiveness of these graphical encodings when serving as the method for interaction. Our user study includes 12 interactive graphical encodings. We discuss the results in terms of task performance and interaction effectiveness metrics.
24. Mei H, Ma Y, Wei Y, Chen W. The design space of construction tools for information visualization: A survey. Journal of Visual Languages & Computing 2018. [DOI: 10.1016/j.jvlc.2017.10.001]
25. Sarikaya A, Gleicher M. Scatterplots: Tasks, Data, and Designs. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:402-412. [PMID: 28866528] [DOI: 10.1109/tvcg.2017.2744184]
Abstract
Traditional scatterplots fail to scale as the complexity and amount of data increases. In response, there exist many design options that modify or expand the traditional scatterplot design to meet these larger scales. This breadth of design options creates challenges for designers and practitioners who must select appropriate designs for particular analysis goals. In this paper, we help designers in making design choices for scatterplot visualizations. We survey the literature to catalog scatterplot-specific analysis tasks. We look at how data characteristics influence design decisions. We then survey scatterplot-like designs to understand the range of design options. Building upon these three organizations, we connect data characteristics, analysis tasks, and design choices in order to generate challenges, open questions, and example best practices for the effective design of scatterplots.
26. Srinivasan A, Park H, Endert A, Basole RC. Graphiti: Interactive Specification of Attribute-Based Edges for Network Modeling and Visualization. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:226-235. [PMID: 28866561] [DOI: 10.1109/tvcg.2017.2744843]
Abstract
Network visualizations, often in the form of node-link diagrams, are an effective means to understand relationships between entities, discover entities with interesting characteristics, and to identify clusters. While several existing tools allow users to visualize pre-defined networks, creating these networks from raw data remains a challenging task, often requiring users to program custom scripts or write complex SQL commands. Some existing tools also allow users to both visualize and model networks. Interaction techniques adopted by these tools often assume users know the exact conditions for defining edges in the resulting networks. This assumption may not always hold true, however. In cases where users do not know much about attributes in the dataset or when there are several attributes to choose from, users may not know which attributes they could use to formulate linking conditions. We propose an alternate interaction technique to model networks that allows users to demonstrate to the system a subset of nodes and links they wish to see in the resulting network. The system, in response, recommends conditions that can be used to model networks based on the specified nodes and links. In this paper, we show how such a demonstration-based interaction technique can be used to model networks by employing it in a prototype tool, Graphiti. Through multiple usage scenarios, we show how Graphiti not only allows users to model networks from a tabular dataset but also facilitates updating a pre-defined network with additional edge types.
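Editor's note: the recommendation idea the abstract describes (proposing attribute-based linking conditions from a demonstrated edge) can be sketched as a search for attributes on which the two demonstrated nodes agree. This is not Graphiti's code; the table, column names, and matching rule are invented for illustration.

```python
# Suggest candidate linking conditions from one demonstrated edge between two rows.
import pandas as pd

def suggest_edge_conditions(df, node_a, node_b):
    """Columns on which the two demonstrated nodes agree -> candidate linking rules."""
    row_a, row_b = df.loc[node_a], df.loc[node_b]
    return [col for col in df.columns if row_a[col] == row_b[col]]

companies = pd.DataFrame(
    {"sector": ["tech", "tech", "retail"], "country": ["US", "US", "DE"]},
    index=["A", "B", "C"],
)
print(suggest_edge_conditions(companies, "A", "B"))   # ['sector', 'country']
```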
27. Wall E, Das S, Chawla R, Kalidindi B, Brown ET, Endert A. Podium: Ranking Data Using Mixed-Initiative Visual Analytics. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:288-297. [PMID: 28866565] [DOI: 10.1109/tvcg.2017.2745078]
Abstract
People often rank and order data points as a vital part of making decisions. Multi-attribute ranking systems are a common tool used to make these data-driven decisions. Such systems often take the form of a table-based visualization in which users assign weights to the attributes representing the quantifiable importance of each attribute to a decision, which the system then uses to compute a ranking of the data. However, these systems assume that users are able to quantify their conceptual understanding of how important particular attributes are to a decision. This is not always easy or even possible for users to do. Rather, people often have a more holistic understanding of the data. They form opinions that data point A is better than data point B but do not necessarily know which attributes are important. To address these challenges, we present a visual analytic application to help people rank multi-variate data points. We developed a prototype system, Podium, that allows users to drag rows in the table to rank order data points based on their perception of the relative value of the data. Podium then infers a weighting model using Ranking SVM that satisfies the user's data preferences as closely as possible. Whereas past systems help users understand the relationships between data points based on changes to attribute weights, our approach helps users to understand the attributes that might inform their understanding of the data. We present two usage scenarios to describe some of the potential uses of our proposed technique: (1) understanding which attributes contribute to a user's subjective preferences for data, and (2) deconstructing attributes of importance for existing rankings. Our proposed approach makes powerful machine learning techniques more usable to those who may not have expertise in these areas.
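Editor's note: the weight-inference step the abstract attributes to Ranking SVM can be sketched by classifying pairwise difference vectors derived from the user's dragged ordering. This is a hedged illustration, not Podium's implementation; the attribute values and ordering below are invented.

```python
# Pairwise Ranking-SVM sketch: infer attribute weights from a user-specified ordering.
import numpy as np
from sklearn.svm import LinearSVC

def rank_svm_weights(X, user_order):
    """X: attribute matrix; user_order: row indices from best to worst as dragged by the user."""
    diffs, labels = [], []
    for better, worse in zip(user_order[:-1], user_order[1:]):
        diffs.append(X[better] - X[worse]); labels.append(1)    # preferred direction
        diffs.append(X[worse] - X[better]); labels.append(-1)   # and its mirror
    model = LinearSVC(fit_intercept=False).fit(np.array(diffs), labels)
    return model.coef_.ravel()                                  # inferred attribute weights

X = np.array([[0.9, 0.2], [0.6, 0.8], [0.1, 0.4]])              # rows = data points
weights = rank_svm_weights(X, user_order=[0, 1, 2])              # user ranks row 0 best
print(np.argsort(-(X @ weights)))                                # full ranking under the model
```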