1. Kim J, Lee H, Nguyen DM, Shin M, Kwon BC, Ko S, Elmqvist N. DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs. IEEE Transactions on Visualization and Computer Graphics 2025;31:973-983. [PMID: 39255094 DOI: 10.1109/tvcg.2024.3456340]
Abstract
Comics are an effective method for sequential data-driven storytelling, especially for dynamic graphs, i.e., graphs whose vertices and edges change over time. However, manually creating such comics is currently time-consuming, complex, and error-prone. In this paper, we propose DG COMICS, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build and annotate comics. The tool uses a newly developed hierarchical clustering algorithm to segment consecutive snapshots of dynamic graphs while preserving their chronological order. It also presents rich information on both individuals and communities extracted from dynamic graphs in multiple views, in which users can explore dynamic graphs and choose what to tell in their comics. For evaluation, we provide an example and report the results of a user study and an expert review.

2. Rahman MD, Quadri GJ, Doppalapudi B, Szafir DA, Rosen P. A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space. IEEE Transactions on Visualization and Computer Graphics 2025;31:360-370. [PMID: 39250402 DOI: 10.1109/tvcg.2024.3456359]
Abstract
Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative sensemaking, and visual storytelling. However, despite their widespread use, we identified the lack of a design space describing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explored three primary aspects of annotation usage patterns: the analytic purposes of chart annotations (e.g., to present, identify, summarize, or compare data features), the mechanisms of chart annotations (e.g., the types and combinations of annotations used and the frequency of different annotation types across chart types), and the data sources used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations. We present three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights. All supplemental materials are available at https://shorturl.at/bAGM1.

3. Dhanoa V, Hinterreiter A, Fediuk V, Elmqvist N, Groller E, Streit M. D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding. IEEE Transactions on Visualization and Computer Graphics 2025;31:721-731. [PMID: 39259628 DOI: 10.1109/tvcg.2024.3456347]
Abstract
Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions available. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and there is little guidance on how best to complete this task. Depending on their level of expertise, end users being onboarded to a new dashboard can be either confused and overwhelmed or disinterested and disengaged. We propose interactive dashboard tours (D-Tours) as semi-automated onboarding experiences that preserve the agency of users with various levels of expertise to keep them interested and engaged. Our interactive tours concept draws from open-world game design to give users freedom in choosing their path through onboarding. We have implemented the concept in a tool called D-TOUR Prototype, which allows authors to craft custom interactive dashboard tours from scratch or from automatic templates. Automatically generated tours can still be customized with different media (e.g., video, audio, and highlighting) or new narratives to produce an onboarding experience tailored to an individual user. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. Our evaluation shows that authors found the automation in the D-Tour Prototype helpful and time-saving, and users found the created tours engaging and intuitive. This paper and all supplemental materials are available at https://osf.io/6fbjp/.

4. Yan Y, Hou Y, Xiao Y, Zhang R, Wang Q. KnowNet: Guided Health Information Seeking from LLMs via Knowledge Graph Integration. IEEE Transactions on Visualization and Computer Graphics 2025;31:547-557. [PMID: 39255106 PMCID: PMC11875928 DOI: 10.1109/tvcg.2024.3456364]
Abstract
The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KGs) to provide enhanced accuracy and structured exploration. Specifically, for enhanced accuracy, KnowNet extracts triples (e.g., entities and their relations) from LLM outputs and maps them to validated information and supporting evidence in external KGs. For structured exploration, KnowNet provides next-step recommendations based on the neighborhood of the currently explored entities in the KGs, aiming to guide a comprehensive understanding without overlooking critical aspects. To enable reasoning with both the structured data in KGs and the unstructured outputs from LLMs, KnowNet conceptualizes the understanding of a subject as the gradual construction of a graph visualization. A progressive graph visualization is introduced to monitor past inquiries and bridge the current query with the exploration history and next-step recommendations. We demonstrate the effectiveness of our system via use cases and expert interviews.
Collapse
Affiliation(s)
- Youfu Yan, Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, USA
- Yu Hou, Medical School, University of Minnesota, Twin Cities, MN, USA
- Yongkang Xiao, Medical School, University of Minnesota, Twin Cities, MN, USA
- Rui Zhang, Medical School, University of Minnesota, Twin Cities, MN, USA
- Qianwen Wang, Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, USA

5. Burns A, Lee C, On T, Xiong C, Peck E, Mahyar N. From Invisible to Visible: Impacts of Metadata in Communicative Data Visualization. IEEE Transactions on Visualization and Computer Graphics 2024;30:3427-3443. [PMID: 37015379 DOI: 10.1109/tvcg.2022.3231716]
Abstract
Leaving the context of visualizations invisible can have negative impacts on understanding and transparency. While common wisdom suggests that recontextualizing visualizations with metadata (e.g., disclosing the data source or instructions for decoding the visualizations' encoding) may counter these effects, the impact remains largely unknown. To fill this gap, we conducted two experiments. In Experiment 1, we explored how chart type, topic, and user goal impacted which categories of metadata participants deemed most relevant. We presented 64 participants with four real-world visualizations. For each visualization, participants were given four goals and selected the type of metadata they most wanted from a set of 18 types. Our results indicated that participants were most interested in metadata which explained the visualization's encoding for goals related to understanding and metadata about the source of the data for assessing trustworthiness. In Experiment 2, we explored how these two types of metadata impact transparency, trustworthiness and persuasiveness, information relevance, and understanding. We asked 144 participants to explain the main message of two pairs of visualizations (one with metadata and one without); rate them on scales of transparency and relevance; and then predict the likelihood that they were selected for a presentation to policymakers. Our results suggested that visualizations with metadata were perceived as more thorough than those without metadata, but similarly relevant, accurate, clear, and complete. Additionally, we found that metadata did not impact the accuracy of the information extracted from visualizations, but may have influenced which information participants remembered as important or interesting.

6. Wang Q, L'Yi S, Gehlenborg N. DRAVA: Aligning Human Concepts with Machine Learning Latent Dimensions for the Visual Exploration of Small Multiples. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI) 2023;2023:833. [PMID: 38074525 PMCID: PMC10707479 DOI: 10.1145/3544548.3581127]
Abstract
Latent vectors extracted by machine learning (ML) are widely used in data exploration (e.g., t-SNE) but suffer from a lack of interpretability. While previous studies employed disentangled representation learning (DRL) to enable more interpretable exploration, they often overlooked the potential mismatches between the concepts of humans and the semantic dimensions learned by DRL. To address this issue, we propose Drava, a visual analytics system that supports users in 1) relating the concepts of humans with the semantic dimensions of DRL and identifying mismatches, 2) providing feedback to minimize the mismatches, and 3) obtaining data insights from concept-driven exploration. Drava provides a set of visualizations and interactions based on visual piles to help users understand and refine concepts and conduct concept-driven exploration. Meanwhile, Drava employs a concept adaptor model to fine-tune the semantic dimensions of DRL based on user refinement. The usefulness of Drava is demonstrated through application scenarios and experimental validation.
Affiliation(s)
- Sehi L'Yi, Harvard Medical School, Boston, MA, USA

7. Sun M, Cai L, Cui W, Wu Y, Shi Y, Cao N. Erato: Cooperative Data Story Editing via Fact Interpolation. IEEE Transactions on Visualization and Computer Graphics 2023;29:983-993. [PMID: 36155449 DOI: 10.1109/tvcg.2022.3209428]
Abstract
As an effective form of narrative visualization, visual data stories are widely used in data-driven storytelling to communicate complex insights and support data understanding. Although important, they are difficult to create, as a variety of interdisciplinary skills, such as data analysis and design, are required. In this work, we introduce Erato, a human-machine cooperative data story editing system that allows users to generate insightful and fluent data stories together with the computer. Specifically, Erato requires only a number of keyframes provided by the user to briefly describe the topic and structure of a data story. Our system then leverages a novel interpolation algorithm to help users insert intermediate frames between the keyframes to smooth the transitions. We evaluated the effectiveness and usefulness of the Erato system via a series of evaluations, including a Turing test, a controlled user study, a performance validation, and interviews with three expert users. The evaluation results showed that the proposed interpolation technique was able to generate coherent story content and helped users create data stories more efficiently.

8. Wu A, Wang Y, Shu X, Moritz D, Cui W, Zhang H, Zhang D, Qu H. AI4VIS: Survey on Artificial Intelligence Approaches for Data Visualization. IEEE Transactions on Visualization and Computer Graphics 2022;28:5049-5070. [PMID: 34310306 DOI: 10.1109/tvcg.2021.3099002]
Abstract
Visualizations themselves have become a data format. Akin to other data formats such as text and images, visualizations are increasingly created, stored, shared, and (re-)used with artificial intelligence (AI) techniques. In this survey, we probe the underlying vision of formalizing visualizations as an emerging data format and review recent advances in applying AI techniques to visualization data (AI4VIS). We define visualization data as the digital representations of visualizations in computers and focus on data visualization (e.g., charts and infographics). We build our survey upon a corpus spanning ten different fields in computer science with an eye toward identifying important common interests. Our resulting taxonomy is organized around WHAT visualization data is and how it is represented, and WHY and HOW to apply AI to visualization data. We highlight a set of common tasks that researchers apply to visualization data and present a detailed discussion of the AI approaches developed to accomplish those tasks. Drawing upon our literature review, we discuss several important research questions surrounding the management and exploitation of visualization data, as well as the role of AI in support of those processes. We make the list of surveyed papers and related material available online.

9. Liu R, Wang H, Zhang C, Chen X, Wang L, Ji G, Zhao B, Mao Z, Yang D. Narrative Scientific Data Visualization in an Immersive Environment. Bioinformatics 2021;37:2033-2041. [PMID: 33538809 DOI: 10.1093/bioinformatics/btab052]
Abstract
MOTIVATION: Narrative visualization for scientific data exploration can help users better understand domain knowledge, because narrative visualizations often present a sequence of facts and observations linked together by a unifying theme or argument. Narrative visualization in immersive environments can provide users with an intuitive experience for interactively exploring scientific data, because immersive environments offer a brand-new strategy for interactive scientific data visualization and exploration. However, it is challenging to develop narrative scientific visualization in immersive environments. In this paper, we propose an immersive narrative visualization tool for creating and customizing scientific data explorations, aimed at ordinary users with little programming knowledge of scientific visualization. Users can conveniently define points of interest (POIs) with the handheld controller of an immersive device. RESULTS: Automatic exploration animations with narrative annotations can be generated from gradual transitions between consecutive POI pairs. In addition, interactive slicing can also be controlled with the device controller. Evaluations, including a user study and a case study, were designed and conducted to show the usability and effectiveness of the proposed tool. AVAILABILITY: Related information can be accessed at https://dabigtou.github.io/richenliu/.
Affiliation(s)
- Richen Liu, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Hailong Wang, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Chuyu Zhang, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Xiaojian Chen, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Lijun Wang, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Genlin Ji, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Bin Zhao, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Zhiwei Mao, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China
- Dan Yang, School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University, Nanjing, 210023, China

10. Wang Q, Xu Z, Chen Z, Wang Y, Liu S, Qu H. Visual Analysis of Discrimination in Machine Learning. IEEE Transactions on Visualization and Computer Graphics 2021;27:1470-1480. [PMID: 33048751 DOI: 10.1109/tvcg.2020.3030471]
Abstract
The growing use of automated decision-making in critical applications, such as crime prediction and college admission, has raised questions about fairness in machine learning. How can we decide whether different treatments are reasonable or discriminatory? In this paper, we investigate discrimination in machine learning from a visual analytics perspective and propose an interactive visualization tool, DiscriLens, to support a more comprehensive analysis. To reveal detailed information on algorithmic discrimination, DiscriLens identifies a collection of potentially discriminatory itemsets based on causal modeling and classification rules mining. By combining an extended Euler diagram with a matrix-based visualization, we develop a novel set visualization to facilitate the exploration and interpretation of discriminatory itemsets. A user study shows that users can interpret the visually encoded information in DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens provides informative guidance in understanding and reducing algorithmic discrimination.

11. Shi D, Xu X, Sun F, Shi Y, Cao N. Calliope: Automatic Visual Data Story Generation from a Spreadsheet. IEEE Transactions on Visualization and Computer Graphics 2021;27:453-463. [PMID: 33048717 DOI: 10.1109/tvcg.2020.3030403]
Abstract
Visual data stories, shown in the form of narrative visualizations such as posters or data videos, are frequently used in data-oriented storytelling to facilitate the understanding and memorization of story content. Although useful, they are difficult to create: technical barriers, such as data analysis, visualization, and scripting, make generating a visual data story hard. Existing authoring tools rely on users' skills and experience, which makes the process inefficient and still difficult. In this paper, we introduce a novel visual data story generation system, Calliope, which creates visual data stories from an input spreadsheet through an automatic process and facilitates easy revision of the generated story in an online story editor. In particular, Calliope incorporates a new logic-oriented Monte Carlo tree search algorithm that explores the data space given by the input spreadsheet to progressively generate story pieces (i.e., data facts) and organize them in a logical order. The importance of data facts is measured based on information theory, and each data fact is visualized in a chart and captioned by an automatically generated description. We evaluate the proposed technique through three example stories, two controlled experiments, and a series of interviews with 10 domain experts. Our evaluation shows that Calliope is beneficial for efficient visual data story generation.

12. Rubab S, Tang J, Wu Y. Examining Interaction Techniques in Data Visualization Authoring Tools from the Perspective of Goals and Human Cognition: A Survey. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-020-00705-3]