1. Fan M, Yu J, Weiskopf D, Cao N, Wang HY, Zhou L. Visual Analysis of Multi-Outcome Causal Graphs. IEEE Transactions on Visualization and Computer Graphics 2025; 31:656-666. [PMID: 39255125, DOI: 10.1109/tvcg.2024.3456346]
Abstract
We introduce a visual analysis method for multiple causal graphs with different outcome variables, namely, multi-outcome causal graphs. Multi-outcome causal graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causal graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causal graphs. In our visual analysis approach, analysts start by building individual causal graphs for each outcome variable, and then, multi-outcome causal graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causal graphs. Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.
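The comparative layout and encodings in this paper are bespoke visual designs, but the edge-level comparison they build on can be illustrated with plain set operations. A minimal sketch, assuming two hypothetical causal graphs with invented variable names (not data from the paper), using networkx:

```python
import networkx as nx

# Two hypothetical causal graphs, each with a different outcome variable.
g_diabetes = nx.DiGraph([("age", "bmi"), ("bmi", "diabetes"), ("smoking", "diabetes")])
g_stroke = nx.DiGraph([("age", "bmi"), ("smoking", "stroke"), ("bmi", "stroke")])

# Commonalities and differences as edge-set operations.
shared = set(g_diabetes.edges()) & set(g_stroke.edges())
only_diabetes = set(g_diabetes.edges()) - set(g_stroke.edges())
only_stroke = set(g_stroke.edges()) - set(g_diabetes.edges())

print("common edges:", shared)
print("unique to diabetes graph:", only_diabetes)
print("unique to stroke graph:", only_stroke)
```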
2. Piccolotto N, Wallinger M, Miksch S, Bögl M. UnDRground Tubes: Exploring Spatial Data with Multidimensional Projections and Set Visualization. IEEE Transactions on Visualization and Computer Graphics 2025; 31:196-206. [PMID: 39250399, DOI: 10.1109/tvcg.2024.3456314]
Abstract
In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to methods such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial methods, like PCA. However, when analysts try different tuning parameter settings, the number of resulting latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also well received by visualization experts. Our benchmarks show that UT's projections and heuristics are appropriate.
3. Yu P, Nordman A, Koc-Januchta M, Schönborn K, Besançon L, Vrotsou K. Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment. IEEE Transactions on Visualization and Computer Graphics 2025; 31:831-841. [PMID: 39255130, DOI: 10.1109/tvcg.2024.3456187]
Abstract
We present a visual analytics approach for multi-level visual exploration of users' interaction strategies in an interactive digital environment. Interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom's taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions with cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as "cascading" and "nested-loop", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.
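The abstract does not specify which similarity algorithm VISID uses; a common baseline for comparing event sequences is Levenshtein edit distance. A minimal sketch over invented interaction events (an assumption, not necessarily the paper's measure):

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance over event symbols,
    # using a rolling one-row table.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # dp[j]: previous row, same column; dp[j-1]: current row, previous column.
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

user1 = ["zoom", "select", "rotate", "select"]
user2 = ["zoom", "rotate", "select"]
print(edit_distance(user1, user2))  # 1: one deletion aligns the sequences
```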
4. Eckelt K, Gadhave K, Lex A, Streit M. Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1213-1223. [PMID: 39312426, DOI: 10.1109/tvcg.2024.3456186]
Abstract
Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks, leading to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook provenance, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only makes the analysis process transparent but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. We demonstrate our approach's utility and potential impact in two use cases and through feedback from notebook users from various backgrounds. This paper and all supplemental materials are available at https://osf.io/79eyn.
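Loops renders version differences with its own visual designs; the underlying cell-level diff can be approximated with Python's standard difflib. A sketch with hypothetical versions of one notebook cell:

```python
import difflib

# Invented versions of a single notebook cell across two executions.
cell_v1 = ["df = pd.read_csv('data.csv')", "df = df.dropna()"]
cell_v2 = ["df = pd.read_csv('data.csv')", "df = df.fillna(0)", "df['ratio'] = df['a'] / df['b']"]

# Line-level unified diff between the two cell versions.
for line in difflib.unified_diff(cell_v1, cell_v2, fromfile="version 1", tofile="version 2", lineterm=""):
    print(line)
```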
5. Boggust A, Sivaraman V, Assogba Y, Ren D, Moritz D, Hohman F. Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments. IEEE Transactions on Visualization and Computer Graphics 2025; 31:809-819. [PMID: 39255121, DOI: 10.1109/tvcg.2024.3456371]
Abstract
To deploy machine learning models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress and Compare. Within a single interface, Compress and Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how Compress and Compare supports common compression analysis tasks through two case studies, debugging failed compression on generative language models and identifying compression artifacts in image classification models. We further evaluate Compress and Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression's effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress and Compare visualizations that may generalize to broader model comparison tasks.
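As a hedged illustration of the kind of behavioral comparison described (not Compress and Compare's implementation), prediction agreement between an original and a compressed model can be quantified with a few NumPy operations over synthetic predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)  # hypothetical test-set labels

# Synthetic stand-ins: the full model is right ~92% of the time; the
# compressed model agrees with the full model ~90% of the time.
pred_full = np.where(rng.random(1000) < 0.92, labels, (labels + 1) % 10)
pred_small = np.where(rng.random(1000) < 0.90, pred_full, (pred_full + 1) % 10)

agreement = (pred_full == pred_small).mean()  # behavioral agreement rate
# Samples the full model got right but compression broke.
flips = np.flatnonzero((pred_full == labels) & (pred_small != labels))
print(f"agreement: {agreement:.1%}, newly wrong samples: {len(flips)}")
```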
6. van den Brandt A, Jonkheer EM, van Workum DJM, van de Wetering H, Smit S, Vilanova A. PanVA: Pangenomic Variant Analysis. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4895-4909. [PMID: 37267130, DOI: 10.1109/tvcg.2023.3282364]
Abstract
Genomics researchers increasingly use multiple reference genomes to comprehensively explore genetic variants underlying differences in detectable characteristics between organisms. Pangenomes allow for an efficient data representation of multiple related genomes and their associated metadata. However, current visual analysis approaches for exploring these complex genotype-phenotype relationships are often based on single-reference approaches or lack adequate support for interpreting the variants in the genomic context with heterogeneous (meta)data. This design study introduces PanVA, a visual analytics design for pangenomic variant analysis developed with the active participation of genomics researchers. The design uniquely combines tailored visual representations with interactions such as sorting, grouping, and aggregation, allowing users to navigate and explore different perspectives on complex genotype-phenotype relations. Through evaluation in the context of plants and pathogen research, we show that PanVA helps researchers explore variants in genes and generate hypotheses about their role in phenotypic variation.
7. Bearfield CX, Stokes C, Lovett A, Franconeri S. What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5097-5110. [PMID: 37792647, DOI: 10.1109/tvcg.2023.3289292]
Abstract
Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. Interestingly, once viewers have grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.
8. Zeng W, Chen X, Hou Y, Shao L, Chu Z, Chang R. Semi-Automatic Layout Adaptation for Responsive Multiple-View Visualization Design. IEEE Transactions on Visualization and Computer Graphics 2024; 30:3798-3812. [PMID: 37022242, DOI: 10.1109/tvcg.2023.3240356]
Abstract
Multiple-view (MV) visualizations have become ubiquitous for visual communication and exploratory data visualization. However, most existing MV visualizations are designed for the desktop, which can be unsuitable for the continuously evolving displays of varying screen sizes. In this article, we present a two-stage adaptation framework that supports the automated retargeting and semi-automated tailoring of a desktop MV visualization for rendering on devices with displays of varying sizes. First, we cast layout retargeting as an optimization problem and propose a simulated annealing technique that can automatically preserve the layout of multiple views. Second, we enable fine-tuning for the visual appearance of each view, using a rule-based auto configuration method complemented with an interactive interface for chart-oriented encoding adjustment. To demonstrate the feasibility and expressivity of our proposed approach, we present a gallery of MV visualizations that have been adapted from the desktop to small displays. We also report the result of a user study comparing visualizations generated using our approach with those by existing methods. The outcome indicates that the participants generally prefer visualizations generated using our approach and find them to be easier to use.
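The paper's optimization objective is its own contribution; the generic shape of a simulated-annealing layout search, with a toy cost that only tries to preserve the desktop's reading order (all view names and values invented here), looks roughly like this:

```python
import math
import random

# Hypothetical desktop layout: view id -> (x, y, w, h) in normalized coordinates.
desktop = {"map": (0, 0, 0.6, 1.0), "bars": (0.6, 0, 0.4, 0.5), "table": (0.6, 0.5, 0.4, 0.5)}

def cost(order):
    # Toy objective: penalize stacking orders that break the desktop's
    # top-to-bottom, left-to-right reading order.
    ranked = sorted(desktop, key=lambda v: (desktop[v][1], desktop[v][0]))
    return sum(abs(order.index(v) - ranked.index(v)) for v in desktop)

order = list(desktop)
temp = 1.0
while temp > 1e-3:
    cand = order[:]
    i, j = random.sample(range(len(cand)), 2)  # propose: swap two views
    cand[i], cand[j] = cand[j], cand[i]
    delta = cost(cand) - cost(order)
    # Accept improvements always; accept worse moves with decaying probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        order = cand
    temp *= 0.99

print("stacked order for a narrow screen:", order)
```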
9. Oral E, Chawla R, Wijkstra M, Mahyar N, Dimara E. From Information to Choice: A Critical Inquiry Into Visualization Tools for Decision Making. IEEE Transactions on Visualization and Computer Graphics 2024; 30:359-369. [PMID: 37871054, DOI: 10.1109/tvcg.2023.3326593]
Abstract
In the face of complex decisions, people often engage in a three-stage process that spans (1) exploring and analyzing pertinent information (intelligence), (2) generating and exploring alternative options (design), and (3) selecting the optimal decision by evaluating discerning criteria (choice). We can fairly assume that all good visualizations aid the "intelligence" stage by enabling data exploration and analysis. Yet, to what degree and how do visualization systems currently support the other decision making stages, namely "design" and "choice"? To further explore this question, we conducted a comprehensive review of decision-focused visualization tools by examining publications in major visualization journals and conferences, including VIS, EuroVis, and CHI, spanning all available years. We employed a deductive coding method and in-depth analysis to assess whether and how visualization tools support design and choice. Specifically, we examined each visualization tool by (i) its degree of visibility for displaying decision alternatives, criteria, and preferences, and (ii) its degree of flexibility for offering means to manipulate the decision alternatives, criteria, and preferences with interactions such as adding, modifying, changing mapping, and filtering. Our review highlights the opportunities and challenges that decision-focused visualization tools face in realizing their full potential to support all stages of the decision making process. It reveals a surprising scarcity of tools that support all stages, and while most tools excel in offering visibility for decision criteria and alternatives, the degree of flexibility to manipulate these elements is often limited, and the lack of tools that accommodate decision preferences and their elicitation is notable. Based on our findings, to better support the choice stage, future research could explore enhancing flexibility levels and variety, exploring novel visualization paradigms, increasing algorithmic support, and ensuring that this automation is user-controlled via the enhanced flexibility levels. Our curated list of the 88 surveyed visualization tools is available on OSF (https://osf.io/nrasz/?view_only=b92a90a34ae241449b5f2cd33383bfcb).
10. Piccolotto N, Bögl M, Miksch S. Visual Parameter Space Exploration in Time and Space. Computer Graphics Forum 2023; 42:e14785. [PMID: 38505647, PMCID: PMC10947302, DOI: 10.1111/cgf.14785]
Abstract
Computational models, such as simulations, are central to a wide range of fields in science and industry. Those models take input parameters and produce some output. To fully exploit their utility, relations between parameters and outputs must be understood. These include, for example, which parameter setting produces the best result (optimization) or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve for various reasons, for example, the size of the parameter space, and are therefore supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and approaches to support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.
Affiliation(s)
- Nikolaus Piccolotto: TU Wien, Institute of Visual Computing and Human-Centered Technology, Wien, Austria
- Markus Bögl: TU Wien, Institute of Visual Computing and Human-Centered Technology, Wien, Austria
- Silvia Miksch: TU Wien, Institute of Visual Computing and Human-Centered Technology, Wien, Austria
11. Kesavan SP, Bhatia H, Bhatele A, Brink S, Pearce O, Gamblin T, Bremer PT, Ma KL. Scalable Comparative Visualization of Ensembles of Call Graphs. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1691-1704. [PMID: 34797765, DOI: 10.1109/tvcg.2021.3129414]
Abstract
Optimizing the performance of large-scale parallel codes is critical for efficient utilization of computing resources. Code developers often explore various execution parameters, such as hardware configurations, system software choices, and application parameters, and are interested in detecting and understanding bottlenecks in different executions. They often collect hierarchical performance profiles represented as call graphs, which combine performance metrics with their execution contexts. The crucial task of exploring multiple call graphs together is tedious and challenging because of the many structural differences in the execution contexts and significant variability in the collected performance metrics (e.g., execution runtime). In this paper, we present Ensemble CallFlow to support the exploration of ensembles of call graphs using new types of visualizations, analysis, graph operations, and features. We introduce ensemble-Sankey, a new visual design that combines the strengths of resource-flow (Sankey) and box-plot visualization techniques. Whereas the resource-flow visualization can easily and intuitively describe the graphical nature of the call graph, the box plots overlaid on the nodes of Sankey convey the performance variability within the ensemble. Our interactive visual interface provides linked views to help explore ensembles of call graphs, e.g., by facilitating the analysis of structural differences, and identifying similar or distinct call graphs. We demonstrate the effectiveness and usefulness of our design through case studies on large-scale parallel codes.
12. WaterExcVA: a system for exploring and visualizing data exception in urban water supply. J Vis (Tokyo) 2023. [DOI: 10.1007/s12650-023-00911-9]
13. Choi J, Lee SE, Lee Y, Cho E, Chang S, Jeong WK. DXplorer: A Unified Visualization Framework for Interactive Dendritic Spine Analysis Using 3D Morphological Features. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1424-1437. [PMID: 34591770, DOI: 10.1109/tvcg.2021.3116656]
Abstract
Dendritic spines are dynamic, submicron-scale protrusions on neuronal dendrites that receive neuronal inputs. Morphological changes in the dendritic spine often reflect alterations in physiological conditions and are indicators of various neuropsychiatric conditions. However, owing to the highly dynamic and heterogeneous nature of spines, accurate measurement and objective analysis of spine morphology are major challenges in neuroscience research. Most conventional approaches for analyzing dendritic spines are based on two-dimensional (2D) images, which barely reflect the actual three-dimensional (3D) shapes. Although some recent studies have attempted to analyze spines with various 3D-based features, it is still difficult to objectively categorize and analyze spines based on 3D morphology. Here, we propose a unified visualization framework for an interactive 3D dendritic spine analysis system, DXplorer, that displays 3D rendering of spines and plots the high-dimensional features extracted from the 3D mesh of spines. With this system, users can perform the clustering of spines interactively and explore and analyze dendritic spines based on high-dimensional features. We propose a series of high-dimensional morphological features extracted from a 3D mesh of dendritic spines. In addition, an interactive machine learning classifier with visual exploration and user feedback using an interactive 3D mesh grid view ensures a more precise classification based on the spine phenotype. A user study and two case studies were conducted to quantitatively verify the performance and usability of DXplorer. We demonstrate that the system performs the entire analytic process effectively and provides high-quality, accurate, and objective analysis.
14. Mota R, Ferreira N, Silva JD, Horga M, Lage M, Ceferino L, Alim U, Sharlin E, Miranda F. A Comparison of Spatiotemporal Visualizations for 3D Urban Analytics. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1277-1287. [PMID: 36166521, DOI: 10.1109/tvcg.2022.3209474]
Abstract
Recent technological innovations have led to an increase in the availability of 3D urban data, such as shadow, noise, solar potential, and earthquake simulations. These spatiotemporal datasets create opportunities for new visualizations to engage experts from different domains to study the dynamic behavior of urban spaces in this underexplored dimension. However, designing 3D spatiotemporal urban visualizations is challenging, as it requires visual strategies to support analysis of time-varying data referent to the city geometry. Although different visual strategies have been used in 3D urban visual analytics, the question of how effective these visual designs are at supporting spatiotemporal analysis on building surfaces remains open. To investigate this, in this paper we first contribute a series of analytical tasks elicited after interviews with practitioners from three urban domains. We also contribute a quantitative user study comparing the effectiveness of four representative visual designs used to visualize 3D spatiotemporal urban data: spatial juxtaposition, temporal juxtaposition, linked view, and embedded view. Participants performed a series of tasks that required them to identify extreme values on building surfaces over time. Tasks varied in granularity for both space and time dimensions. Our results demonstrate that participants were more accurate using plot-based visualizations (linked view, embedded view) but faster using color-coded visualizations (spatial juxtaposition, temporal juxtaposition). Our results also show that, with increasing task complexity, plot-based visualizations preserve efficiency (time, accuracy) better than color-coded visualizations. Based on our findings, we present a set of takeaways with design recommendations for 3D spatiotemporal urban visualizations for researchers and practitioners. Lastly, we report on a series of interviews with four practitioners, and their feedback and suggestions for further work on the visualizations to support 3D spatiotemporal urban data analysis.
15. Gaba A, Setlur V, Srinivasan A, Hoffswell J, Xiong C. Comparison Conundrum and the Chamber of Visualizations: An Exploration of How Language Influences Visual Design. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1211-1221. [PMID: 36155465, DOI: 10.1109/tvcg.2022.3209456]
Abstract
The language for expressing comparisons is often complex and nuanced, making supporting natural language-based visual comparison a non-trivial task. To better understand how people reason about comparisons in natural language, we explore a design space of utterances for comparing data entities. We identified different parameters of comparison utterances that indicate what is being compared (i.e., data variables and attributes) as well as how these parameters are specified (i.e., explicitly or implicitly). We conducted a user study with sixteen data visualization experts and non-experts to investigate how they designed visualizations for comparisons in our design space. Based on the rich set of visualization techniques observed, we extracted key design features from the visualizations and synthesized them into a subset of sixteen representative visualization designs. We then conducted a follow-up study to validate user preferences for the sixteen representative visualizations corresponding to utterances in our design space. Findings from these studies suggest guidelines and future directions for designing natural language interfaces and recommendation tools to better support natural language comparisons in visual analytics.
16. Zhou J, Wang X, Wong JK, Wang H, Wang Z, Yang X, Yan X, Feng H, Qu H, Ying H, Chen W. DPVisCreator: Incorporating Pattern Constraints to Privacy-preserving Visualizations via Differential Privacy. IEEE Transactions on Visualization and Computer Graphics 2023; 29:809-819. [PMID: 36166552, DOI: 10.1109/tvcg.2022.3209391]
Abstract
Data privacy is an essential issue in publishing data visualizations. However, it is challenging to represent multiple data patterns in privacy-preserving visualizations. Prior approaches target specific chart types or apply an anonymization model uniformly without considering the importance of data patterns in visualizations. In this paper, we propose a visual analytics approach that helps data custodians generate multiple private charts while maintaining user-preferred patterns. To this end, we introduce pattern constraints to model users' preferences over data patterns in the dataset and incorporate them into the proposed Bayesian network-based Differential Privacy (DP) model PriVis. A prototype system, DPVisCreator, is developed to assist data custodians in implementing our approach. The effectiveness of our approach is demonstrated with quantitative evaluation of pattern utility under different levels of privacy protection, case studies, and semi-structured expert interviews.
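PriVis is a Bayesian network-based DP model; as background only, the textbook Laplace mechanism below shows the basic differential-privacy primitive of releasing noisy counts. It is not the paper's method, but it illustrates the privacy/utility trade-off the system negotiates:

```python
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0):
    # Standard Laplace mechanism: adding noise with scale sensitivity/epsilon
    # to a count query satisfies epsilon-differential privacy.
    return true_count + np.random.laplace(scale=sensitivity / epsilon)

counts = {"A": 120, "B": 45, "C": 78}       # hypothetical histogram for a bar chart
private = {k: laplace_count(v, epsilon=1.0) for k, v in counts.items()}
print(private)  # smaller epsilon -> more noise -> weaker visual patterns
```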
17. Sevastjanova R, Cakmak E, Ravfogel S, Cotterell R, El-Assady M. Visual Comparison of Language Model Adaptation. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1178-1188. [PMID: 36166530, DOI: 10.1109/tvcg.2022.3209458]
Abstract
Neural language models are widely used; however, their model parameters often need to be adapted to the specific domains and tasks of an application, which is time- and resource-consuming. Thus, adapters have recently been introduced as a lightweight alternative for model adaptation. They consist of a small set of task-specific parameters with a reduced training time and simple parameter composition. The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing their produced embedding spaces. To help developers overcome these challenges, we provide a twofold contribution. First, in close collaboration with NLP researchers, we conducted a requirement analysis for an approach supporting adapter evaluation and detected, among others, the need for both intrinsic (i.e., embedding similarity-based) and extrinsic (i.e., prediction-based) explanation methods. Second, motivated by the gathered requirements, we designed a flexible visual analytics workspace that enables the comparison of adapter properties. In this paper, we discuss several design iterations and alternatives for interactive, comparative visual explanation methods. Our comparative visualizations show the differences in the adapted embedding vectors and prediction outcomes for diverse human-interpretable concepts (e.g., person names, human qualities). We evaluate our workspace through case studies and show that, for instance, an adapter trained on the language debiasing task according to context-0 (decontextualized) embeddings introduces a new type of bias where words (even gender-independent words such as countries) become more similar to female than to male pronouns. We demonstrate that these are artifacts of context-0 embeddings, and the adapter effectively eliminates the gender information from the contextualized word representations.
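A minimal sketch of the kind of embedding-space comparison described, using random stand-in vectors rather than real adapter embeddings: compare a word's cosine similarity to gendered pronouns before and after adaptation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Synthetic stand-ins for base-model and adapter-modified embeddings.
base = {w: rng.normal(size=64) for w in ["france", "she", "he"]}
adapted = {w: v + rng.normal(scale=0.1, size=64) for w, v in base.items()}

# Positive values mean the word sits closer to "she" than to "he".
for name, emb in [("base", base), ("adapted", adapted)]:
    bias = cosine(emb["france"], emb["she"]) - cosine(emb["france"], emb["he"])
    print(f"{name}: she-vs-he similarity gap for 'france' = {bias:+.3f}")
```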
18. Aichem M, Klein K, Czauderna T, Garkov D, Zhao J, Li J, Schreiber F. Towards a hybrid user interface for the visual exploration of large biomolecular networks using virtual reality. J Integr Bioinform 2022; 19:jib-2022-0034. [PMID: 36215728, PMCID: PMC9800044, DOI: 10.1515/jib-2022-0034]
Abstract
Biomolecular networks, including genome-scale metabolic models (GSMMs), assemble the knowledge regarding the biological processes that happen inside specific organisms in a way that allows for analysis, simulation, and exploration. With the increasing availability of genome annotations and the development of powerful reconstruction tools, biomolecular networks continue to grow ever larger. While visual exploration can facilitate the understanding of such networks, the network sizes represent a major challenge for current visualisation systems. Building on promising results from the area of immersive analytics, which among others deals with the potential of immersive visualisation for data analysis, we present a concept for a hybrid user interface that combines a classical desktop environment with a virtual reality environment for the visual exploration of large biomolecular networks and corresponding data. We present system requirements and design considerations, describe a resulting concept, an envisioned technical realisation, and a systems biology usage scenario. Finally, we discuss remaining challenges.
Affiliation(s)
- Michael Aichem: Department of Computer and Information Science, University of Konstanz, Konstanz, Germany
- Karsten Klein: Department of Computer and Information Science, University of Konstanz, Konstanz, Germany
- Tobias Czauderna: Faculty of Applied Computer Sciences & Biosciences, University of Applied Sciences Mittweida, Mittweida, Germany
- Dimitar Garkov: Department of Computer and Information Science, University of Konstanz, Konstanz, Germany
- Jinxin Zhao: Infection Program and Department of Microbiology, Biomedicine Discovery Institute, Monash University, Melbourne, Australia
- Jian Li: Infection Program and Department of Microbiology, Biomedicine Discovery Institute, Monash University, Melbourne, Australia
- Falk Schreiber: Department of Computer and Information Science, University of Konstanz, Konstanz, Germany; Faculty of Information Technology, Monash University, Melbourne, Australia
19. Joos L, Jaeger-Honz S, Schreiber F, Keim DA, Klein K. Visual Comparison of Networks in VR. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3651-3661. [PMID: 36048995, DOI: 10.1109/tvcg.2022.3203001]
Abstract
Networks are an important means for the representation and analysis of data in a variety of research and application areas. While there are many efficient methods to create layouts for networks to support their visual analysis, approaches for the comparison of networks are still underexplored. Especially when it comes to the comparison of weighted networks, which is an important task in several areas, such as biology and biomedicine, there is a lack of efficient visualization approaches. With the availability of affordable high-quality virtual reality (VR) devices, such as head-mounted displays (HMDs), the research field of immersive analytics emerged and showed great potential for using the new technology for visual data exploration. However, the use of immersive technology for the comparison of networks is still underexplored. With this work, we explore how weighted networks can be visually compared in an immersive VR environment and investigate how visual representations can benefit from the extended 3D design space. For this purpose, we develop different encodings for 3D node-link diagrams supporting the visualization of two networks within a single representation and evaluate them in a pilot user study. We incorporate the results into a more extensive user study comparing node-link representations with matrix representations encoding two networks simultaneously. The data and tasks designed for our experiments are similar to those occurring in real-world scenarios. Our evaluation shows significantly better results for the node-link representations, which is contrary to comparable 2D experiments and indicates a high potential for using VR for the visual comparison of networks.
20. Annanias Y, Zeckzer D, Scheuermann G, Wiegreffe D. An Interactive Decision Support System for Land Reuse Tasks. IEEE Computer Graphics and Applications 2022; 42:72-83. [PMID: 35594239, DOI: 10.1109/mcg.2022.3175604]
Abstract
Experts face the task of deciding where and how land reuse (transforming previously used areas into landscape and utility areas) can be performed. This decision is based on which area should be used, which restrictions exist, and which conditions have to be fulfilled for reusing this area. Information about the restrictions and the conditions is available as mostly textual, nonspatial data associated with areas overlapping the target areas. Due to the large number of possible combinations of restrictions and conditions overlapping (partially) the target area, this decision process becomes quite tedious and cumbersome. Moreover, it proves useful to identify similar regions that have reached different stages of development within the dataset, which in turn allows determining common tasks for these regions. We support the experts in accomplishing these tasks by providing aggregated representations as well as multiple coordinated views, together with category filters and selection mechanisms, implemented in an interactive decision support system. Textual information is linked to these visualizations, enabling the experts to justify their decisions. An evaluation of our approach using a standard SUS questionnaire suggests that the experts in particular were very satisfied with the interactive decision support system.
21. Halladjian S, Kouřil D, Miao H, Gröller ME, Viola I, Isenberg T. Multiscale Unfolding: Illustratively Visualizing the Whole Genome at a Glance. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3456-3470. [PMID: 33705319, DOI: 10.1109/tvcg.2021.3065443]
Abstract
We present Multiscale Unfolding, an interactive technique for illustratively visualizing multiple hierarchical scales of DNA in a single view, showing the genome at different scales and demonstrating how one scale spatially folds into the next. The DNA's extremely long sequential structure, arranged differently on several distinct scale levels, is often lost in traditional 3D depictions, mainly due to its multiple levels of dense spatial packing and the resulting occlusion. Furthermore, interactive exploration of this complex structure is cumbersome, requiring visibility management like cut-aways. In contrast to existing temporally controlled multiscale data exploration, we allow viewers to always see and interact with any of the involved scales. For this purpose we separate the depiction into constant-scale and scale transition zones. Constant-scale zones maintain a single-scale representation, while still linearly unfolding the DNA. Inspired by illustration, scale transition zones connect adjacent constant-scale zones via level unfolding, scaling, and transparency. We thus represent the spatial structure of the whole DNA macro-molecule, maintain its local organizational characteristics, linearize its higher-level organization, and use spatially controlled, understandable interpolation between neighboring scales. We also contribute interaction techniques that provide viewers with coarse-to-fine control for navigating within our all-scales-in-one-view representations and visual aids to illustrate the size differences. Overall, Multiscale Unfolding allows viewers to grasp the DNA's structural composition from chromosomes to atoms, with increasing levels of "unfoldedness," and can be applied in data-driven illustration and communication.
22. Heimerl F, Kralj C, Möller T, Gleicher M. embComp: Visual Interactive Comparison of Vector Embeddings. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2953-2969. [PMID: 33347410, DOI: 10.1109/tvcg.2020.3045918]
Abstract
This article introduces embComp, a novel approach for comparing two embeddings that capture the similarity between objects, such as word and document embeddings. We survey scenarios where comparing these embedding spaces is useful. From those scenarios, we derive common tasks, introduce visual analysis methods that support these tasks, and combine them into a comprehensive system. One of embComp's central features is a set of overview visualizations based on metrics for measuring differences in the local structure around objects. Summarizing these local metrics over the embeddings provides global overviews of similarities and differences. Detail views allow comparison of the local structure around selected objects and relating this local information to the global views. Integrating and connecting all of these components, embComp supports a range of analysis workflows that help understand similarities and differences between embedding spaces. We assess our approach by applying it in several use cases, including understanding corpora differences via word vector embeddings, and understanding algorithmic differences in generating embeddings.
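One standard instantiation of such local-structure metrics (an assumption here, not necessarily embComp's exact formulation) is the Jaccard overlap of k-nearest-neighbor sets computed separately in each embedding:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k):
    # k+1 neighbors because each point's nearest neighbor is itself.
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    return [set(row[1:]) for row in idx]  # drop self

rng = np.random.default_rng(1)
emb_a = rng.normal(size=(200, 50))                    # stand-in embedding A
emb_b = emb_a + rng.normal(scale=0.5, size=emb_a.shape)  # perturbed embedding B

# Per-object neighborhood agreement between the two spaces.
overlap = [len(a & b) / len(a | b) for a, b in zip(knn_sets(emb_a, 10), knn_sets(emb_b, 10))]
print("mean neighborhood Jaccard:", np.mean(overlap))
```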
23. Wu E. View Composition Algebra for Ad Hoc Comparison. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2470-2485. [PMID: 35180082, DOI: 10.1109/tvcg.2022.3152515]
Abstract
Comparison is a core task in visual analysis. Although there are numerous guidelines to help users design effective visualizations for known comparison tasks, few techniques are available when users want to make ad hoc comparisons between marks, trends, or charts during data exploration and visual analysis; for instance, comparing voting count maps from different years, two stock trends in a line chart, or a scatterplot of country GDPs with a textual summary of the average GDP. Ideally, users could directly select the comparison targets and compare them; however, what elements of a visualization should be candidate targets, which combinations of targets are safe to compare, and what comparison operations make sense? This article proposes a conceptual model that lets users compose combinations of values, marks, legend elements, and charts using a set of composition operators that summarize, compute differences, merge, and model their operands. We further define a View Composition Algebra (VCA) that is compatible with datacube-based visualizations, derive an interaction design based on this algebra that supports ad hoc visual comparisons, and illustrate its utility through several use cases.
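As a toy illustration of a "difference" composition in spirit (VCA's operators are richer and defined over datacube visualizations; the data below is invented), one can subtract the statistics behind one chart selection from another with pandas:

```python
import pandas as pd

# Hypothetical data behind two bar charts: vote counts per state for two years.
votes = pd.DataFrame({
    "state": ["A", "B", "C"] * 2,
    "year": [2016] * 3 + [2020] * 3,
    "count": [10, 20, 30, 12, 18, 33],
})

# Difference composition: align the two selections on their shared key
# (state) and subtract one chart's measure from the other's.
diff = (votes[votes.year == 2020].set_index("state")["count"]
        - votes[votes.year == 2016].set_index("state")["count"])
print(diff)  # the data for a derived "2020 minus 2016" difference chart
```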
24. Alharbi M, Laramee RS, Cheesman T. TransVis: Integrated Distant and Close Reading of Othello Translations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1397-1414. [PMID: 32746287, DOI: 10.1109/tvcg.2020.3012778]
Abstract
Studying variation among time-evolved translations is a valuable research area for cultural heritage. Understanding how and why translations vary reveals cultural, ideological, and even political influences on literature as well as author relations. In this article, we introduce a novel integrated visual application to support distant and close reading of a collection of Othello translations. We present a new interactive application that provides an alignment overview of all the translations and their correspondences in parallel with smooth zooming and panning capability to integrate distant and close reading within the same view. We provide a range of filtering and selection options to customize the alignment overview as well as focus on specific subsets. Selection and filtering are responsive to expert user preferences and update the analytical text metrics interactively. Also, we introduce a customized view for close reading which preserves the history of selections and the alignment overview state and enables backtracing and re-examining them. Finally, we present a new Term-Level Comparisons view (TLC) to compare and convey relative term weighting in the context of an alignment. Our visual design was guided by, and has been used and evaluated by, a domain expert specializing in German translations of Shakespeare.
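Alignment overviews like this are typically built on sequence matching; a minimal sketch with the standard library's SequenceMatcher over two invented paraphrases (not the corpus or alignment method used by TransVis):

```python
from difflib import SequenceMatcher

# Invented paraphrases standing in for two translations of the same passage.
t1 = "O beware my lord of jealousy it is the green-eyed monster".split()
t2 = "Beware jealousy my lord the green-eyed monster".split()

sm = SequenceMatcher(a=t1, b=t2)
print("similarity:", round(sm.ratio(), 2))
for block in sm.get_matching_blocks():  # shared token runs = alignment anchors
    if block.size:
        print("aligned span:", " ".join(t1[block.a:block.a + block.size]))
```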
25. Lohfink AP, Anton SDD, Leitte H, Garth C. Knowledge Rocks: Adding Knowledge Assistance to Visualization Systems. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1117-1127. [PMID: 34591761, DOI: 10.1109/tvcg.2021.3114687]
Abstract
We present Knowledge Rocks, an implementation strategy and guideline for augmenting visualization systems into knowledge-assisted visualization systems, as defined by the KAVA model. Visualization systems are becoming more and more sophisticated. Hence, it is increasingly important to support users with an integrated knowledge base in making constructive choices and drawing the right conclusions. We support the effective reactivation of visualization software resources by augmenting them with knowledge assistance. To provide a general and yet supportive implementation strategy, we propose an implementation process based on an application-agnostic architecture. This architecture is derived from existing knowledge-assisted visualization systems and the KAVA model. Its centerpiece is an ontology that automatically analyzes and classifies input data, linked to a database that stores the classified instances. We discuss design decisions and advantages of the KR framework and illustrate its broad area of application through diverse possibilities for integrating this architecture into an existing visualization system. In addition, we provide a detailed case study in which we augment an IT-security system with knowledge-assistance facilities.
26. Fujiwara T, Wei X, Zhao J, Ma KL. Interactive Dimensionality Reduction for Comparative Analysis. IEEE Transactions on Visualization and Computer Graphics 2022; 28:758-768. [PMID: 34591765, DOI: 10.1109/tvcg.2021.3114807]
Abstract
Finding the similarities and differences between groups of datasets is a fundamental analysis task. For high-dimensional data, dimensionality reduction (DR) methods are often used to find the characteristics of each group. However, existing DR methods provide limited capability and flexibility for such comparative analysis as each method is designed only for a narrow analysis target, such as identifying factors that most differentiate groups. This paper presents an interactive DR framework where we integrate our new DR method, called ULCA (unified linear comparative analysis), with an interactive visual interface. ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks. To provide flexibility for comparative analysis, we develop an optimization algorithm that enables analysts to interactively refine ULCA results. Additionally, the interactive visualization interface facilitates interpretation and refinement of the ULCA results. We evaluate ULCA and the optimization algorithm to show their efficiency as well as present multiple case studies using real-world datasets to demonstrate the usefulness of this framework.
27. Das S, Saket B, Kwon BC, Endert A. Geono-Cluster: Interactive Visual Cluster Analysis for Biologists. IEEE Transactions on Visualization and Computer Graphics 2021; 27:4401-4412. [PMID: 32746262, DOI: 10.1109/tvcg.2020.3002166]
Abstract
Biologists often perform clustering analysis to derive meaningful patterns, relationships, and structures from data instances and attributes. Though clustering plays a pivotal role in biologists' data exploration, it takes non-trivial effort for biologists to find the best grouping in their data using existing tools. Visual cluster analysis is currently performed either programmatically or through menus and dialogues in many tools, which require parameter adjustments over several steps of trial-and-error. In this article, we introduce Geono-Cluster, a novel visual analysis tool designed to support cluster analysis for biologists who do not have formal data science training. Geono-Cluster enables biologists to apply their domain expertise to clustering results by visually demonstrating how their expected clustering outputs should look with a small sample of data instances. The system then predicts users' intentions and generates potential clustering results. Our study follows the design study protocol to derive biologists' tasks and requirements, design the system, and evaluate the system with experts on their own dataset. Results of our study with six biologists provide initial evidence that Geono-Cluster enables biologists to create, refine, and evaluate clustering results to effectively analyze their data and gain data-driven insights. Finally, we discuss lessons learned and the implications of our study.
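One simple way to approximate demonstration-based clustering (a sketch under that assumption, not Geono-Cluster's prediction model) is to seed k-means with the centroids of the groups a user has demonstrated:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Synthetic two-cluster data standing in for a biologist's table.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# The "demonstration": the user drags a few samples into two piles;
# their means seed the clustering instead of random initialization.
demo_groups = [data[:3], data[50:53]]
seeds = np.array([g.mean(axis=0) for g in demo_groups])

labels = KMeans(n_clusters=2, init=seeds, n_init=1).fit_predict(data)
print(np.bincount(labels))  # cluster sizes induced by the demonstration
```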
28. RallyComparator: visual comparison of the multivariate and spatial stroke sequence in table tennis rally. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-021-00772-0]
29. Shi L, Hu J, Tan Z, Tao J, Ding J, Jin Y, Wu Y, Thompson P. MV²Net: Multi-Variate Multi-View Brain Network Comparison over Uncertain Data. IEEE Transactions on Visualization and Computer Graphics 2021; PP:4640-4657. [PMID: 34283716, DOI: 10.1109/tvcg.2021.3098123]
Abstract
Visually identifying effective bio-markers from human brain networks poses non-trivial challenges to the field of data visualization and analysis. Existing methods in the literature and neuroscience practice are generally limited to the study of individual connectivity features in the brain (e.g., the strength of neural connection among brain regions). Pairwise comparisons between contrasting subject groups (e.g., the diseased and the healthy controls) are normally performed. The underlying neuroimaging and brain network construction process is assumed to have 100% fidelity. Yet, real-world user requirements for brain network visual comparison conflict with these assumptions. In this work, we present MV²Net, a visual analytics system that tightly integrates multi-variate multi-view visualization for brain network comparison with an interactive wrangling mechanism to deal with data uncertainty. On the analysis side, the system integrates multiple extraction methods on diffusion and geometric connectivity features of brain networks, an anomaly detection algorithm for data quality assessment, and single- and multi-connection feature selection methods for bio-marker detection. On the visualization side, novel designs are introduced which optimize network comparisons among contrasting subject groups and related connectivity features. Our design provides level-of-detail comparisons, from juxtaposed and explicit-coding views for subject group comparisons, to a high-order composite view for correlation of network comparisons, and to a fiber tract detail view for voxel-level comparisons. The proposed techniques are inspired and evaluated in expert studies, as well as through case analyses on diffusion and geometric bio-markers of certain neurological diseases. Results in these experiments demonstrate the effectiveness and superiority of MV²Net over state-of-the-art approaches.
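A common building block for such group-level network comparison (a generic sketch, not MV²Net's pipeline; data and effect are synthetic) is an edge-wise statistical test between two subject groups' connectivity matrices:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n_regions = 10
# Synthetic connectivity matrices: 20 subjects per group, regions x regions.
healthy = rng.normal(0.5, 0.1, size=(20, n_regions, n_regions))
patients = rng.normal(0.5, 0.1, size=(20, n_regions, n_regions))
patients[:, 2, 7] += 0.2  # implant one group difference at edge (2, 7)

# Edge-wise two-sample t-test across subjects (axis 0 = subjects).
t, p = ttest_ind(healthy, patients, axis=0)
i, j = np.unravel_index(np.argmin(p), p.shape)
print(f"strongest group difference at edge ({i}, {j}), p = {p[i, j]:.3g}")
```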
30. Cao K, Liu M, Su H, Wu J, Zhu J, Liu S. Analyzing the Noise Robustness of Deep Neural Networks. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3289-3304. [PMID: 31985427, DOI: 10.1109/tvcg.2020.2969185]
Abstract
Adversarial examples, generated by adding small, intentionally imperceptible perturbations to normal examples, can mislead deep neural networks (DNNs) into making incorrect predictions. Although much work has been done on both adversarial attack and defense, a fine-grained understanding of adversarial examples is still lacking. To address this issue, we present a visual analysis method to explain why adversarial examples are misclassified. The key is to compare and analyze the datapaths of both the adversarial and normal examples. A datapath is a group of critical neurons along with their connections. We formulate the datapath extraction as a subset selection problem and solve it by constructing and training a neural network. A multi-level visualization consisting of a network-level visualization of data flows, a layer-level visualization of feature maps, and a neuron-level visualization of learned features has been designed to help investigate how datapaths of adversarial and normal examples diverge and merge in the prediction process. A quantitative evaluation and a case study were conducted to demonstrate the promise of our method to explain the misclassification of adversarial examples.
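The paper analyzes rather than introduces attacks; as background, the classic recipe for generating such examples is the Fast Gradient Sign Method (FGSM). A minimal sketch with a stand-in linear classifier:

```python
import torch

def fgsm(model, x, y, eps):
    # Fast Gradient Sign Method: one step along the sign of the input gradient,
    # producing a perturbation bounded by eps per pixel.
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Stand-in classifier and random "images"; a real attack would target a trained model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))

x_adv = fgsm(model, x, y, eps=0.03)
print((x_adv - x).abs().max())  # perturbation magnitude stays <= eps
```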
31. Chen C, Yuan J, Lu Y, Liu Y, Su H, Yuan S, Liu S. OoDAnalyzer: Interactive Analysis of Out-of-Distribution Samples. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3335-3349. [PMID: 32070976, DOI: 10.1109/tvcg.2020.2973258]
Abstract
One major cause of performance degradation in predictive models is that the test samples are not well covered by the training data. Such poorly represented samples are called out-of-distribution (OoD) samples. In this article, we propose OoDAnalyzer, a visual analysis approach for interactively identifying OoD samples and explaining them in context. Our approach integrates an ensemble OoD detection method and a grid-based visualization. The detection method improves on deep ensembles by combining more features with algorithms from the same family. To better analyze and understand the OoD samples in context, we have developed a novel kNN-based grid layout algorithm motivated by Hall's theorem. The algorithm approximates the optimal layout and has O(kN²) time complexity, faster than the grid layout algorithm with the overall best performance, which has O(N³) time complexity. Quantitative evaluation and case studies were performed on several datasets to demonstrate the effectiveness and usefulness of OoDAnalyzer.
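The optimal point-to-grid-cell assignment that such grid layouts approximate can be computed at cubic cost with the Hungarian algorithm; a sketch on random stand-in projection coordinates (the paper's kNN approximation itself is not reproduced here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
pts = rng.random((64, 2))  # stand-in 2D projection coordinates in [0, 1]^2

# Centers of an 8x8 grid to place the samples on.
side = 8
axis = np.linspace(0, 1, side)
cells = np.stack(np.meshgrid(axis, axis), -1).reshape(-1, 2)

# Cost: squared distance from each point to each grid cell center.
cost = ((pts[:, None, :] - cells[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)  # optimal assignment, O(N^3)
grid_pos = cells[cols]                    # grid slot assigned to each sample
print("mean displacement:", np.linalg.norm(pts[rows] - grid_pos, axis=1).mean())
```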
Collapse
|
32
|
Tovanich N, Heulot N, Fekete JD, Isenberg P. Visualization of Blockchain Data: A Systematic Review. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:3135-3152. [PMID: 31899429 DOI: 10.1109/tvcg.2019.2963018] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We present a systematic review of visual analytics tools used for the analysis of blockchains-related data. The blockchain concept has recently received considerable attention and spurred applications in a variety of domains. We systematically and quantitatively assessed 76 analytics tools that have been proposed in research as well as online by professionals and blockchain enthusiasts. Our classification of these tools distinguishes (1) target blockchains, (2) blockchain data, (3) target audiences, (4) task domains, and (5) visualization types. Furthermore, we look at which aspects of blockchain data have already been explored and point out areas that deserve more investigation in the future.
Collapse
|
33
|
Pflüger H. A language to analyze, describe, and explore collections of visual art. Vis Comput Ind Biomed Art 2021; 4:5. [PMID: 33646448 PMCID: PMC7921272 DOI: 10.1186/s42492-021-00071-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Accepted: 02/08/2021] [Indexed: 11/14/2022] Open
Abstract
A vast quantity of art in existence today is inaccessible to individuals. If people want to know the different types of art that exist, how individual works are connected, and how works of art are interpreted and discussed in the context of other works, they must utilize means other than simply viewing the art. Therefore, this paper proposes a language to analyze, describe, and explore collections of visual art (LadeCA). LadeCA combines human interpretation and automatic analyses of images, allowing users to assess collections of visual art without viewing every image in them. This paper focuses on the lexical base of LadeCA. It also outlines how collections of visual art can be analyzed, described, and explored using a LadeCA vocabulary. Additionally, the relationship between LadeCA and indexing systems, such as ICONCLASS or AAT, is demonstrated, and ways in which LadeCA and indexing systems can complement each other are highlighted.
Collapse
Affiliation(s)
- Hermann Pflüger
- Institute for Visualization and Interactive Systems, University of Stuttgart, 70569, Stuttgart, Germany.
Collapse
|
34
|
Kim Y, Kim J, Jeon H, Kim YH, Song H, Kim B, Seo J. Githru: Visual Analytics for Understanding Software Development History Through Git Metadata Analysis. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:656-666. [PMID: 33048722 DOI: 10.1109/tvcg.2020.3030414] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Git metadata contains rich information for developers to understand the overall context of a large software development project. Thus it can help new developers, managers, and testers understand the history of development without needing to dig into a large pile of unfamiliar source code. However, the current tools for Git visualization are not adequate to analyze and explore the metadata: They focus mainly on improving the usability of Git commands instead of on helping users understand the development history. Furthermore, they do not scale for large and complex Git commit graphs, which can play an important role in understanding the overall development history. In this paper, we present Githru, an interactive visual analytics system that enables developers to effectively understand the context of development history through the interactive exploration of Git metadata. We design an interactive visual encoding idiom to represent a large Git graph in a scalable manner while preserving the topological structures in the Git graph. To enable scalable exploration of a large Git commit graph, we propose novel techniques (graph reconstruction, clustering, and Context-Preserving Squash Merge (CSM) methods) to abstract a large-scale Git commit graph. Based on these Git commit graph abstraction techniques, Githru provides an interactive summary view to help users gain an overview of the development history and a comparison view in which users can compare different clusters of commits. The efficacy of Githru has been demonstrated by case studies with domain experts using real-world, in-house datasets from a large software development team at a major international IT company. A controlled user study with 12 developers comparing Githru to previous tools also confirms the effectiveness of Githru in terms of task completion time.
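As a simplified stand-in for the commit-graph abstraction described above (not the actual Context-Preserving Squash Merge), this sketch collapses strictly linear runs of a toy commit DAG into single chains while leaving merge commits and branch points visible:

```python
from collections import defaultdict

# Toy commit DAG: child -> list of parents ("e" is a merge commit).
parents = {
    "h": ["g"], "g": ["f"], "f": ["e"], "e": ["c", "d"],
    "d": ["b"], "c": ["b"], "b": ["a"], "a": [],
}

children = defaultdict(list)
for child, ps in parents.items():
    for p in ps:
        children[p].append(child)

def squash_chains(parents, children):
    """Collapse strictly linear runs of commits into single chains,
    keeping merges and branch points as their own nodes."""
    squashed, seen = [], set()
    for node in parents:
        if node in seen:
            continue
        chain = [node]
        while True:
            ps = parents[chain[-1]]
            # Extend only while the next commit is a plain, linear commit.
            if len(ps) != 1 or len(children[ps[0]]) != 1 or len(parents[ps[0]]) != 1:
                break
            chain.append(ps[0])
        seen.update(chain)
        squashed.append(chain)
    return squashed

for chain in squash_chains(parents, children):
    print(" -> ".join(chain))
```

On this toy graph the run h, g, f collapses into one node while the merge commit e and the branch point b survive, which is the kind of structure-preserving abstraction the paper's techniques aim for at much larger scale.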
Collapse
|
35
|
Reipschläger P, Flemisch T, Dachselt R. Personal Augmented Reality for Information Visualization on Large Interactive Displays. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1182-1192. [PMID: 33052863 DOI: 10.1109/tvcg.2020.3030460] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In this work we propose the combination of large interactive displays with personal head-mounted Augmented Reality (AR) for information visualization to facilitate data exploration and analysis. Even though large displays provide more display space, they are challenging with regard to perception, effective multi-user support, and managing data density and complexity. To address these issues and illustrate our proposed setup, we contribute an extensive design space comprising first, the spatial alignment of display, visualizations, and objects in AR space. Next, we discuss which parts of a visualization can be augmented. Finally, we analyze how AR can be used to display personal views in order to show additional information and to minimize the mutual disturbance of data analysts. Based on this conceptual foundation, we present a number of exemplary techniques for extending visualizations with AR and discuss their relation to our design space. We further describe how these techniques address typical visualization problems that we have identified during our literature research. To examine our concepts, we introduce a generic AR visualization framework as well as a prototype implementing several example techniques. In order to demonstrate their potential, we further present a use case walkthrough in which we analyze a movie data set. From these experiences, we conclude that the contributed techniques can be useful in exploring and understanding multivariate data. We are convinced that the extension of large displays with AR for information visualization has a great potential for data analysis and sense-making.
Collapse
|
36
|
DeRose JF, Wang J, Berger M. Attention Flows: Analyzing and Comparing Attention Mechanisms in Language Models. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1160-1170. [PMID: 33052855 DOI: 10.1109/tvcg.2020.3028976] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Advances in language modeling have led to the development of deep attention-based models that are performant across a wide variety of natural language processing (NLP) problems. These language models are typified by a pre-training process on large unlabeled text corpora and subsequently fine-tuned for specific tasks. Although considerable work has been devoted to understanding the attention mechanisms of pre-trained models, it is less understood how a model's attention mechanisms change when trained for a target NLP task. In this paper, we propose a visual analytics approach to understanding fine-tuning in attention-based language models. Our visualization, Attention Flows, is designed to support users in querying, tracing, and comparing attention within layers, across layers, and amongst attention heads in Transformer-based language models. To help users gain insight on how a classification decision is made, our design is centered on depicting classification-based attention at the deepest layer and how attention from prior layers flows throughout words in the input. Attention Flows supports the analysis of a single model, as well as the visual comparison between pre-trained and fine-tuned models via their similarities and differences. We use Attention Flows to study attention mechanisms in various sentence understanding tasks and highlight how attention evolves to address the nuances of solving these tasks.
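For readers who want to inspect the raw material such a tool visualizes, here is a minimal sketch using the Hugging Face transformers API (not the authors' tooling) that extracts per-layer, per-head attention from a BERT-style model:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"                 # any BERT-style checkpoint works
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)
model.eval()

inputs = tok("Attention flows through layers.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: tuple of (batch, heads, seq, seq) tensors, one per layer.
for layer, att in enumerate(out.attentions):
    # How much attention each head sends to the [CLS] token, on average.
    to_cls = att[0, :, :, 0].mean(dim=-1)
    print(f"layer {layer:2d} mean attention to [CLS]:",
          [round(v, 3) for v in to_cls.tolist()])
```

Comparing a pre-trained and a fine-tuned checkpoint, as the paper's visual comparison does, then amounts to loading two such models and contrasting these tensors.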
Collapse
|
37
|
Ma Y, Fan A, He J, Nelakurthi AR, Maciejewski R. A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1385-1395. [PMID: 33035164 DOI: 10.1109/tvcg.2020.3028888] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Many statistical learning models hold an assumption that the training data and the future unlabeled data are drawn from the same distribution. However, this assumption is difficult to fulfill in real-world scenarios and creates barriers in reusing existing labels from similar application domains. Transfer Learning is intended to relax this assumption by modeling relationships between domains, and is often applied in deep learning applications to reduce the demand for labeled data and training time. Despite recent advances in exploring deep learning models with visual analytics tools, little work has explored the issue of explaining and diagnosing the knowledge transfer process between deep learning models. In this paper, we present a visual analytics framework for the multi-level exploration of the transfer learning processes when training deep neural networks. Our framework establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task when training deep neural networks. Based on a comprehensive requirement and task analysis, we employ descriptive visualization with performance measures and detailed inspections of model behaviors from the statistical, instance, feature, and model structure levels. We demonstrate our framework through two case studies on image classification by fine-tuning AlexNets to illustrate how analysts can utilize our framework.
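Since the case studies fine-tune AlexNets, the transfer step itself can be sketched with standard torchvision usage (generic practice, not the paper's framework): freeze the pre-trained feature extractor and swap the classifier head for a hypothetical 10-class target task:

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet; weights="IMAGENET1K_V1" would pull pre-trained ImageNet weights.
model = models.alexnet(weights=None)

# Freeze the convolutional feature extractor: its knowledge is "transferred".
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final classifier layer for the hypothetical 10-class target task.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 10)

# Only the unfrozen parameters should be handed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
print(f"{sum(p.numel() for p in trainable):,} trainable parameters")
```

A framework like the one described would then instrument this training process at the statistical, instance, feature, and model-structure levels.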
Collapse
|
38
|
Chen S, Andrienko N, Andrienko G, Li J, Yuan X. Co-Bridges: Pair-wise Visual Connection and Comparison for Multi-item Data Streams. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1612-1622. [PMID: 33125329 DOI: 10.1109/tvcg.2020.3030411] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In various domains, there are abundant streams or sequences of multi-item data of various kinds, e.g., streams of news and social media texts, sequences of genes and sports events, etc. Comparison is an important and general task in data analysis. For comparing data streams involving multiple items (e.g., words in texts, actors or action types in action sequences, visited places in itineraries, etc.), we propose Co-Bridges, a visual design involving connection and comparison techniques that reveal similarities and differences between two streams. Co-Bridges uses river and bridge metaphors, where the two sides of a river represent the data streams, and bridges connect temporally or sequentially aligned segments of the streams. Commonalities and differences between these segments, in terms of the involvement of various items, are shown on the bridges. Interactive query tools support the selection of particular stream subsets for focused exploration. The visualization supports both qualitative (common and distinct items) and quantitative (stream volume, amount of item involvement) comparisons. We further propose Comparison-of-Comparisons, in which two or more Co-Bridges corresponding to different selections are juxtaposed. We test the applicability of Co-Bridges in different domains, including social media text streams and sports event sequences, and evaluate users' capability to understand and use the design. The results confirm that Co-Bridges is effective in supporting pair-wise visual comparisons in a wide range of applications.
Collapse
|
39
|
L'Yi S, Jo J, Seo J. Comparative Layouts Revisited: Design Space, Guidelines, and Future Directions. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1525-1535. [PMID: 33052858 DOI: 10.1109/tvcg.2020.3030419] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We present a systematic review of three comparative layouts (juxtaposition, superposition, and explicit-encoding), which are information visualization (InfoVis) layouts designed to support comparison tasks. For the last decade, these layouts have served as fundamental idioms in designing many visualization systems. However, we found that the layouts have been used with inconsistent terminology, causing confusion, and that the lessons from previous studies are fragmented. The goal of our research is to distill the results from previous studies into a consistent and reusable framework. We review 127 research papers that employed comparative layouts, including 15 papers with quantitative user studies. We first alleviate the ambiguous boundaries in the design space of comparative layouts by suggesting lucid terminology (e.g., chart-wise and item-wise juxtaposition). We then identify the diverse aspects of comparative layouts, such as the advantages and concerns of using each layout in real-world scenarios and researchers' approaches to overcoming the concerns. Building on the initial insights gained from Gleicher et al.'s survey [19], we elaborate on relevant empirical evidence distilled from our survey (e.g., the actual effectiveness of the layouts in different study settings) and identify novel facets that the original work did not cover (e.g., the familiarity of the layouts to people). Finally, we show the consistent and contradictory results on the performance of comparative layouts and offer practical implications for using the layouts, suggesting trade-offs and seven actionable guidelines.
Collapse
|
40
|
Pflüger H, Thom D, Schutz A, Bohde D, Ertl T. VeCHArt: Visually Enhanced Comparison of Historic Art Using an Automated Line-Based Synchronization Technique. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:3063-3076. [PMID: 30946669 DOI: 10.1109/tvcg.2019.2908166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The analysis of subtle deviations between different versions of historical prints has been a long-standing challenge in art history research. So far, this challenge has required extensive domain knowledge, fine-tuned expert perception, and time-consuming manual labor. In this paper we introduce an explorative visual approach to facilitate fast and accurate support for the task of comparing differences between prints such as engravings and woodcuts. To this end, we have developed a customized algorithm that detects similar stroke-patterns in prints and matches them in order to allow visual alignment and automated deviation highlighting. Our visual analytics system enables art history researchers to quickly detect, document, and categorize qualitative and quantitative discrepancies, and to analyze these discrepancies using comprehensive interactions. To evaluate our approach, we conducted a user study involving both experts on historical prints and laypeople. Using our new interactive technique, our subjects found about 20 percent more differences compared to regular image viewing software as well as "paper-based" comparison. Moreover, the laypeople found the same differences as the experts when they used our system, which was not the case for conventional methods. Informal feedback showed that both laypeople and experts strongly preferred employing our system to working with conventional methods.
Collapse
|
41
|
Liu Z, Zhan SH, Munzner T. Aggregated Dendrograms for Visual Comparison between Many Phylogenetic Trees. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2732-2747. [PMID: 30736000 DOI: 10.1109/tvcg.2019.2898186] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
We address the visual comparison of multiple phylogenetic trees that arises in evolutionary biology, specifically between one reference tree and a collection of dozens to hundreds of other trees. We abstract the domain questions of phylogenetic tree comparison as tasks of looking for supporting or conflicting evidence for hypotheses, which requires inspecting both topological structure and attribute values at different levels of detail in the tree collection. We introduce the new visual encoding idiom of aggregated dendrograms to concisely summarize the topological relationships between interactively chosen focal subtrees according to biologically meaningful criteria, and provide a layout algorithm that automatically adapts to the available screen space. We design and implement the ADView system, which represents trees at multiple levels of detail across multiple views: the entire collection, a subset of trees, an individual tree, specific subtrees of interest, and the individual branch level. We benchmark the algorithms developed for ADView, compare its information density to previous work, and demonstrate its utility for quickly gathering evidence about biological hypotheses through usage scenarios with data from recently published phylogenetic analyses and case studies of expert use with real-world data, drawn from a summative interview study.
Collapse
|
42
|
Exploring Multiple and Coordinated Views for Multilayered Geospatial Data in Virtual Reality. INFORMATION 2020. [DOI: 10.3390/info11090425] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Virtual reality (VR) headsets offer a large and immersive workspace for displaying visualizations with stereoscopic vision, as compared to traditional environments with monitors or printouts. The controllers for these devices further allow direct three-dimensional interaction with the virtual environment. In this paper, we make use of these advantages to implement a novel multiple and coordinated view (MCV) system in the form of a vertical stack showing tilted layers of geospatial data. In a formal study based on a use case from urbanism that requires cross-referencing four layers of geospatial urban data, we compared it against more conventional systems similarly implemented in VR: a simpler grid of layers, and a single map that allows switching between layers (blitting). Performance and oculometric analyses showed a slight advantage of the two spatial-multiplexing methods (the grid and the stack) over the temporal multiplexing of blitting. Subgrouping the participants based on their preferences, characteristics, and behavior enabled a more nuanced analysis, letting us establish links between, e.g., saccadic information, experience with video games, and the preferred system. In conclusion, we found that none of the three systems is optimal, and that a choice of different MCV systems should be provided in order to optimally engage users.
Collapse
|
43
|
Sancho-Chavarria L, Beck F, Mata-Montero E. An expert study on hierarchy comparison methods applied to biological taxonomies curation. PeerJ Comput Sci 2020; 6:e277. [PMID: 33816928 PMCID: PMC7924413 DOI: 10.7717/peerj-cs.277] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2019] [Accepted: 05/11/2020] [Indexed: 06/12/2023]
Abstract
Comparison of hierarchies aims at identifying differences and similarities between two or more hierarchical structures. In the biological taxonomy domain, comparison is indispensable for the reconciliation of alternative versions of a taxonomic classification. Biological taxonomies are knowledge structures that may include large numbers of nodes (taxa), which are typically maintained manually. We present the results of a user study with taxonomy experts that evaluates four well-known methods for the comparison of two hierarchies, namely, edge drawing, matrix representation, animation, and agglomeration. Each of these methods is evaluated with respect to seven typical biological taxonomy curation tasks. To this end, we designed an interactive software environment through which expert taxonomists performed exercises representative of the considered tasks. We evaluated participants' effectiveness and level of satisfaction from both quantitative and qualitative perspectives. Overall, the quantitative results show that participants were less effective with agglomeration, whereas they were most satisfied with edge drawing. Qualitative findings reveal a greater preference among participants for the edge drawing method. In addition, from the qualitative analysis, we obtained insights that help explain the differences between the methods and provide directions for future research.
Collapse
Affiliation(s)
- Fabian Beck
- paluno, University of Duisburg-Essen, Essen, North Rhine-Westphalia, Germany
- Erick Mata-Montero
- School of Computing, Costa Rica Institute of Technology, Cartago, Cartago, Costa Rica
Collapse
|
44
|
Qi J, Bloemen V, Wang S, van Wijk J, van de Wetering H. STBins: Visual Tracking and Comparison of Multiple Data Sequences Using Temporal Binning. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1054-1063. [PMID: 31425095 DOI: 10.1109/tvcg.2019.2934289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
While analyzing multiple data sequences, the following questions typically arise: how does a single sequence change over time, how do multiple sequences compare within a period, and how does such comparison change over time. This paper presents a visual technique named STBins to answer these questions. STBins is designed for visual tracking of individual data sequences and also for comparison of sequences. The latter is done by showing the similarity of sequences within temporal windows. A perception study is conducted to examine the readability of alternative visual designs based on sequence tracking and comparison tasks. Also, two case studies based on real-world datasets are presented in detail to demonstrate usage of our technique.
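The core binning-and-similarity idea can be sketched in a few lines (our own minimal reading, not the STBins implementation): split two timestamped item sequences into fixed temporal bins and score each bin by Jaccard similarity:

```python
def bin_events(events, width):
    """Group (timestamp, item) pairs into bins of the given temporal width."""
    bins = {}
    for t, item in events:
        bins.setdefault(int(t // width), set()).add(item)
    return bins

def windowed_similarity(seq_a, seq_b, width):
    """Jaccard similarity of two sequences inside each temporal bin."""
    a, b = bin_events(seq_a, width), bin_events(seq_b, width)
    out = {}
    for k in sorted(set(a) | set(b)):
        sa, sb = a.get(k, set()), b.get(k, set())
        out[k] = len(sa & sb) / len(sa | sb) if sa | sb else 0.0
    return out

seq_a = [(0.5, "x"), (1.2, "y"), (2.7, "x"), (3.1, "z")]
seq_b = [(0.8, "x"), (1.9, "y"), (2.2, "y"), (3.4, "z")]
print(windowed_similarity(seq_a, seq_b, width=1.0))
```

A per-bin similarity profile like this is exactly the kind of quantity a visual design can then track over time and across many sequences.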
Collapse
|
45
|
El-Assady M, Kehlbeck R, Collins C, Keim D, Deussen O. Semantic Concept Spaces: Guided Topic Model Refinement using Word-Embedding Projections. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1001-1011. [PMID: 31443000 DOI: 10.1109/tvcg.2019.2934654] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We present a framework that allows users to incorporate the semantics of their domain knowledge for topic model refinement while remaining model-agnostic. Our approach enables users to (1) understand the semantic space of the model, (2) identify regions of potential conflicts and problems, and (3) readjust the semantic relation of concepts based on their understanding, directly influencing the topic modeling. These tasks are supported by an interactive visual analytics workspace that uses word-embedding projections to define concept regions which can then be refined. The user-refined concepts are independent of a particular document collection and can be transferred to related corpora. All user interactions within the concept space directly affect the semantic relations of the underlying vector space model, which, in turn, change the topic modeling. In addition to direct manipulation, our system guides the users' decision-making process through recommended interactions that point out potential improvements. This targeted refinement aims at minimizing the feedback required for an efficient human-in-the-loop process. We confirm the improvements achieved through our approach in two user studies that show topic model quality improvements through our visual knowledge externalization and learning process.
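A toy sketch of the refinement loop's key operation (our own simplification, with hypothetical data): pulling a word vector toward the centroid of a user-defined concept region, which would in turn change any topic model built on those vectors:

```python
import numpy as np

def refine_word(emb, word, concept_words, strength=0.5):
    """Pull one word's vector toward the centroid of a user-defined concept,
    mimicking a single interactive refinement of the semantic space."""
    centroid = np.mean([emb[w] for w in concept_words], axis=0)
    emb[word] = (1 - strength) * emb[word] + strength * centroid
    return emb

# Hypothetical 8-dimensional embeddings for four words.
rng = np.random.default_rng(3)
emb = {w: v for w, v in zip(["court", "judge", "tennis", "racket"],
                            rng.normal(size=(4, 8)))}

# The analyst decides "court" belongs with the legal concept, not sports.
emb = refine_word(emb, "court", ["judge"], strength=0.7)
print(np.round(emb["court"], 2))
```

Because the adjustment lives in the vector space rather than in any one corpus, such refined concepts can be carried over to related document collections, as the abstract notes.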
Collapse
|
46
|
Li G, Zhang Y, Dong Y, Liang J, Zhang J, Wang J, Mcguffin MJ, Yuan X. BarcodeTree: Scalable Comparison of Multiple Hierarchies. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1022-1032. [PMID: 31545731 DOI: 10.1109/tvcg.2019.2934535] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We propose BarcodeTree (BCT), a novel visualization technique for comparing topological structures and node attribute values of multiple trees. BCT can provide an overview of one hundred shallow and stable trees simultaneously, without aggregating individual nodes. Each BCT is shown within a single row using a style similar to a barcode, allowing trees to be stacked vertically with matching nodes aligned horizontally to ease comparison and maintain space efficiency. We design several visual cues and interactive techniques to help users understand the topological structure and compare trees. In an experiment comparing two variants of BCT with icicle plots, the results suggest that BCTs make it easier to visually compare trees by reducing the vertical distance between different trees. We also present two case studies involving a dataset of hundreds of trees to demonstrate BCT's utility.
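A toy sketch of the barcode encoding (our own simplification, ignoring the tree topology that the real BCT also encodes): assign every node label a fixed column shared across all trees, then print each tree as one row so matching nodes align vertically:

```python
def barcode_rows(trees):
    """Render each tree (here reduced to a set of node labels) as one
    barcode-style row, with matching nodes aligned across all rows."""
    columns = sorted(set().union(*trees))   # one shared column per label
    header = " ".join(columns)
    rows = [" ".join("|" if c in t else "." for c in columns) for t in trees]
    return header, rows

trees = [
    {"a", "b", "c", "e"},
    {"a", "c", "d"},
    {"b", "c", "d", "e"},
]
header, rows = barcode_rows(trees)
print(header)
print("\n".join(rows))
```

Stacking the rows keeps each tree within a single line of screen space, which is what lets the technique show on the order of a hundred trees at once.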
Collapse
|
47
|
Cashman D, Perer A, Chang R, Strobelt H. Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:863-873. [PMID: 31502978 DOI: 10.1109/tvcg.2019.2934261] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The performance of deep learning models is dependent on the precise configuration of many layers and parameters. However, there are currently few systematic guidelines for how to configure a successful model. This means model builders often have to experiment with different configurations by manually programming different architectures (which is tedious and time consuming) or rely on purely automated approaches to generate and train the architectures (which is expensive). In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation of neural network architectures. In REMAP, the user explores the large and complex parameter space for neural network architectures using a combination of global inspection and local experimentation. Through a visual overview of a set of models, the user identifies interesting clusters of architectures. Based on their findings, the user can run ablation and variation experiments to identify the effects of adding, removing, or replacing layers in a given architecture and generate new models accordingly. They can also handcraft new models using a simple graphical interface. As a result, a model builder can build deep learning models quickly, efficiently, and without manual programming. We inform the design of REMAP through a design study with four deep learning model builders. Through a use case, we demonstrate that REMAP allows users to discover performant neural network architectures efficiently using visual exploration and user-defined semi-automated searches through the model space.
Collapse
|
48
|
Jardine N, Ondov BD, Elmqvist N, Franconeri S. The Perceptual Proxies of Visual Comparison. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1012-1021. [PMID: 31443016 DOI: 10.1109/tvcg.2019.2934786] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Perceptual tasks in visualizations often involve comparisons. Of two sets of values depicted in two charts, which set had values that were the highest overall? Which had the widest range? Prior empirical work found that the performance on different visual comparison tasks (e.g., "biggest delta", "biggest correlation") varied widely across different combinations of marks and spatial arrangements. In this paper, we expand upon these combinations in an empirical evaluation of two new comparison tasks: the "biggest mean" and "biggest range" between two sets of values. We used a staircase procedure to titrate the difficulty of the data comparison to assess which arrangements produced the most precise comparisons for each task. We find visual comparisons of biggest mean and biggest range are supported by some chart arrangements more than others, and that this pattern is substantially different from the pattern for other tasks. To synthesize these dissonant findings, we argue that we must understand which features of a visualization are actually used by the human visual system to solve a given task. We call these perceptual proxies. For example, when comparing the means of two bar charts, the visual system might use a "Mean length" proxy that isolates the actual lengths of the bars and then constructs a true average across these lengths. Alternatively, it might use a "Hull Area" proxy that perceives an implied hull bounded by the bars of each chart and then compares the areas of these hulls. We propose a series of potential proxies across different tasks, marks, and spatial arrangements. Simple models of these proxies can be empirically evaluated for their explanatory power by matching their performance to human performance across these marks, arrangements, and tasks. We use this process to highlight candidates for perceptual proxies that might scale more broadly to explain performance in visual comparison.
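The two candidate proxies named above are easy to operationalize; the following sketch (our own formulation) computes a "Mean length" proxy and a "Hull Area" proxy for two bar charts and shows that they can pick different winners:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mean_length(heights):
    """'Mean length' proxy: the true average of the bar lengths."""
    return float(np.mean(heights))

def hull_area(heights):
    """'Hull Area' proxy: area of the convex hull implied by the bar tops
    and the baseline."""
    xs = np.arange(len(heights))
    pts = np.concatenate([np.column_stack([xs, heights]),
                          np.column_stack([xs, np.zeros(len(heights))])])
    return float(ConvexHull(pts).volume)   # .volume is area in 2D

chart_a = [1.0, 9.0, 1.0, 9.0, 1.0]        # spiky profile
chart_b = [5.0, 5.0, 5.0, 5.0, 4.0]        # flat profile

for name, proxy in [("mean length", mean_length), ("hull area", hull_area)]:
    winner = "A" if proxy(chart_a) > proxy(chart_b) else "B"
    print(f"{name:12s}: A={proxy(chart_a):.2f} B={proxy(chart_b):.2f} -> {winner}")
```

Here the mean-length proxy favors chart B while the hull-area proxy favors chart A, illustrating how simple proxy models can be matched against human judgments to infer which features the visual system actually uses.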
Collapse
|
49
|
Walny J, Frisson C, West M, Kosminsky D, Knudsen S, Carpendale S, Willett W. Data Changes Everything: Challenges and Opportunities in Data Visualization Design Handoff. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:12-22. [PMID: 31478857 DOI: 10.1109/tvcg.2019.2934538] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Complex data visualization design projects often entail collaboration between people with different visualization-related skills. For example, many teams include both designers who create new visualization designs and developers who implement the resulting visualization software. We identify gaps between data characterization tools, visualization design tools, and development platforms that pose challenges for designer-developer teams working to create new data visualizations. While it is common for commercial interaction design tools to support collaboration between designers and developers, creating data visualizations poses several unique challenges that are not supported by current tools. In particular, visualization designers must characterize and build an understanding of the underlying data, then specify layouts, data encodings, and other data-driven parameters that will be robust across many different data values. In larger teams, designers must also clearly communicate these mappings and their dependencies to developers, clients, and other collaborators. We report observations and reflections from five large multidisciplinary visualization design projects and highlight six data-specific visualization challenges for design specification and handoff. These challenges include adapting to changing data, anticipating edge cases in data, understanding technical challenges, articulating data-dependent interactions, communicating data mappings, and preserving the integrity of data mappings across iterations. Based on these observations, we identify opportunities for future tools for prototyping, testing, and communicating data-driven designs, which might contribute to more successful and collaborative data visualization design.
Collapse
|
50
|
Chan GYY, Nonato LG, Chu A, Raghavan P, Aluru V, Silva CT. Motion Browser: Visualizing and Understanding Complex Upper Limb Movement Under Obstetrical Brachial Plexus Injuries. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 26:981-990. [PMID: 31449022 DOI: 10.1109/tvcg.2019.2934280] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The brachial plexus is a complex network of peripheral nerves that enables sensing from, and control of, the movements of the arms and hands. How muscles coordinate to generate even simple movements is still not well understood, which hinders knowledge of how best to treat patients with this type of peripheral nerve injury. To acquire enough information for medical data analysis, physicians conduct motion analysis assessments with patients, producing a rich dataset of electromyographic signals from multiple muscles recorded alongside joint movements during real-world tasks. However, tools for analyzing and visualizing these data in a succinct and interpretable manner are currently not available. Without the ability to integrate, compare, and compute multiple data sources in one platform, physicians can compute only simple statistical values that describe a patient's behavior vaguely, which limits the possibility of answering clinical questions and generating hypotheses for research. To address this challenge, we have developed MOTION BROWSER, an interactive visual analytics system which provides an efficient framework to extract and compare muscle activity patterns from the patient's limbs, and coordinated views to help users analyze muscle signals, motion data, and video information across different tasks. The system was developed as a result of a collaborative endeavor between computer scientists and physicians in orthopedic surgery and rehabilitation. We present case studies showing how physicians can utilize the information displayed to understand how individuals coordinate their muscles, initiate appropriate treatment, and generate new hypotheses for future research.
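As a hint of the kind of signal processing such a system builds on (a standard EMG preprocessing step, not Motion Browser's actual pipeline), here is a sketch of a root-mean-square envelope that summarizes muscle activity over time:

```python
import numpy as np

def rms_envelope(signal, fs, window_ms=100):
    """Root-mean-square envelope: a standard summary of EMG muscle activity."""
    w = max(1, int(fs * window_ms / 1000))
    padded = np.pad(signal**2, (w // 2, w - w // 2 - 1), mode="edge")
    kernel = np.ones(w) / w
    # Moving average of the squared signal, then square root.
    return np.sqrt(np.convolve(padded, kernel, mode="valid"))

rng = np.random.default_rng(4)
fs = 1000                                  # 1 kHz sampling rate
t = np.arange(2 * fs) / fs
burst = (t > 0.5) & (t < 1.2)              # simulated muscle activation burst
emg = rng.normal(scale=np.where(burst, 1.0, 0.1))
env = rms_envelope(emg, fs)
print("resting level:", env[:400].mean().round(3),
      "burst level:", env[600:1100].mean().round(3))
```

Envelopes like this, computed per muscle, are the sort of derived patterns a coordinated-view system can then align with motion data and video for comparison.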
Collapse
|