1. Zhou H, Lai P, Sun Z, Chen X, Chen Y, Wu H, Wang Y. AdaMotif: Graph Simplification via Adaptive Motif Design. IEEE Transactions on Visualization and Computer Graphics 2025; 31:688-698. PMID: 39255161. DOI: 10.1109/tvcg.2024.3456321.
Abstract
With the increase of graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information as possible from the subgraphs while simplifying the graph efficiently. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using real-world graphs to validate the effectiveness of our proposed approach. The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.
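As a rough illustration of the partition, cluster, and replace pipeline sketched in this abstract, the snippet below (an assumption-laden sketch using networkx and scikit-learn, not the authors' implementation) partitions a graph into communities, clusters them by simple structural features, and collapses each community into a single motif node.

```python
# Illustrative sketch of the partition -> cluster -> replace pipeline described
# in the AdaMotif abstract; not the authors' implementation. The feature choices
# and clustering method here are assumptions for demonstration only.
import networkx as nx
from networkx.algorithms import community
from sklearn.cluster import KMeans
import numpy as np

def simplify_graph(G, n_clusters=4):
    # 1. Partition the large graph into subgraphs (here: modularity communities).
    parts = community.greedy_modularity_communities(G)
    subgraphs = [G.subgraph(p).copy() for p in parts]

    # 2. Describe each subgraph with simple structural features.
    feats = np.array([[sg.number_of_nodes(),
                       sg.number_of_edges(),
                       nx.density(sg)] for sg in subgraphs])

    # 3. Cluster similar subgraphs; each cluster would get one adaptive motif.
    labels = KMeans(n_clusters=min(n_clusters, len(subgraphs)), n_init=10).fit_predict(feats)

    # 4. Replace each subgraph with a single "motif" node, keeping inter-community edges.
    simplified = nx.Graph()
    node_to_part = {n: i for i, p in enumerate(parts) for n in p}
    for i, sg in enumerate(subgraphs):
        simplified.add_node(i, motif_cluster=int(labels[i]), size=sg.number_of_nodes())
    for u, v in G.edges():
        pu, pv = node_to_part[u], node_to_part[v]
        if pu != pv:
            simplified.add_edge(pu, pv)
    return simplified

if __name__ == "__main__":
    G = nx.les_miserables_graph()
    print(simplify_graph(G))
```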
2. Jackson J, Ritsos PD, Butcher PWS, Roberts JC. Path-Based Design Model for Constructing and Exploring Alternative Visualisations. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1158-1168. PMID: 39255171. DOI: 10.1109/tvcg.2024.3456323.
Abstract
We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene; for example, designs that share similar traits are created by making small incremental changes to the gene. Our design methodology fosters the generation of diverse creative concepts, space-filling visualisations, and traditional formats like bar charts, circular plots, and pie charts. Through our implementation we showcase the model in action. As an example application, we integrate the output visualisations onto a smartwatch and into visualisation dashboards. In this article we (1) introduce, define, and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.
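The flowpath-and-gene idea above can be pictured with a small data-structure sketch; the classes and fields below are illustrative assumptions, not the authors' API.

```python
# A minimal sketch of the flowpath/gene idea from the abstract above: a skeleton
# path, a "gene" of visual properties, and a mapping of data values onto the path.
# All names and fields are illustrative assumptions, not the authors' model.
from dataclasses import dataclass, field
import math

@dataclass
class Gene:
    mark: str = "bar"            # visual mark drawn at each step
    color: str = "#4477aa"       # base colour
    thickness: float = 8.0       # mark size in pixels

@dataclass
class Flowpath:
    points: list = field(default_factory=list)   # (x, y) skeleton vertices

    @classmethod
    def circle(cls, n, radius=100.0):
        pts = [(radius * math.cos(2 * math.pi * i / n),
                radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
        return cls(points=pts)

def map_data(flowpath, gene, values):
    """Attach one data value to each flowpath vertex, scaled by the gene."""
    vmax = max(values) or 1.0
    return [{"pos": p, "mark": gene.mark, "color": gene.color,
             "size": gene.thickness * v / vmax}
            for p, v in zip(flowpath.points, values)]

# Swapping gene values yields alternative designs over the same flowpath.
design_a = map_data(Flowpath.circle(6), Gene(mark="bar"), [3, 5, 2, 8, 4, 6])
design_b = map_data(Flowpath.circle(6), Gene(mark="dot", thickness=4.0), [3, 5, 2, 8, 4, 6])
print(design_a[0], design_b[0])
```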
3. Laamoumi M, Hendriks T, Chamberland M. A taxonomic guide to diffusion MRI tractography visualization tools. NMR in Biomedicine 2025; 38:e5267. PMID: 39375843. PMCID: PMC11631367. DOI: 10.1002/nbm.5267.
Abstract
Visualizing neuroimaging data is a key step in evaluating data quality, interpreting results, and communicating findings. This survey focuses on diffusion MRI tractography, which has been widely used in both research and clinical domains within the neuroimaging community. With an increasing number of tractography tools and software, navigating this landscape poses a challenge, especially for newcomers. A systematic exploration of a diverse range of features is proposed across 27 research tools, delving into their main purpose and examining the presence or absence of prevalent visualization and interactive techniques. The findings are structured within a proposed taxonomy, providing a comprehensive overview. Insights derived from this analysis will help (novice) researchers, clinicians, and developers in identifying knowledge gaps and navigating the landscape of tractography visualization tools.
Affiliation(s)
- Miriam Laamoumi: Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands; Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Tom Hendriks: Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
- Maxime Chamberland: Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
4. Peper JMJ, Kalivas JH. Redefining Spectral Data Analysis with Immersive Analytics: Exploring Domain-Shifted Model Spaces for Optimal Model Selection. Applied Spectroscopy 2024: 37028241280669. PMID: 39340333. DOI: 10.1177/00037028241280669.
Abstract
Modern developments in autonomous chemometric machine learning technology strive to relinquish the need for human intervention. However, such algorithms developed and used in chemometric multivariate calibration and classification applications exclude crucial expert insight when difficult and safety-critical analysis situations arise, e.g., spectral-based medical decisions such as noninvasively determining whether a biopsy is cancerous. The prediction accuracy and interpolation capabilities of autonomous methods for new samples depend on the quality and scope of their training (calibration) data. Specifically, analysis patterns within target data not captured by the training data will produce undesirable outcomes. Alternatively, using an immersive analytic approach allows insertion of human expert judgment at key machine learning algorithm junctures, forming a sensemaking process performed in cooperation with a computer. The capacity of immersive virtual reality (IVR) environments to render human-comprehensible three-dimensional space simulating real-world encounters suggests its suitability as a hybrid immersive human-computer interface for data analysis tasks. Using IVR maximizes human senses to capitalize on our instinctual perception of the physical environment, thereby leveraging our innate ability to recognize patterns and visualize thresholds crucial to reducing erroneous outcomes. In this first use of IVR as an immersive analytic tool for spectral data, we examine an integrated IVR real-time model selection algorithm for a recent model updating method that adapts a model from the original calibration domain to predict samples from shifted target domains. Using near-infrared data, analyte prediction errors from IVR-selected models are reduced compared to errors using an established autonomous model selection approach. Results demonstrate the viability of IVR as a human data analysis interface for spectral data analysis, including classification problems.
Affiliation(s)
- Jordan M J Peper: Department of Chemistry, Idaho State University, Pocatello, Idaho, USA
- John H Kalivas: Department of Chemistry, Idaho State University, Pocatello, Idaho, USA
5. Dennig FL, Miller M, Keim DA, El-Assady M. FS/DS: A Theoretical Framework for the Dual Analysis of Feature Space and Data Space. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5165-5182. PMID: 37342951. DOI: 10.1109/tvcg.2023.3288356.
Abstract
With the surge of data-driven analysis techniques, there is a rising demand for enhancing the exploration of large high-dimensional data by enabling interactions for the joint analysis of features (i.e., dimensions). Such a dual analysis of the feature space and data space is characterized by three components: (1) a view visualizing feature summaries, (2) a view visualizing the data records, and (3) a bidirectional linking of the two views triggered by human interaction in either visualization, e.g., linking & brushing. Dual analysis approaches span many domains, e.g., medicine, crime analysis, and biology. The proposed solutions encapsulate various techniques, such as feature selection or statistical analysis. However, each approach establishes a new definition of dual analysis. To address this gap, we systematically reviewed published dual analysis methods to investigate and formalize the key elements, such as the techniques used to visualize the feature space and data space, as well as the interaction between both spaces. From the information elicited during our review, we propose a unified theoretical framework for dual analysis that encompasses all existing approaches and extends the field. We apply our proposed formalization to describe the interactions between the components and relate them to the addressed tasks. Additionally, we categorize the existing approaches using our framework and derive future research directions to advance dual analysis by including state-of-the-art visual analysis techniques to improve data exploration.
6. Xiong W, Yu C, Shi C, Zheng Y, Wang X, Hu Y, Yin H, Li C, Wang C. V4RIN: visual analysis of regional industry network with domain knowledge. Vis Comput Ind Biomed Art 2024; 7:11. PMID: 38748079. PMCID: PMC11096142. DOI: 10.1186/s42492-024-00164-9.
Abstract
The regional industry network (RIN) is a type of financial network derived from industry networks that can describe the connections between specific industries within a particular region. For most investors and financial analysts lacking extensive experience, the decision-support information provided by industry networks may be too vague. Conversely, RINs express more detailed and specific industry connections both within and outside the region. As RIN analysis is domain-specific and current financial network analysis tools are designed for generalized analytical tasks and cannot be directly applied to RINs, new visual analysis approaches are needed to enhance information exploration efficiency. In this study, we collaborated with domain experts and proposed V4RIN, an interactive visualization analysis system that integrates predefined domain knowledge and data processing methods to support users in uploading custom data. Through multiple views in the system panel, users can comprehensively explore the structure, geographical distribution, and spatiotemporal variations of the RIN. Two case studies and a set of interviews with five domain experts were conducted to validate the usability and reliability of our system.
Affiliation(s)
- Wenli Xiong: School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
- Chenjie Yu: School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
- Chen Shi: School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
- Yaxuan Zheng: School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
- Xiping Wang: China Fortune Securities Co., Ltd, Shanghai, 200030, China
- Yanpeng Hu: Shanghai Chinafortune Co., Ltd, Shanghai, 200030, China
- Hong Yin: Faculty of Economics and Management, East China Normal University, Shanghai, 200062, China
- Chenhui Li: School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
- Changbo Wang: School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
7. Elmquist E, Enge K, Rind A, Navarra C, Höldrich R, Iber M, Bock A, Ynnerman A, Aigner W, Rönnberg N. Parallel Chords: an audio-visual analytics design for parallel coordinates. Personal and Ubiquitous Computing 2024; 28:657-676. PMID: 39553444. PMCID: PMC11567997. DOI: 10.1007/s00779-024-01795-8.
Abstract
One of the commonly used visualization techniques for multivariate data is the parallel coordinates plot. It provides users with a visual overview of multivariate data and the possibility to interactively explore it. While pattern recognition is a strength of the human visual system, it is also a strength of the auditory system. Inspired by the integration of visual and auditory perception in everyday life, we introduce an audio-visual analytics design named Parallel Chords combining both visual and auditory displays. Parallel Chords lets users explore multivariate data using both visualization and sonification through interaction with the axes of a parallel coordinates plot. To illustrate the potential of the design, we present (1) prototypical data patterns where the sonification helps with the identification of correlations, clusters, and outliers, (2) a usage scenario showing the sonification of data from non-adjacent axes, and (3) a controlled experiment on the sensitivity thresholds of participants when distinguishing the strength of correlations. During this controlled experiment, 35 participants used three different display types, the visualization, the sonification, and the combination of these, to identify the strongest out of three correlations. The results show that all three display types enabled the participants to identify the strongest correlation, with visualization resulting in the best sensitivity. The sonification resulted in sensitivities that were independent of the type of displayed correlation, and the combination resulted in increased enjoyability during usage. Supplementary Information: The online version contains supplementary material available at 10.1007/s00779-024-01795-8.
Affiliation(s)
- Kajetan Enge: St. Pölten University of Applied Sciences, St. Pölten, Austria; University of Music and Performing Arts Graz, Graz, Austria
- Alexander Rind: St. Pölten University of Applied Sciences, St. Pölten, Austria
- Michael Iber: St. Pölten University of Applied Sciences, St. Pölten, Austria
- Wolfgang Aigner: St. Pölten University of Applied Sciences, St. Pölten, Austria
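One way to picture the sonification idea from the Parallel Chords abstract above is to map the correlation strength between two axes to the pitch of a short tone; the sketch below is a minimal illustration under that assumption, not the published design.

```python
# Illustrative sketch (not the Parallel Chords implementation): map the Pearson
# correlation between two axes of a parallel coordinates plot to the pitch of a
# short sine tone, so stronger correlations sound higher.
import numpy as np

def correlation_tone(x, y, sr=44100, duration=0.5, f_low=220.0, f_high=880.0):
    r = abs(np.corrcoef(x, y)[0, 1])           # correlation strength in [0, 1]
    freq = f_low + r * (f_high - f_low)        # linear pitch mapping (assumed design)
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    tone = 0.3 * np.sin(2.0 * np.pi * freq * t)
    return freq, tone.astype(np.float32)       # play with e.g. sounddevice.play(tone, sr)

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 0.9 * a + 0.1 * rng.normal(size=200)       # strongly correlated pair of axes
print("mapped frequency: %.1f Hz" % correlation_tone(a, b)[0])
```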
8. Zhao H, Bryant GW, Griffin W, Terrill JE, Chen J. Evaluating Glyph Design for Showing Large-Magnitude-Range Quantum Spins. IEEE Transactions on Visualization and Computer Graphics 2024; 30:1868-1884. PMID: 37015635. DOI: 10.1109/tvcg.2022.3232591.
Abstract
We present experimental results to explore a form of bivariate glyphs for representing large-magnitude-range vectors. The glyphs meet two conditions: (1) two visual dimensions are separable; and (2) one of the two visual dimensions uses a categorical representation (e.g., a categorical colormap). We evaluate how much these two conditions determine the bivariate glyphs' effectiveness. The first experiment asks participants to perform three local tasks requiring reading no more than two glyphs. The second experiment scales up the search space in global tasks in which participants must look at the entire scene of hundreds of vector glyphs to get an answer. Our results show that the first condition is necessary for local tasks in which a few items are compared, but it is not sufficient for understanding a large amount of data. The second condition is necessary for perceiving global structures when examining very complex datasets. Participants' comments reveal that the categorical features in the bivariate glyphs trigger emergent, optimal viewing behaviors. This work contributes to perceptually accurate glyph representations for revealing patterns from large scientific results. We release source code, quantum physics data, training documents, participants' answers, and statistical analyses for reproducible science at https://osf.io/4xcf5/?view_only=94123139df9c4ac984a1e0df811cd580.
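A minimal way to satisfy the two conditions above (separable visual dimensions, one of them categorical) is to split each magnitude into a power-of-ten exponent shown with a categorical colour and a mantissa shown as glyph length. The colours and scales below are assumptions for illustration, not the study's stimuli.

```python
# Sketch of a separable bivariate encoding for large-magnitude-range values:
# the exponent becomes a categorical colour, the mantissa becomes glyph length.
# The colormap and length scale are illustrative assumptions.
import math

EXPONENT_COLORS = {-2: "#2166ac", -1: "#67a9cf", 0: "#f7f7f7",
                    1: "#ef8a62", 2: "#b2182b"}   # categorical colormap per exponent

def bivariate_glyph(magnitude, max_len=30.0):
    exponent = int(math.floor(math.log10(magnitude)))   # categorical part
    mantissa = magnitude / (10.0 ** exponent)            # in [1, 10)
    length = max_len * mantissa / 10.0                   # continuous part
    color = EXPONENT_COLORS.get(exponent, "#999999")
    return {"length": length, "color": color, "exponent": exponent}

for m in (0.03, 0.7, 4.2, 58.0, 310.0):
    print(m, bivariate_glyph(m))
```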
9. Ying L, Shu X, Deng D, Yang Y, Tang T, Yu L, Wu Y. MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization. IEEE Transactions on Visualization and Computer Graphics 2023; 29:331-341. PMID: 36179002. DOI: 10.1109/tvcg.2022.3209447.
Abstract
Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.
10. Pandey A, Syeda UH, Shah C, Guerra-Gomez JA, Borkin MA. A State-of-the-Art Survey of Tasks for Tree Design and Evaluation With a Curated Task Dataset. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3563-3584. PMID: 33667165. DOI: 10.1109/tvcg.2021.3064037.
Abstract
In the field of information visualization, the concept of "tasks" is an essential component of theories and methodologies for how a visualization researcher or a practitioner understands what tasks a user needs to perform and how to approach the creation of a new design. In this article, we focus on the collection of tasks for tree visualizations, a common visual encoding in many domains ranging from biology to computer science to geography. In spite of their commonality, no prior efforts exist to collect and abstractly define tree visualization tasks. We present a literature review of tree visualization articles and generate a curated dataset of over 200 tasks. To enable effective task abstraction for trees, we also contribute a novel extension of the Multi-Level Task Typology to include more specificity to support tree-specific tasks as well as a systematic procedure to conduct task abstractions for tree visualizations. All tasks in the dataset were abstracted with the novel typology extension and analyzed to gain a better understanding of the state of tree visualizations. These abstracted tasks can benefit visualization researchers and practitioners as they design evaluation studies or compare their analytical tasks with ones previously studied in the literature to make informed decisions about their design. We also reflect on our novel methodology and advocate more broadly for the creation of task-based knowledge repositories for different types of visualizations. The Supplemental Material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2021.3064037, will be maintained on OSF: https://osf.io/u5ehs/.
11. Khawatmi M, Steux Y, Zourob S, Sailem HZ. ShapoGraphy: A User-Friendly Web Application for Creating Bespoke and Intuitive Visualisation of Biomedical Data. Frontiers in Bioinformatics 2022; 2:788607. PMID: 36304310. PMCID: PMC9580894. DOI: 10.3389/fbinf.2022.788607.
Abstract
Effective visualisation of quantitative microscopy data is crucial for interpreting and discovering new patterns from complex bioimage data. Existing visualisation approaches, such as bar charts, scatter plots, and heat maps, do not accommodate the complexity of visual information present in microscopy data. Here we develop ShapoGraphy, a first-of-its-kind method accompanied by an interactive web-based application for creating customisable quantitative pictorial representations to facilitate the understanding and analysis of image datasets (www.shapography.com). ShapoGraphy enables the user to create a structure of interest as a set of shapes. Each shape can encode different variables that are mapped to the shape dimensions, colours, symbols, or outline. We illustrate the utility of ShapoGraphy using various image data, including high-dimensional multiplexed data. Our results show that ShapoGraphy allows a better understanding of cellular phenotypes and relationships between variables. In conclusion, ShapoGraphy supports scientific discovery and communication by providing a rich vocabulary to create engaging and intuitive representations of diverse data types.
Affiliation(s)
- Heba Z. Sailem: Institute of Biomedical Engineering, Department of Engineering, University of Oxford, Oxford, United Kingdom
12. Firat EE, Joshi A, Laramee RS, Sousa Santos B, Alford G. VisLitE: Visualization Literacy and Evaluation. IEEE Computer Graphics and Applications 2022; 42:99-107. PMID: 35671276. DOI: 10.1109/mcg.2022.3161767.
Abstract
With the widespread advent of visualization techniques to convey complex data, visualization literacy (VL) is growing in importance. Two noteworthy facets of literacy are user understanding and the discovery of visual patterns with the help of graphical representations. The research literature on VL provides useful guidance and opportunities for further studies in this field. This introduction summarizes and presents research on VL that examines how well users understand basic and advanced data representations. To the best of our knowledge, this is the first tutorial article on interactive VL. We organize the evaluations in existing relevant research into distinct subject groups that facilitate and inform comparisons across the literacy literature and provide a starting point for interested readers. In addition, the introduction provides an overview of the various evaluation techniques used in this field of research and their challenging nature. Our introduction provides researchers with unexplored directions that may lead to future work. This starting point serves as a valuable resource for beginners interested in the topic of VL.
13. Ying L, Tang T, Luo Y, Shen L, Xie X, Yu L, Wu Y. GlyphCreator: Towards Example-based Automatic Generation of Circular Glyphs. IEEE Transactions on Visualization and Computer Graphics 2022; 28:400-410. PMID: 34596552. DOI: 10.1109/tvcg.2021.3114877.
Abstract
Circular glyphs are used across disparate fields to represent multidimensional data. However, although these glyphs are extremely effective, creating them is often laborious, even for those with professional design skills. This paper presents GlyphCreator, an interactive tool for the example-based generation of circular glyphs. Given an example circular glyph and multidimensional input data, GlyphCreator promptly generates a list of design candidates, any of which can be edited to satisfy the requirements of a particular representation. To develop GlyphCreator, we first derive a design space of circular glyphs by summarizing relationships between different visual elements. With this design space, we build a circular glyph dataset and develop a deep learning model for glyph parsing. The model can deconstruct a circular glyph bitmap into a series of visual elements. Next, we introduce an interface that helps users bind the input data attributes to visual elements and customize visual styles. We evaluate the parsing model through a quantitative experiment, demonstrate the use of GlyphCreator through two use scenarios, and validate its effectiveness through user interviews.
14. Narechania A, Karduni A, Wesslen R, Wall E. VITALITY: Promoting Serendipitous Discovery of Academic Literature with Transformers & Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2022; 28:486-496. PMID: 34587054. DOI: 10.1109/tvcg.2021.3114820.
Abstract
There are a few prominent practices for conducting reviews of academic literature, including searching for specific keywords on Google Scholar or checking citations from some initial seed paper(s). These approaches serve a critical purpose for academic literature reviews, yet there remain challenges in identifying relevant literature when similar work may utilize different terminology (e.g., mixed-initiative visual analytics papers may not use the same terminology as papers on model-steering, yet the two topics are relevant to one another). In this paper, we introduce a system, VITALITY, intended to complement existing practices. In particular, VITALITY promotes serendipitous discovery of relevant literature using transformer language models, allowing users to find semantically similar papers in a word embedding space given (1) a list of input paper(s) or (2) a working abstract. VITALITY visualizes this document-level embedding space in an interactive 2-D scatterplot using dimension reduction. VITALITY also summarizes meta information about the document corpus or search query, including keywords and co-authors, and allows users to save and export papers for use in a literature review. We present qualitative findings from an evaluation of VITALITY, suggesting it can be a promising complementary technique for conducting academic literature reviews. Furthermore, we contribute data from 38 popular data visualization publication venues in VITALITY, and we provide scrapers for the open-source community to continue to grow the list of supported venues.
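In the spirit of this abstract, the sketch below embeds a handful of abstracts with a transformer model, ranks them against a query by cosine similarity, and projects them to 2-D; the model name and libraries are assumptions, not the VITALITY codebase.

```python
# Generic sketch of transformer-based paper embedding and 2-D projection,
# illustrating the idea described above. Model and libraries are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.manifold import TSNE
import numpy as np

abstracts = [
    "Mixed-initiative visual analytics for model steering.",
    "Interactive machine learning with human feedback.",
    "Glyph design for multivariate geospatial data.",
    "A survey of treemap layout algorithms.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")        # assumed, commonly used model
emb = model.encode(abstracts, normalize_embeddings=True)

# Cosine similarity to a query abstract surfaces semantically related papers,
# even when they do not share keywords with the query.
query = model.encode(["Steering models through visual interaction."],
                     normalize_embeddings=True)
scores = emb @ query.T
print("most similar:", abstracts[int(np.argmax(scores))])

# Project the document embeddings to 2-D for an interactive scatterplot.
xy = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(emb)
print(xy.shape)
```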
15. Brehmer M, Kosara R, Hull C. Generative Design Inspiration for Glyphs with Diatoms. IEEE Transactions on Visualization and Computer Graphics 2022; 28:389-399. PMID: 34587035. DOI: 10.1109/tvcg.2021.3114792.
Abstract
We introduce Diatoms, a technique that generates design inspiration for glyphs by sampling from palettes of mark shapes, encoding channels, and glyph scaffold shapes. Diatoms allows for a degree of randomness while respecting constraints imposed by columns in a data table: their data types and domains as well as semantic associations between columns as specified by the designer. We pair this generative design process with two forms of interactive design externalization that enable comparison and critique of the design alternatives. First, we incorporate a familiar small multiples configuration in which every data point is drawn according to a single glyph design, coupled with the ability to page between alternative glyph designs. Second, we propose a small permutables design gallery, in which a single data point is drawn according to each alternative glyph design, coupled with the ability to page between data points. We demonstrate an implementation of our technique as an extension to Tableau featuring three example palettes, and to better understand how Diatoms could fit into existing design workflows, we conducted interviews and chauffeured demos with 12 designers. Finally, we reflect on our process and the designers' reactions, discussing the potential of our technique in the context of visualization authoring systems. Ultimately, our approach to glyph design and comparison can kickstart and inspire visualization design, allowing for the serendipitous discovery of shape and channel combinations that would have otherwise been overlooked.
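The palette-sampling idea can be sketched as random mark/channel choices constrained by each column's data type, as below; the palettes and constraint rules are invented for illustration and are not the Diatoms implementation.

```python
# Sketch of palette-based glyph-design sampling in the spirit of Diatoms:
# random mark/channel choices constrained by each column's data type.
# Palettes and constraint rules here are invented for illustration.
import random

MARKS = ["circle", "triangle", "petal", "tick"]
CHANNELS = {"quantitative": ["length", "area", "angle"],
            "categorical":  ["hue", "shape"],
            "ordinal":      ["lightness", "size"]}
SCAFFOLDS = ["radial", "grid", "ring"]

def sample_glyph_design(columns, seed=None):
    """columns: list of (name, data_type) tuples from the data table."""
    rng = random.Random(seed)
    design = {"scaffold": rng.choice(SCAFFOLDS), "encodings": []}
    used = set()
    for name, dtype in columns:
        # Prefer channels not yet used, respecting the column's data type.
        options = [c for c in CHANNELS[dtype] if c not in used] or CHANNELS[dtype]
        channel = rng.choice(options)
        used.add(channel)
        design["encodings"].append({"column": name, "mark": rng.choice(MARKS),
                                    "channel": channel})
    return design

cols = [("price", "quantitative"), ("region", "categorical"), ("rating", "ordinal")]
for seed in range(3):                      # three alternative designs to compare
    print(sample_glyph_design(cols, seed))
```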
16. RallyComparator: visual comparison of the multivariate and spatial stroke sequence in table tennis rally. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00772-0.
17. Rees D, Laramee RS, Brookes P, D'Cruze T, Smith GA, Miah A. AgentVis: Visual Analysis of Agent Behavior With Hierarchical Glyphs. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3626-3643. PMID: 32305921. DOI: 10.1109/tvcg.2020.2985923.
Abstract
Glyphs representing complex behavior provide a useful and common means of visualizing multivariate data. However, due to their complex shape, overlapping and occlusion of glyphs are a common and prominent limitation. This limits the number of discrete data tuples that can be displayed in a given image. In our real-world application, glyphs are used to depict agent behavior in a call center. However, many call centers feature thousands of agents, and a standard approach of representing every agent with its own glyph does not scale. To accommodate a visualization incorporating thousands of glyphs, we cluster overlapping glyphs into a single parent glyph. This hierarchical glyph represents the mean value of all child agent glyphs, removing overlap and reducing visual clutter. Multivariate clustering techniques are explored and developed in collaboration with domain experts in the call center industry. We implement dynamic control of glyph clusters according to zoom level and customized distance metrics, to utilize image space with reduced overplotting and cluttering. We demonstrate our technique with examples and a usage scenario using real-world call-center data to visualize thousands of call center agents, revealing insight into their behavior and reporting feedback from expert call-center analysts.
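A simplified sketch of the clustering idea, assuming a grid whose cell size depends on zoom level, is shown below; it is illustrative only and not the AgentVis implementation.

```python
# Sketch (not the AgentVis implementation) of collapsing overlapping agent glyphs
# into parent glyphs that show the mean of their children, with grid cell size
# driven by the current zoom level.
from collections import defaultdict
import numpy as np

def cluster_glyphs(positions, values, zoom, base_cell=64.0):
    """positions: (n, 2) screen coordinates; values: (n, d) agent metrics."""
    cell = base_cell / max(zoom, 1e-6)               # finer cells when zoomed in
    buckets = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        buckets[(int(x // cell), int(y // cell))].append(i)
    parents = []
    for idx in buckets.values():
        parents.append({"pos": positions[idx].mean(axis=0),
                        "value": values[idx].mean(axis=0),   # parent = mean of children
                        "count": len(idx)})
    return parents

rng = np.random.default_rng(1)
pos = rng.uniform(0, 512, size=(2000, 2))
vals = rng.uniform(0, 1, size=(2000, 3))
print(len(cluster_glyphs(pos, vals, zoom=1.0)), "glyphs at zoom 1")
print(len(cluster_glyphs(pos, vals, zoom=4.0)), "glyphs at zoom 4")
```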
18. From a Low-Cost Air Quality Sensor Network to Decision Support Services: Steps towards Data Calibration and Service Development. Sensors 2021; 21:s21093190. PMID: 34062961. PMCID: PMC8124547. DOI: 10.3390/s21093190.
Abstract
Air pollution is a widespread problem due to its impact on both humans and the environment. Providing decision makers with artificial intelligence based solutions requires monitoring the ambient air quality accurately and in a timely manner, as AI models depend heavily on the underlying data used to justify their predictions. Unfortunately, in urban contexts the hyper-locality of air quality, varying from street to street, makes it difficult to monitor with high-end sensors, as the number of such sensors needed for local measurements would be prohibitively expensive. In addition, development of pollution dispersion models is challenging. The deployment of a low-cost sensor network allows a denser coverage of a region, but at the cost of noisier sensing. This paper describes the development and deployment of a low-cost sensor network, discussing its challenges and applications; it is motivated by talks with the local municipality and the exploration of new technologies to improve air quality related services. However, before using data from these sources, calibration procedures are needed to ensure that the quality of the data is at a good level. We describe our steps towards developing calibration models and how they benefit the applications identified as important in the talks with the municipality.
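A minimal calibration sketch, assuming co-located low-cost and reference measurements, is to regress the reference values on the raw readings plus temperature and humidity; the example below uses synthetic data and illustrative variable names, not the paper's models.

```python
# Minimal calibration sketch on synthetic data: fit a regression that corrects
# low-cost PM2.5 readings using temperature and humidity as covariates.
# Variable names and the noise model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 500
temp = rng.uniform(5, 30, n)
hum = rng.uniform(30, 90, n)
true_pm25 = rng.gamma(shape=2.0, scale=8.0, size=n)                          # reference instrument
raw_pm25 = 1.4 * true_pm25 + 0.2 * hum - 0.1 * temp + rng.normal(0, 2, n)   # low-cost sensor

X = np.column_stack([raw_pm25, temp, hum])
model = LinearRegression().fit(X[:400], true_pm25[:400])   # train on the first 400 samples
pred = model.predict(X[400:])

print("MAE before calibration: %.2f" % mean_absolute_error(true_pm25[400:], raw_pm25[400:]))
print("MAE after calibration:  %.2f" % mean_absolute_error(true_pm25[400:], pred))
```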
19. Horak T, Berger P, Schumann H, Dachselt R, Tominski C. Responsive Matrix Cells: A Focus+Context Approach for Exploring and Editing Multivariate Graphs. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1644-1654. PMID: 33074814. DOI: 10.1109/tvcg.2020.3030371.
Abstract
Matrix visualizations are a useful tool to provide a general overview of a graph's structure. For multivariate graphs, a remaining challenge is to cope with the attributes that are associated with nodes and edges. Addressing this challenge, we propose responsive matrix cells as a focus+context approach for embedding additional interactive views into a matrix. Responsive matrix cells are local zoomable regions of interest that provide auxiliary data exploration and editing facilities for multivariate graphs. They behave responsively by adapting their visual contents to the cell location, the available display space, and the user task. Responsive matrix cells enable users to reveal details about the graph, compare node and edge attributes, and edit data values directly in a matrix without resorting to external views or tools. We report the general design considerations for responsive matrix cells covering the visual and interactive means necessary to support a seamless data exploration and editing. Responsive matrix cells have been implemented in a web-based prototype based on which we demonstrate the utility of our approach. We describe a walk-through for the use case of analyzing a graph of soccer players and report on insights from a preliminary user feedback session.
20. Ivson P, Moreira A, Queiroz F, Santos W, Celes W. A Systematic Review of Visualization in Building Information Modeling. IEEE Transactions on Visualization and Computer Graphics 2020; 26:3109-3127. PMID: 30932840. DOI: 10.1109/tvcg.2019.2907583.
Abstract
Building Information Modeling (BIM) employs data-rich 3D CAD models for large-scale facility design, construction, and operation. These complex datasets contain a large amount and variety of information, ranging from design specifications to real-time sensor data. They are used by architects and engineers for various analyses and simulations throughout a facility's life cycle. Many techniques from different visualization fields could be used to analyze these data. However, the BIM domain still remains largely unexplored by the visualization community. The goal of this article is to encourage visualization researchers to increase their involvement with BIM. To this end, we present the results of a systematic review of visualization in current BIM practice. We use a novel taxonomy to identify main application areas and analyze commonly employed techniques. From this domain characterization, we highlight future research opportunities brought forth by the unique features of BIM, for instance, exploring the synergies between scientific and information visualization to integrate spatial and non-spatial data. We hope this article raises awareness of interesting new challenges the BIM domain brings to the visualization community.
21.
Abstract
The Treemap is one of the most relevant information visualization (InfoVis) techniques to support the analysis of large hierarchical data structures or data clusters. Despite that, Treemaps still present some challenges for data representation, such as the few options for visual data mappings and the inability to represent zero and negative values. Additionally, visualizing high-dimensional data requires many hierarchies, which can impair data visualization. Thus, this paper proposes adding layered glyphs to Treemap items to mitigate these issues. Layered glyphs are composed of N partially visible layers, and each layer maps one data dimension to a visual variable. Since the area of the upper layers is always smaller than that of the bottom ones, the layers can be stacked to compose a multidimensional glyph. To validate this proposal, we conducted a user study to compare three scenarios of visual data mappings for Treemaps: only Glyphs (G), Glyphs and Hierarchy (GH), and only Hierarchy (H). Thirty-six volunteers with a background in InfoVis techniques, organized into three groups of twelve (one group per scenario), performed 8 InfoVis tasks using only one of the proposed scenarios. The results indicate that scenario GH presented the best accuracy while having a task-solving time similar to scenario H, which suggests that representing more data in Treemaps with layered glyphs enriched the Treemap visualization capabilities without impairing data readability.
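The layered-glyph construction can be sketched as follows, with each data dimension drawn as one layer whose size stays within its own band so upper layers never exceed the ones below; the scaling rules are assumptions for illustration, not the paper's exact geometry.

```python
# Sketch of a layered glyph for one treemap item: each data dimension becomes one
# layer, and each layer's size is confined to its own band so upper layers stay
# smaller than the layers beneath them. Band and scaling rules are assumptions.
def layered_glyph(item_rect, values, colors):
    """item_rect: (x, y, w, h) of the treemap item; values: one number per layer."""
    x, y, w, h = item_rect
    n = len(values)
    vmax = max(values) or 1.0
    layers = []
    for i, (v, c) in enumerate(zip(values, colors)):
        lo, hi = (n - 1 - i) / n, (n - i) / n            # size band for layer i
        scale = lo + (hi - lo) * (0.2 + 0.8 * v / vmax)  # value modulates size in the band
        lw, lh = w * scale, h * scale
        layers.append({"x": x + (w - lw) / 2, "y": y + (h - lh) / 2,
                       "w": lw, "h": lh, "color": c, "value": v})
    return layers                                        # draw bottom-to-top in this order

print(layered_glyph((0, 0, 100, 100),
                    values=[40, 10, 25],
                    colors=["#66c2a5", "#fc8d62", "#8da0cb"]))
```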
22. Johnson S, Samsel F, Abram G, Olson D, Solis AJ, Herman B, Wolfram PJ, Lenglet C, Keefe DF. Artifact-Based Rendering: Harnessing Natural and Traditional Visual Media for More Expressive and Engaging 3D Visualizations. IEEE Transactions on Visualization and Computer Graphics 2020; 26:492-502. PMID: 31403430. DOI: 10.1109/tvcg.2019.2934260.
Abstract
We introduce Artifact-Based Rendering (ABR), a framework of tools, algorithms, and processes that makes it possible to produce real, data-driven 3D scientific visualizations with a visual language derived entirely from colors, lines, textures, and forms created using traditional physical media or found in nature. A theory and process for ABR is presented to address three current needs: (i) designing better visualizations by making it possible for non-programmers to rapidly design and critique many alternative data-to-visual mappings; (ii) expanding the visual vocabulary used in scientific visualizations to depict increasingly complex multivariate data; (iii) bringing a more engaging, natural, and human-relatable handcrafted aesthetic to data visualization. New tools and algorithms to support ABR include front-end applets for constructing artifact-based colormaps, optimizing 3D scanned meshes for use in data visualization, and synthesizing textures from artifacts. These are complemented by an interactive rendering engine with custom algorithms and interfaces that demonstrate multiple new visual styles for depicting point, line, surface, and volume data. A within-the-research-team design study provides early evidence of the shift in visualization design processes that ABR is believed to enable when compared to traditional scientific visualization systems. Qualitative user feedback on applications to climate science and brain imaging support the utility of ABR for scientific discovery and public communication.
23. Multivariate Maps—A Glyph-Placement Algorithm to Support Multivariate Geospatial Visualization. Information 2019. DOI: 10.3390/info10100302.
Abstract
Maps are one of the most conventional types of visualization used when conveying information to both inexperienced users and advanced analysts. However, the multivariate representation of data on maps is still considered an unsolved problem. We present a multivariate map that uses geo-space to guide the position of multivariate glyphs and enables users to interact with the map and glyphs, conveying meaningful data at different levels of detail. We develop an algorithm pipeline for this process and demonstrate how the user can adjust the level of detail of the resulting imagery. The algorithm features a unique combination of guided glyph placement, level-of-detail control, dynamic zooming, and smooth transitions. We present a selection of user options to facilitate the exploration process and provide case studies to support how the application can be used. We also compare our placement algorithm with previous geo-spatial glyph placement algorithms. The result is a novel glyph placement solution to support multivariate maps.
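A rough sketch of geo-guided placement with level-of-detail control, assuming simple grid binning rather than the published algorithm, is shown below: records are binned into cells whose size shrinks as the user zooms in, and one multivariate glyph per cell is placed at the mean location with aggregated attribute values.

```python
# Sketch of geo-guided glyph placement with a simple level-of-detail control,
# in the spirit of the abstract above; not the published algorithm. The grid
# binning, zoom rule, and coordinates are illustrative assumptions.
import numpy as np
from collections import defaultdict

def place_glyphs(lon, lat, attrs, zoom):
    """lon, lat: 1-D coordinate arrays; attrs: (n, d) attribute matrix."""
    cell_deg = 1.0 / (2 ** zoom)                      # finer cells at higher zoom
    cells = defaultdict(list)
    for i in range(len(lon)):
        key = (int(lon[i] // cell_deg), int(lat[i] // cell_deg))
        cells[key].append(i)
    glyphs = []
    for idx in cells.values():
        glyphs.append({"lon": float(np.mean(lon[idx])),
                       "lat": float(np.mean(lat[idx])),
                       "attrs": attrs[idx].mean(axis=0),   # aggregated glyph values
                       "count": len(idx)})
    return glyphs

rng = np.random.default_rng(7)
lon = rng.uniform(-3.5, -2.5, 1000)                   # illustrative coordinates
lat = rng.uniform(51.3, 52.0, 1000)
attrs = rng.uniform(0, 1, size=(1000, 4))
for z in (2, 4, 6):
    print("zoom", z, "->", len(place_glyphs(lon, lat, attrs, z)), "glyphs")
```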
24. Brehmer M, Lee B, Isenberg P, Choe EK. Visualizing Ranges over Time on Mobile Phones: A Task-Based Crowdsourced Evaluation. IEEE Transactions on Visualization and Computer Graphics 2019; 25:619-629. PMID: 30137001. DOI: 10.1109/tvcg.2018.2865234.
Abstract
In the first crowdsourced visualization experiment conducted exclusively on mobile phones, we compare approaches to visualizing ranges over time on small displays. People routinely consume such data via a mobile phone, from temperatures in weather forecasting apps to sleep and blood pressure readings in personal health apps. However, we lack guidance on how to effectively visualize ranges on small displays in the context of different value retrieval and comparison tasks, or with respect to different data characteristics such as periodicity, seasonality, or the cardinality of ranges. Central to our experiment is a comparison between two ways to lay out ranges: a more conventional linear layout strikes a balance between quantitative and chronological scale resolution, while a less conventional radial layout emphasizes the cyclicality of time and may prioritize discrimination between values at its periphery. With results from 87 crowd workers, we found that while participants completed tasks more quickly with linear layouts than with radial ones, there were few differences in terms of error rate between layout conditions. We also found that participants performed similarly with both layouts in tasks that involved comparing superimposed observed and average ranges.
25. Blascheck T, Besancon L, Bezerianos A, Lee B, Isenberg P. Glanceable Visualization: Studies of Data Comparison Performance on Smartwatches. IEEE Transactions on Visualization and Computer Graphics 2018; 25:630-640. PMID: 30138911. DOI: 10.1109/tvcg.2018.2865142.
Abstract
We present the results of two perception studies to assess how quickly people can perform a simple data comparison task for small-scale visualizations on a smartwatch. The main goal of these studies is to extend our understanding of design constraints for smartwatch visualizations. Previous work has shown that a vast majority of smartwatch interactions last under 5 s. It is still unknown what people can actually perceive from visualizations during such short glances, in particular given the limited display space of smartwatches. To shed light on this question, we conducted two perception studies that assessed the lower bounds of task time for a simple data comparison task. We tested three chart types common on smartwatches: bar charts, donut charts, and radial bar charts, with three data sizes: 7, 12, and 24 data values. In our first study, we controlled the differences of the two target bars to be compared, while the second study varied the difference randomly. For both studies, we found that participants performed the task on average in <300 ms for the bar chart, <220 ms for the donut chart, and <1780 ms for the radial bar chart. Thresholds in the second study per chart type were on average 1.14-1.35× higher than in the first study. Our results show that bar and donut charts should be preferred on smartwatch displays when quick data comparisons are necessary.