1. Zhao Y, Zhang Y, Zhang Y, Zhao X, Wang J, Shao Z, Turkay C, Chen S. LEVA: Using Large Language Models to Enhance Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1830-1847. [PMID: 38437130] [DOI: 10.1109/tvcg.2024.3368060]
Abstract
Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' visual analytics (VA) workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.
2. Wang AZ, Borland D, Gotz D. Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis. IEEE Transactions on Visualization and Computer Graphics 2025; 31:776-786. [PMID: 39255136] [DOI: 10.1109/tvcg.2024.3456369]
Abstract
Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools; however, a reliance on correlation may lead users toward false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization for supporting visual causal inference, and informed by insights and concerns gathered from expert interviews, we propose a novel, simple, and efficient counterfactual guidance method to improve causal inference performance in guided exploratory analytics. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system and, using a synthetically generated ground-truth causal dataset, conducted a comparative user study to evaluate to what extent counterfactual guidance can lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance and also led to different exploratory behaviors compared to correlation-based guidance. Based on these findings, we offer future directions and challenges for incorporating counterfactual guidance to better support exploratory visual analytics.
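The core notion behind counterfactual guidance, contrasting the records a user has filtered in against excluded-but-similar records, can be illustrated with a small sketch. The snippet below is not the authors' method or system; it is a minimal Python illustration, assuming pandas and NumPy and using an invented helper name `counterfactual_split`, of how such a comparison subset could be formed.

```python
import numpy as np
import pandas as pd

def counterfactual_split(df, filter_mask, feature_cols, k=50):
    """Split data into the filtered subset and a 'counterfactual' subset:
    excluded rows that are nevertheless most similar to the included ones."""
    included = df[filter_mask]
    excluded = df[~filter_mask]
    if included.empty or excluded.empty:
        return included, excluded.head(0)
    centroid = included[feature_cols].mean().to_numpy()
    # Rank excluded rows by distance to the centroid of the included rows.
    dists = np.linalg.norm(excluded[feature_cols].to_numpy() - centroid, axis=1)
    counterfactual = excluded.iloc[np.argsort(dists)[:k]]
    return included, counterfactual

# Toy example: does high feature 'x' really drive outcome 'y'?
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=500), "z": rng.normal(size=500)})
df["y"] = 0.8 * df["z"] + rng.normal(scale=0.5, size=500)  # y depends on z, not x

inc, cf = counterfactual_split(df, df["x"] > 1.0, feature_cols=["z"])
print("outcome in filtered subset:", round(inc["y"].mean(), 2))
print("outcome in counterfactual subset:", round(cf["y"].mean(), 2))
```

If the two subsets show similar outcomes despite the filter on `x`, the apparent relation between `x` and `y` deserves skepticism, which is the kind of prompt such guidance aims to give.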
3. Dhanoa V, Hinterreiter A, Fediuk V, Elmqvist N, Groller E, Streit M. D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding. IEEE Transactions on Visualization and Computer Graphics 2025; 31:721-731. [PMID: 39259628] [DOI: 10.1109/tvcg.2024.3456347]
Abstract
Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions available. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and there is little guidance on how best to complete this task. Depending on their levels of expertise, end users being onboarded to a new dashboard can be either confused and overwhelmed or disinterested and disengaged. We propose interactive dashboard tours (D-Tours) as semi-automated onboarding experiences that preserve the agency of users with various levels of expertise to keep them interested and engaged. Our interactive tour concept draws from open-world game design to give users freedom in choosing their path through onboarding. We have implemented the concept in a tool called D-TOUR Prototype, which allows authors to craft custom interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (e.g., video, audio, and highlighting) or new narratives to produce an onboarding experience tailored to an individual user. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. Our evaluation shows that authors found the automation in the D-Tour Prototype helpful and time-saving, and users found the created tours engaging and intuitive. This paper and all supplemental materials are available at https://osf.io/6fbjp/.
4. McNutt A, Stone MC, Heer J. Mixing Linters with GUIs: A Color Palette Design Probe. IEEE Transactions on Visualization and Computer Graphics 2025; 31:327-337. [PMID: 39259629] [DOI: 10.1109/tvcg.2024.3456317]
Abstract
Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious, even annoying, advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts, thereby addressing many of their core issues. We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user-tunable advice, all laying the groundwork for more effective visualization linters in any context.
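A single palette lint rule of the kind described here is easy to state programmatically. The sketch below is not the design probe from the paper; it checks just one criterion, a minimum pairwise color difference, and uses plain RGB Euclidean distance as a crude stand-in for a perceptual metric such as CIEDE2000.

```python
from itertools import combinations

def hex_to_rgb(hex_color):
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def lint_min_distance(palette, threshold=60.0):
    """Flag palette color pairs that are too similar.

    Distance is plain Euclidean distance in RGB space, a crude stand-in
    for a perceptual difference metric such as CIEDE2000.
    """
    issues = []
    for a, b in combinations(palette, 2):
        ra, rb = hex_to_rgb(a), hex_to_rgb(b)
        dist = sum((x - y) ** 2 for x, y in zip(ra, rb)) ** 0.5
        if dist < threshold:
            issues.append((a, b, round(dist, 1)))
    return issues

# The two similar blues are flagged; the orange passes.
print(lint_min_distance(["#1f77b4", "#2070b6", "#ff7f0e"]))
```

A GUI-integrated linter would attach such rules to visual explanations and suggested fixes rather than returning a bare list.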
5. Miksch S, Di Ciccio C, Soffer P, Weber B, Rhyne TM. Visual Analytics Meets Process Mining: Challenges and Opportunities. IEEE Computer Graphics and Applications 2024; 44:132-141. [PMID: 40030841] [DOI: 10.1109/mcg.2024.3456916]
Abstract
Visual analytics (VA) integrates the outstanding capabilities of humans in terms of visual information exploration with the enormous processing power of computers to form a powerful knowledge discovery environment. In other words, VA is the science of analytical reasoning facilitated by interactive interfaces, capturing the information discovery process while keeping humans in the loop. Process mining (PM) is a data-driven and process-centric approach that aims to extract information and knowledge from event logs to discover, monitor, and improve processes in various application domains. The combination of interactive visual data analysis and exploration with PM algorithms can make complex information structures more comprehensible and facilitate new insights. Yet, this combination remains largely unexplored. In this article, we illustrate the concepts of VA and PM, show how their combination can support the extraction of more insights from complex event data, and elaborate on the challenges and opportunities for analyzing process data with VA methods and enhancing VA methods using PM techniques.
6. Hografer M, Schulz HJ. Tailorable Sampling for Progressive Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4809-4824. [PMID: 37204960] [DOI: 10.1109/tvcg.2023.3278084]
Abstract
Progressive visual analytics (PVA) allows analysts to maintain their flow during otherwise long-running computations by producing early, incomplete results that refine over time, for example, by running the computation over smaller partitions of the data. These partitions are created using sampling, whose goal is to draw samples of the dataset such that the progressive visualization becomes as useful as possible as soon as possible. What makes the visualization useful depends on the analysis task and, accordingly, some task-specific sampling methods have been proposed for PVA to address this need. However, as analysts see more and more of their data during the progression, the analysis task at hand often changes, which means that analysts need to restart the computation to switch the sampling method, causing them to lose their analysis flow. This poses a clear limitation to the proposed benefits of PVA. Hence, we propose a pipeline for PVA-sampling that allows tailoring the data partitioning to analysis scenarios by switching out modules in a way that does not require restarting the analysis. To that end, we characterize the problem of PVA-sampling, formalize the pipeline in terms of data structures, discuss on-the-fly tailoring, and present additional examples demonstrating its usefulness.
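The key idea, swapping the sampling module mid-progression without restarting, can be sketched in a few lines. This is a toy illustration under assumed data structures (rows as dictionaries, a mutable sampler slot), not the pipeline formalized in the paper.

```python
import random

class UniformSampler:
    """Task-agnostic baseline: draw the next chunk uniformly at random."""
    def next_chunk(self, remaining, size):
        return random.sample(remaining, min(size, len(remaining)))

class HighValueFirstSampler:
    """Task-specific sampler: prefer items with the largest 'value' first."""
    def next_chunk(self, remaining, size):
        return sorted(remaining, key=lambda r: -r["value"])[:size]

def progressive_run(data, sampler, chunk_size=3):
    remaining = list(data)
    while remaining:
        chunk = sampler["current"].next_chunk(remaining, chunk_size)
        for row in chunk:
            remaining.remove(row)
        yield chunk  # partial result for the visualization to refine

data = [{"id": i, "value": random.random()} for i in range(10)]
sampler = {"current": UniformSampler()}       # indirection so the module can be swapped
run = progressive_run(data, sampler)

print([r["id"] for r in next(run)])           # first chunk, uniform sampling
sampler["current"] = HighValueFirstSampler()  # analysis task changed: swap on the fly
print([r["id"] for r in next(run)])           # next chunk, new strategy, no restart
```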
7. Angelini M, Blasilli G, Lenti S, Santucci G. A Visual Analytics Conceptual Framework for Explorable and Steerable Partial Dependence Analysis. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4497-4513. [PMID: 37027262] [DOI: 10.1109/tvcg.2023.3263739]
Abstract
Machine learning techniques are a driving force for research in various fields, from credit card fraud detection to stock analysis. Recently, a growing interest in increasing human involvement has emerged, with the primary goal of improving the interpretability of machine learning models. Among different techniques, Partial Dependence Plots (PDP) represent one of the main model-agnostic approaches for interpreting how the features influence the prediction of a machine learning model. However, its limitations (i.e., visual interpretation, aggregation of heterogeneous effects, inaccuracy, and computability) could complicate or misdirect the analysis. Moreover, the resulting combinatorial space can be challenging to explore both computationally and cognitively when analyzing the effects of several features at the same time. This article proposes a conceptual framework that enables effective analysis workflows, mitigating state-of-the-art limitations. The proposed framework allows for exploring and refining computed partial dependences, observing incrementally accurate results, and steering the computation of new partial dependences on user-selected subspaces of the combinatorial and intractable space. With this approach, the user can save both computational and cognitive costs, in contrast with the standard monolithic approach that computes all the possible combinations of features on all their domains in batch. The framework is the result of a careful design process that involved experts' knowledge during its validation, and it informed the development of a prototype, W4SP, that demonstrates its applicability by traversing its different paths. A case study shows the advantages of the proposed approach.
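For reference, the quantity being explored and steered here, partial dependence, has a standard definition: sweep the feature of interest over a grid and average the model's predictions with that feature held fixed. The sketch below shows that baseline batch computation with scikit-learn, i.e., the monolithic approach the framework improves upon, not the incremental, steerable algorithm of the article.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

def partial_dependence_1d(model, X, feature, grid_points=20):
    """Classic one-feature partial dependence: for each grid value v,
    set the feature to v for *all* rows and average the predictions."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_points)
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        pd_values.append(model.predict(X_mod).mean())
    return grid, np.array(pd_values)

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
grid, pdp = partial_dependence_1d(model, X, feature=0)
print(np.round(pdp[:5], 2))  # average predicted response along feature 0
```

The cost of this batch form grows quickly with grid resolution and with the number of features considered jointly, which is exactly the combinatorial burden the steerable framework targets.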
8. Yang H, Li J, Chen S. TopicRefiner: Coherence-Guided Steerable LDA for Visual Topic Enhancement. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4542-4557. [PMID: 37053067] [DOI: 10.1109/tvcg.2023.3266890]
Abstract
This article presents a new Human-steerable Topic Modeling (HSTM) technique. Unlike existing techniques, which commonly rely on matrix decomposition-based topic models, we extend LDA as the fundamental component for extracting topics. LDA's high popularity and technical characteristics, such as better topic quality and no need to cherry-pick terms to construct the document-term matrix, ensure better applicability. Our research revolves around two inherent limitations of LDA. First, the principle of LDA is complex. Its calculation process is stochastic and difficult to control. We therefore give a weighting method to incorporate users' refinements into the Gibbs sampling to control LDA. Second, LDA often runs on a corpus with massive terms and documents, forming a vast search space in which users must find semantically relevant or irrelevant objects. We thus design a visual editing framework based on the coherence metric, proven to be the most consistent with human perception in assessing topic quality, to guide users' interactive refinements. Case studies on two open real-world datasets, participants' performance in a user study, and quantitative experiment results demonstrate the usability and effectiveness of the proposed technique.
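The weighting idea can be made concrete with a single collapsed-Gibbs step. The snippet below is a generic sketch, not the authors' exact formulation: the standard topic-assignment probability is multiplied by a user-supplied (topic, word) weight matrix, so interactive refinements bias which topic a token is reassigned to.

```python
import numpy as np

def resample_topic(d, w, n_dk, n_kw, n_k, alpha, beta, V, user_weight):
    """Collapsed-Gibbs resampling of the topic for one token of word w in document d.

    Assumes the counts for this token have already been decremented.
    Standard proportionality (n_dk + alpha) * (n_kw + beta) / (n_k + V * beta),
    multiplied by a user-supplied weight that boosts or suppresses
    (topic, word) pairs according to interactive refinements.
    """
    p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
    p = p * user_weight[:, w]   # refinement hook: all-ones means "no preference"
    p = p / p.sum()
    return np.random.choice(len(p), p=p)

K, V, D = 3, 5, 2
n_dk = np.ones((D, K))          # document-topic counts
n_kw = np.ones((K, V))          # topic-word counts
n_k = n_kw.sum(axis=1)          # topic totals
user_weight = np.ones((K, V))
user_weight[0, 2] = 5.0         # user refinement: word 2 should strongly indicate topic 0

print(resample_topic(d=0, w=2, n_dk=n_dk, n_kw=n_kw, n_k=n_k,
                     alpha=0.1, beta=0.01, V=V, user_weight=user_weight))
```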
9. Li J, Lai C, Wang Y, Luo A, Yuan X. SpectrumVA: Visual Analysis of Astronomical Spectra for Facilitating Classification Inspection. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5386-5403. [PMID: 37440386] [DOI: 10.1109/tvcg.2023.3294958]
Abstract
In astronomical spectral analysis, class recognition is essential and fundamental for subsequent scientific research. Experts often perform visual inspection after automatic classification to deal with low-quality spectra and improve accuracy. However, given the enormous spectral volume and the inadequacy of current inspection practice, such inspection is tedious and time-consuming. This article presents a visual analytics system named SpectrumVA to improve the efficiency of visual inspection while guaranteeing accuracy. We abstract inspection as a visual parameter space analysis process, using redshifts and spectral lines as parameters. Different navigation strategies are employed in the "selection-inspection-promotion" workflow. At the selection stage, we help the experts identify a spectrum of interest through spectral representations and auxiliary information. Several possible redshifts and corresponding important spectral lines are also recommended through a global-to-local strategy to provide an appropriate entry point for the inspection. The inspection stage adopts a variety of instant visual feedback to help the experts adjust the redshift and select spectral lines in an informed trial-and-error manner. Spectra similar to the inspected one, rather than different ones, are visualized at the promotion stage, making the inspection process more fluent. We demonstrate the effectiveness of SpectrumVA through a quantitative algorithmic assessment, a case study, interviews with domain experts, and a user study.
10. Piccolotto N, Bögl M, Muehlmann C, Nordhausen K, Filzmoser P, Schmidt J, Miksch S. Data Type Agnostic Visual Sensitivity Analysis. IEEE Transactions on Visualization and Computer Graphics 2023; PP:1-11. [PMID: 37922175] [DOI: 10.1109/tvcg.2023.3327203]
Abstract
Modern science and industry rely on computational models for simulation, prediction, and data analysis. Spatial blind source separation (SBSS) is a model used to analyze spatial data. Designed explicitly for spatial data analysis, it is superior to popular non-spatial methods, like PCA. However, a challenge to its practical use is setting two complex tuning parameters, which requires parameter space analysis. In this paper, we focus on sensitivity analysis (SA). SBSS parameters and outputs are spatial data, which makes SA difficult, as few SA approaches in the literature assume such complex data on both sides of the model. Based on the requirements identified in our design study with statistics experts, we developed a visual analytics prototype for data type agnostic visual sensitivity analysis that fits SBSS and other contexts. The main advantage of our approach is that it requires only dissimilarity measures for parameter settings and outputs. We evaluated the prototype heuristically with visualization experts and through interviews with two SBSS experts. In addition, we show the transferability of our approach by applying it to microclimate simulations. Study participants could confirm suspected and known parameter-output relations, find surprising associations, and identify parameter subspaces to examine in the future. During our design study and evaluation, we identified challenging future research opportunities.
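A simple numeric counterpart of the "dissimilarity-only" requirement is a Mantel-style correlation between the parameter-space and output-space dissimilarity matrices: if similar parameter settings yield similar outputs, the two matrices correlate. The sketch below illustrates that idea on toy data; it is not the visual analytics prototype described in the paper.

```python
import numpy as np

def mantel_correlation(d_param, d_output):
    """Pearson correlation between the upper triangles of two dissimilarity
    matrices: a high value suggests that similar parameter settings tend to
    produce similar outputs (a crude, global sensitivity signal)."""
    iu = np.triu_indices_from(d_param, k=1)
    return np.corrcoef(d_param[iu], d_output[iu])[0, 1]

rng = np.random.default_rng(1)
params = rng.uniform(size=(30, 2))                               # 30 parameter settings
outputs = params[:, :1] ** 2 + 0.05 * rng.normal(size=(30, 1))   # toy model output

d_param = np.linalg.norm(params[:, None, :] - params[None, :, :], axis=-1)
d_output = np.abs(outputs - outputs.T)

print(round(mantel_correlation(d_param, d_output), 2))
```

The point of the visual approach is to go beyond such a single global number and let analysts inspect where in the parameter space the relation holds or breaks down.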
11. Linhares CDG, Lima DM, Ponciano JR, Olivatto MM, Gutierrez MA, Poco J, Traina C, Traina AJM. ClinicalPath: A Visualization Tool to Improve the Evaluation of Electronic Health Records in Clinical Decision-Making. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4031-4046. [PMID: 35588413] [DOI: 10.1109/tvcg.2022.3175626]
Abstract
Physicians work on very tight schedules and need decision-support tools that help them do their work in a timely and dependable manner. Examining piles of printed test results and using systems with little visualization support to reach a diagnosis is daunting, yet it is still the usual daily procedure for physicians, especially in developing countries. Electronic Health Record systems have been designed to keep the patients' history and reduce the time spent analyzing the patient's data. However, better tools to support decision-making are still needed. In this article, we propose ClinicalPath, a visualization tool for users to track a patient's clinical path through a series of tests and data, which can aid in treatments and diagnoses. Our proposal is focused on the analysis of patient data, presenting the test results and clinical history longitudinally. Both the visualization design and the system functionality were developed in close collaboration with experts in the medical domain to ensure a good fit between the technical solutions and the real needs of the professionals. We validated the proposed visualization through case studies and user assessments with tasks based on physicians' daily activities. Our results show that the proposed system improves physicians' experience in decision-making tasks, which are performed with more confidence and better use of their time, freeing them to provide other needed care for their patients.
12. Wu A, Deng D, Chen M, Liu S, Keim D, Maciejewski R, Miksch S, Strobelt H, Viegas F, Wattenberg M, Rhyne TM. Grand Challenges in Visual Analytics Applications. IEEE Computer Graphics and Applications 2023; 43:83-90. [PMID: 37713213] [DOI: 10.1109/mcg.2023.3284620]
Abstract
In the past two decades, research in visual analytics (VA) applications has made tremendous progress, not just in terms of scientific contributions, but also in real-world impact across wide-ranging domains including bioinformatics, urban analytics, and explainable AI. Despite these success stories, questions on the rigor and value of VA application research have emerged as a grand challenge. This article outlines a research and development agenda for making VA application research more rigorous and impactful. We first analyze the characteristics of VA application research and explain how they cause the rigor and value problem. Next, we propose a research ecosystem for improving scientific value and rigor, and outline an agenda with 12 open challenges spanning four areas: foundation, methodology, application, and community. We encourage discussions, debates, and innovative efforts toward more rigorous and impactful VA research.
13. Piccolotto N, Bögl M, Miksch S. Visual Parameter Space Exploration in Time and Space. Computer Graphics Forum 2023; 42:e14785. [PMID: 38505647] [PMCID: PMC10947302] [DOI: 10.1111/cgf.14785]
Abstract
Computational models, such as simulations, are central to a wide range of fields in science and industry. Those models take input parameters and produce some output. To fully exploit their utility, relations between parameters and outputs must be understood. These include, for example, which parameter setting produces the best result (optimization) or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve for various reasons, for example, the size of the parameter space, and are therefore supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and approaches to support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.
Affiliation(s)
- Nikolaus Piccolotto, TU Wien, Institute of Visual Computing and Human-Centered Technology, Vienna, Austria
- Markus Bögl, TU Wien, Institute of Visual Computing and Human-Centered Technology, Vienna, Austria
- Silvia Miksch, TU Wien, Institute of Visual Computing and Human-Centered Technology, Vienna, Austria
14. Kouril D, Strnad O, Mindek P, Halladjian S, Isenberg T, Groller ME, Viola I. Molecumentary: Adaptable Narrated Documentaries Using Molecular Visualization. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1733-1747. [PMID: 34822330] [DOI: 10.1109/tvcg.2021.3130670]
Abstract
We present a method for producing documentary-style content using real-time scientific visualization. We introduce molecumentaries, i.e., molecular documentaries featuring structural models from molecular biology, created through adaptable methods instead of the rigid traditional production pipeline. Our work is motivated by the rapid evolution of scientific visualization and its potential in science dissemination. Without some form of explanation or guidance, however, novices and lay-persons often find it difficult to gain insights from the visualization itself. We integrate such knowledge using the verbal channel and provide it alongside an engaging visual presentation. To realize the synthesis of a molecumentary, we provide technical solutions along two major production steps: (1) preparing a story structure and (2) turning the story into a concrete narrative. In the first step, we compile information about the model from heterogeneous sources into a story graph. We combine local knowledge with external sources to complete the story graph and enrich the final result. In the second step, we synthesize a narrative, i.e., story elements presented in sequence, using the story graph. We then traverse the story graph and generate a virtual tour, using automated camera and visualization transitions. We turn texts written by domain experts into verbal representations using text-to-speech functionality and provide them as a commentary. Using the described framework, we synthesize fly-throughs with descriptions: automatic ones that mimic a manually authored documentary or semi-automatic ones that guide the documentary narrative solely through curated textual input.
15. Meuschke M, Niemann U, Behrendt B, Gutberlet M, Preim B, Lawonn K. GUCCI - Guided Cardiac Cohort Investigation of Blood Flow Data. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1876-1892. [PMID: 34882556] [DOI: 10.1109/tvcg.2021.3134083]
Abstract
We present the framework GUCCI (Guided Cardiac Cohort Investigation), which provides a guided visual analytics workflow to analyze cohort-based measured blood flow data in the aorta. In the past, many specialized techniques have been developed for the visual exploration of such data sets for a better understanding of the influence of morphological and hemodynamic conditions on cardiovascular diseases. However, there is a lack of dedicated techniques that allow visual comparison of multiple data sets and defined cohorts, which is essential to characterize pathologies. GUCCI offers visual analytics techniques and novel visualization methods to guide the user through the comparison of predefined cohorts, such as healthy volunteers and patients with a pathologically altered aorta. The combination of overview and glyph-based depictions together with statistical cohort-specific information allows investigating differences and similarities of the time-dependent data. Our framework was evaluated in a qualitative user study with three radiologists specialized in cardiac imaging and two experts in medical blood flow visualization. They were able to discover cohort-specific characteristics, which supports the derivation of standard values as well as the assessment of pathology-related severity and the need for treatment.
16. Arleo A, Tsigkanos C, Leite RA, Dustdar S, Miksch S, Sorger J. Visual Exploration of Financial Data with Incremental Domain Knowledge. Computer Graphics Forum 2023; 42:101-116. [PMID: 38504907] [PMCID: PMC10946466] [DOI: 10.1111/cgf.14723]
Abstract
Modelling the dynamics of a growing financial environment is a complex task that requires domain knowledge, expertise, and access to heterogeneous information types. Such information can stem from several sources at different scales, complicating the task of forming a holistic impression of the financial landscape, especially in terms of the economic relationships between firms. Bringing this scattered information into a common context is, therefore, an essential step in the process of obtaining meaningful insights about the state of an economy. In this paper, we present Sabrina 2.0, a Visual Analytics (VA) approach for exploring financial data across different scales, from individual firms up to nation-wide aggregate data. Our solution is coupled with a pipeline for the generation of firm-to-firm financial transaction networks, fusing information about individual firms with sector-to-sector transaction data and domain knowledge on macroscopic aspects of the economy. Each network can be instantiated multiple times to compare different scenarios. We collaborated with experts from finance and economics during the development of our VA solution and evaluated our approach with seven domain experts across industry and academia through a qualitative insight-based evaluation. The analysis shows how Sabrina 2.0 enables the generation of insights and how the incorporation of transaction models assists users in their exploration of a national economy.
Affiliation(s)
- Alessio Arleo, TU Wien, Vienna, Austria; Centre for Visual Analytics Science and Technology (CVAST), Vienna, Austria
- Roger A. Leite, TU Wien, Vienna, Austria; Centre for Visual Analytics Science and Technology (CVAST), Vienna, Austria
- Schahram Dustdar, TU Wien, Vienna, Austria; Distributed Systems Group (DSG), Vienna, Austria
- Silvia Miksch, TU Wien, Vienna, Austria; Centre for Visual Analytics Science and Technology (CVAST), Vienna, Austria
17. Sperrle F, Ceneda D, El-Assady M. Lotse: A Practical Framework for Guidance in Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1124-1134. [PMID: 36215348] [DOI: 10.1109/tvcg.2022.3209393]
Abstract
Co-adaptive guidance aims to enable efficient human-machine collaboration in visual analytics, as proposed by multiple theoretical frameworks. This paper bridges the gap between such conceptual frameworks and practical implementation by introducing an accessible model of guidance and an accompanying guidance library, mapping theory into practice. We contribute a model of system-provided guidance based on design templates and derived strategies. We instantiate the model in a library called Lotse that allows specifying guidance strategies in definition files and generates running code from them. Lotse is the first guidance library using such an approach. It supports the creation of reusable guidance strategies to retrofit existing applications with guidance and fosters the creation of general guidance strategy patterns. We demonstrate its effectiveness through first-use case studies with VA researchers of varying guidance design expertise and find that they are able to effectively and quickly implement guidance with Lotse. Further, we analyze our framework's cognitive dimensions to evaluate its expressiveness and outline a summary of open research questions for aligning guidance practice with its intricate theory.
18. Li J, Zhou CQ. Incorporation of Human Knowledge into Data Embeddings to Improve Pattern Significance and Interpretability. IEEE Transactions on Visualization and Computer Graphics 2023; 29:723-733. [PMID: 36155441] [DOI: 10.1109/tvcg.2022.3209382]
Abstract
Embedding is a common technique for analyzing multi-dimensional data. However, the embedding projection cannot always form significant and interpretable visual structures that foreshadow underlying data patterns. We propose an approach that incorporates human knowledge into data embeddings to improve pattern significance and interpretability. The core idea is (1) externalizing tacit human knowledge as explicit sample labels and (2) adding a classification loss in the embedding network to encode samples' classes. The approach pulls samples of the same class with similar data features closer in the projection, leading to more compact (significant) and class-consistent (interpretable) visual structures. We give an embedding network with a customized classification loss to implement the idea and integrate the network into a visualization system to form a workflow that supports flexible class creation and pattern exploration. Patterns found on open datasets in case studies, subjects' performance in a user study, and quantitative experiment results illustrate the general usability and effectiveness of the approach.
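Idea (2), adding a classification loss to the embedding network, can be written down compactly. The PyTorch sketch below is a generic illustration with made-up data and an arbitrary loss weighting, not the authors' architecture: a small encoder produces a 2-D projection, and a classification head on top of it encodes the user-assigned classes.

```python
import torch
import torch.nn as nn

class LabeledEmbedder(nn.Module):
    """Maps samples to a 2-D embedding and also predicts the user-defined class,
    so the projection is pulled toward class-consistent structure."""
    def __init__(self, in_dim, n_classes, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2))   # 2-D projection
        self.classifier = nn.Linear(2, n_classes)             # classification head

    def forward(self, x):
        z = self.encoder(x)
        return z, self.classifier(z)

x = torch.randn(256, 10)
labels = torch.randint(0, 3, (256,))   # externalized human knowledge: sample labels
model = LabeledEmbedder(in_dim=10, n_classes=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()

for _ in range(100):
    z, logits = model(x)
    # Toy regularizer keeps the projection compact; the cross-entropy term encodes
    # the user-assigned classes. The 0.01 / 1.0 weights are arbitrary here.
    loss = 0.01 * z.pow(2).mean() + 1.0 * ce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

print(z.shape, float(loss))
```

In a full system the toy regularizer would be replaced by the usual embedding objective, with the classification term added on top, as the abstract describes.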
19. Ha S, Monadjemi S, Garnett R, Ottley A. A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias. IEEE Transactions on Visualization and Computer Graphics 2023; 29:483-492. [PMID: 36155457] [DOI: 10.1109/tvcg.2022.3209476]
Abstract
The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill this gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.
20. Zhou Z, Wang W, Guo M, Wang Y, Gotz D. A Design Space for Surfacing Content Recommendations in Visual Analytic Platforms. IEEE Transactions on Visualization and Computer Graphics 2023; 29:84-94. [PMID: 36194706] [DOI: 10.1109/tvcg.2022.3209445]
Abstract
Recommendation algorithms have been leveraged in various ways within visualization systems to assist users as they perform a range of information tasks. One common focus for these techniques has been the recommendation of content, rather than visual form, as a means to assist users in the identification of information that is relevant to their task context. A wide variety of techniques have been proposed to address this general problem, with a range of design choices in how these solutions surface relevant information to users. This paper reviews the state of the art in how visualization systems surface recommended content to users during visual analysis; introduces a four-dimensional design space for visual content recommendation based on a characterization of prior work; and discusses key observations regarding common patterns and future research opportunities.
21. Li Y, Qi Y, Shi Y, Chen Q, Cao N, Chen S. Diverse Interaction Recommendation for Public Users Exploring Multi-view Visualization using Deep Learning. IEEE Transactions on Visualization and Computer Graphics 2023; 29:95-105. [PMID: 36155443] [DOI: 10.1109/tvcg.2022.3209461]
Abstract
Interaction is an important channel for offering users insights in interactive visualization systems. However, which interaction to perform and which part of the data to explore are hard questions for public users facing a multi-view visualization for the first time. Making these decisions largely relies on professional experience and analytic abilities, which is a huge challenge for non-professionals. To solve this problem, we propose a method aiming to provide diverse, insightful, and real-time interaction recommendations for novice users. Building on the long short-term memory (LSTM) architecture, our model captures users' interactions and visual states and encodes them in numerical vectors to make further recommendations. Through an illustrative example of a visualization system about Chinese poets in a museum scenario, the model is shown to work in systems with multiple views and multiple interaction types. A further user study demonstrates the method's capability to help public users conduct more insightful and diverse interactive explorations and gain more accurate data insights.
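A minimal sketch of the modeling idea (not the published system): represent each past interaction by its type plus a numeric visual-state vector, encode the sequence with an LSTM, and rank candidate next interactions. The dimensions and interaction types below are invented for illustration.

```python
import torch
import torch.nn as nn

N_INTERACTION_TYPES = 6   # e.g., filter, brush, hover, select-view, zoom, reset

class NextInteractionModel(nn.Module):
    """Encodes the interaction history with an LSTM and scores the next interaction type."""
    def __init__(self, state_dim=8, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(N_INTERACTION_TYPES, 16)
        self.lstm = nn.LSTM(16 + state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_INTERACTION_TYPES)

    def forward(self, interaction_ids, view_states):
        x = torch.cat([self.embed(interaction_ids), view_states], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])   # logits over the next interaction type

model = NextInteractionModel()
history = torch.randint(0, N_INTERACTION_TYPES, (1, 5))   # last 5 interactions
states = torch.randn(1, 5, 8)                              # visual state at each step
topk = model(history, states).softmax(-1).topk(3)
print("recommended interaction types:", topk.indices.tolist())
```

Diversity, one of the paper's goals, would additionally require penalizing recommendations that repeat what the user has already done, which this sketch does not attempt.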
22. Ceneda D, Arleo A, Gschwandtner T, Miksch S. Show Me Your Face: Towards an Automated Method to Provide Timely Guidance in Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2022; 28:4570-4581. [PMID: 34232881] [DOI: 10.1109/tvcg.2021.3094870]
Abstract
Providing guidance during a Visual Analytics session can support analysts in pursuing their goals more efficiently. However, the effectiveness of guidance depends on many factors, and determining the right timing to provide it is one of them. Although in complex analysis scenarios choosing the right timing could make the difference between dependable and superfluous guidance, an analysis of the literature suggests that this problem has not received enough attention. In this paper, we describe a methodology to determine moments in which guidance is needed. Our assumption is that the need for guidance influences the user's state of mind, as in distress situations during the analytical process, and we hypothesize that such moments can be identified by analyzing the user's facial expressions. We propose a framework composed of facial-recognition software and a machine learning model trained to detect when to provide guidance based on changes in the user's facial expressions. We trained the model by interviewing eight analysts during their work and ranked multiple facial features based on their relative importance in determining the need for guidance. Finally, we show that by applying only minor modifications to its architecture, our prototype was able to detect a need for guidance on the fly, making our methodology well suited for real-time analysis sessions as well. The results of our evaluations show that our methodology is indeed effective in determining when a need for guidance is present, which constitutes a prerequisite to providing timely and effective guidance in VA.
23. Arleo A, Didimo W, Liotta G, Miksch S, Montecchiani F. Influence Maximization With Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3428-3440. [PMID: 35830402] [DOI: 10.1109/tvcg.2022.3190623]
Abstract
In social networks, individuals' decisions are strongly influenced by recommendations from their friends, acquaintances, and favorite renowned personalities. The popularity of online social networking platforms makes them the prime venues to advertise products and promote opinions. The Influence Maximization (IM) problem entails selecting a seed set of users that maximizes the influence spread, i.e., the expected number of users positively influenced by a stochastic diffusion process triggered by the seeds. Engineering and analyzing IM algorithms remains a difficult and demanding task due to the NP-hardness of the problem and the stochastic nature of the diffusion processes. Although several heuristics have been introduced, they often fail to provide enough information on how the network topology affects the diffusion process, precious insights that could help researchers improve their seed set selection. In this paper, we present VAIM, a visual analytics system that supports users in analyzing, evaluating, and comparing information diffusion processes determined by different IM algorithms. Furthermore, VAIM provides useful insights that the analyst can use to modify the seed set of an IM algorithm so as to improve its influence spread. We assess our system by: (i) a qualitative evaluation based on a guided experiment with two domain experts on two different data sets; (ii) a quantitative estimation of the value of the proposed visualization through the ICE-T methodology by Wall et al. (IEEE TVCG - 2018). The twofold assessment indicates that VAIM effectively supports our target users in the visual analysis of the performance of IM algorithms.
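For context, the kind of IM algorithm VAIM is meant to analyze can be sketched in a few lines: greedy seed selection under the independent cascade model with Monte Carlo spread estimation. This is the textbook baseline, not the VAIM system itself.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One run of the independent cascade: each newly activated node tries once
    to activate each neighbor with probability p. Returns the spread size."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k=2, runs=200, p=0.1):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal gain in expected spread (Monte Carlo estimate)."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        gains = {u: sum(simulate_ic(graph, seeds + [u], p) for _ in range(runs)) / runs
                 for u in nodes if u not in seeds}
        seeds.append(max(gains, key=gains.get))
    return seeds

graph = {0: [1, 2, 3], 1: [4], 2: [4, 5], 3: [6], 4: [7], 5: [7], 6: [], 7: []}
print(greedy_im(graph, k=2))
```

The stochastic simulation is precisely what makes such algorithms hard to reason about from aggregate numbers alone, which motivates visual inspection of individual diffusion processes.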
24. Piccolotto N, Bögl M, Muehlmann C, Nordhausen K, Filzmoser P, Miksch S. Visual Parameter Selection for Spatial Blind Source Separation. Computer Graphics Forum 2022; 41:157-168. [PMID: 36248193] [PMCID: PMC9543588] [DOI: 10.1111/cgf.14530]
Abstract
Analysis of spatial multivariate data, i.e., measurements at irregularly-spaced locations, is a challenging topic in visualization and statistics alike. Such data are integral to many domains, e.g., indicators of valuable minerals are measured for mine prospecting. Popular analysis methods, like PCA, often by design do not account for the spatial nature of the data. Thus they, together with their spatial variants, must be employed very carefully. Clearly, it is preferable to use methods that were specifically designed for such data, like spatial blind source separation (SBSS). However, SBSS requires two tuning parameters, which are themselves complex spatial objects. Setting these parameters involves navigating two large and interdependent parameter spaces, while also taking into account prior knowledge of the physical reality represented by the data. To support analysts in this process, we developed a visual analytics prototype. We evaluated it with experts in visualization, SBSS, and geochemistry. Our evaluations show that our interactive prototype allows complex and realistic parameter settings to be defined efficiently, which was previously impractical. Settings identified by a non-expert led to remarkable and surprising insights for a domain expert. This paper therefore presents important first steps toward enabling the use of a promising analysis method for spatial multivariate data.
Affiliation(s)
- N. Piccolotto, TU Wien, Institute of Visual Computing and Human-Centered Technology, Austria
- M. Bögl, TU Wien, Institute of Visual Computing and Human-Centered Technology, Austria
- C. Muehlmann, TU Wien, Institute of Statistics and Mathematical Methods in Economics, Austria
- P. Filzmoser, TU Wien, Institute of Statistics and Mathematical Methods in Economics, Austria
- S. Miksch, TU Wien, Institute of Visual Computing and Human-Centered Technology, Austria
25. Lohfink AP, Anton SDD, Leitte H, Garth C. Knowledge Rocks: Adding Knowledge Assistance to Visualization Systems. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1117-1127. [PMID: 34591761] [DOI: 10.1109/tvcg.2021.3114687]
Abstract
We present Knowledge Rocks, an implementation strategy and guideline for augmenting visualization systems into knowledge-assisted visualization systems, as defined by the KAVA model. Visualization systems are becoming more and more sophisticated. Hence, it is increasingly important to support users with an integrated knowledge base in making constructive choices and drawing the right conclusions. We support the effective reactivation of visualization software resources by augmenting them with knowledge assistance. To provide a general and yet supportive implementation strategy, we propose an implementation process that is based on an application-agnostic architecture. This architecture is derived from existing knowledge-assisted visualization systems and the KAVA model. Its centerpiece is an ontology that is able to automatically analyze and classify input data, linked to a database to store classified instances. We discuss design decisions and advantages of the Knowledge Rocks framework and illustrate its broad applicability through diverse options for integrating this architecture into an existing visualization system. In addition, we provide a detailed case study in which we augment an IT-security system with knowledge-assistance facilities.
26. Wall E, Narechania A, Coscia A, Paden J, Endert A. Left, Right, and Gender: Exploring Interaction Traces to Mitigate Human Biases. IEEE Transactions on Visualization and Computer Graphics 2022; 28:966-975. [PMID: 34596548] [DOI: 10.1109/tvcg.2021.3114862]
Abstract
Human biases impact the way people analyze data and make decisions. Recent work has shown that some visualization designs can better support cognitive processes and mitigate cognitive biases (i.e., errors that occur due to the use of mental "shortcuts"). In this work, we explore how visualizing a user's interaction history (i.e., which data points and attributes a user has interacted with) can be used to mitigate potential biases that drive decision making by promoting conscious reflection of one's analysis process. Given an interactive scatterplot-based visualization tool, we showed interaction history in real-time while exploring data (by coloring points in the scatterplot that the user has interacted with), and in a summative format after a decision has been made (by comparing the distribution of user interactions to the underlying distribution of the data). We conducted a series of in-lab experiments and a crowd-sourced experiment to evaluate the effectiveness of interaction history interventions toward mitigating bias. We contextualized this work in a political scenario in which participants were instructed to choose a committee of 10 fictitious politicians to review a recent bill passed in the U.S. state of Georgia banning abortion after 6 weeks, where things like gender bias or political party bias may drive one's analysis process. We demonstrate the generalizability of this approach by evaluating a second decision making scenario related to movies. Our results are inconclusive for the effectiveness of interaction history (henceforth referred to as interaction traces) toward mitigating biased decision making. However, we find some mixed support that interaction traces, particularly in a summative format, can increase awareness of potential unconscious biases.
27. Examining data visualization pitfalls in scientific publications. Vis Comput Ind Biomed Art 2021; 4:27. [PMID: 34714412] [PMCID: PMC8556474] [DOI: 10.1186/s42492-021-00092-y]
Abstract
Data visualization blends art and science to convey stories from data via graphical representations. Considering different problems, applications, requirements, and design goals, it is challenging to combine these two components at their full force. While the art component involves creating visually appealing and easily interpreted graphics for users, the science component requires accurate representations of a large amount of input data. Without the science component, visualization cannot serve its role of creating correct representations of the actual data, leading to wrong perceptions, interpretations, and decisions. It might be even worse if incorrect visual representations were intentionally produced to deceive viewers. To address common pitfalls in graphical representations, this paper focuses on identifying and understanding the root causes of misinformation in graphical representations. We reviewed misleading data visualization examples in scientific publications collected from indexing databases and then projected them onto the fundamental units of visual communication, such as color, shape, size, and spatial orientation. Moreover, a text mining technique was applied to extract practical insights from common visualization pitfalls. Cochran's Q test and McNemar's test were conducted to examine whether there is any difference in the proportions of common errors among color, shape, size, and spatial orientation. The findings showed that the pie chart is the most misused graphical representation and that size is the most critical issue. It was also observed that there were statistically significant differences in the proportion of errors among color, shape, size, and spatial orientation.
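The two significance tests named here are standard and straightforward to run. The sketch below uses statsmodels on made-up binary error indicators, not the study's data: Cochran's Q for an overall difference across the four channels, and McNemar's test as a pairwise follow-up.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar, cochrans_q

# Hypothetical data (NOT the paper's): for 20 figures, whether an error was
# present (1) or absent (0) in each of four visual channels.
rng = np.random.default_rng(0)
errors = rng.integers(0, 2, size=(20, 4))   # columns: color, shape, size, orientation

# Cochran's Q: do the four channels differ in their error proportions?
print(cochrans_q(errors))

# McNemar: pairwise follow-up between two channels, e.g. color vs. size.
color, size = errors[:, 0], errors[:, 2]
table = [[np.sum((color == 0) & (size == 0)), np.sum((color == 0) & (size == 1))],
         [np.sum((color == 1) & (size == 0)), np.sum((color == 1) & (size == 1))]]
print(mcnemar(table, exact=True))
```

Both tests are appropriate because the same figures are rated on every channel, i.e., the observations are paired rather than independent.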
28. Corvo A, Caballero HSG, Westenberg MA, van Driel MA, van Wijk JJ. Visual Analytics for Hypothesis-Driven Exploration in Computational Pathology. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3851-3866. [PMID: 32340951] [DOI: 10.1109/tvcg.2020.2990336]
Abstract
Recent advances in computational and algorithmic power are evolving the field of medical imaging rapidly. In cancer research, many new directions are sought to characterize patients with additional imaging features derived from radiology and pathology images. The emerging field of Computational Pathology targets the high-throughput extraction and analysis of the spatial distribution of cells from digital histopathology images. The associated morphological and architectural features allow researchers to quantify and characterize new imaging biomarkers for cancer diagnosis, prognosis, and treatment decisions. However, while the image feature space grows, exploration and analysis become more difficult and ineffective. There is a need for dedicated interfaces for interactive data manipulation and visual analysis of computational pathology and clinical data. For this purpose, we present IIComPath, a visual analytics approach that enables clinical researchers to formulate hypotheses and create computational pathology pipelines involving cohort construction, spatial analysis of image-derived features, and cohort analysis. We demonstrate our approach through use cases that investigate the prognostic value of current diagnostic features and new computational pathology biomarkers.
29.
Abstract
Exploratory data analysis (EDA) is an iterative process where data scientists interact with data to extract information about their quality and shape as well as derive knowledge and new insights into the related domain of the dataset. However, data scientists are rarely experienced domain experts who have tangible knowledge about a domain. Integrating domain knowledge into the analytic process is a complex challenge that usually requires constant communication between data scientists and domain experts. For this reason, it is desirable to reuse the domain insights from exploratory analyses in similar use cases. With this objective in mind, we present a conceptual system design for extracting domain expertise while performing EDA and utilizing it to guide other data scientists in similar use cases. Our system design introduces two concepts, interaction storage and analysis context storage, to record user interactions and interesting data points during an exploratory analysis. For new use cases, it identifies historical interactions from similar use cases and uses the recorded data to construct candidate interaction sequences and predict their potential insight, i.e., the insight generated from performing the sequence. Based on these predictions, the system recommends the sequences with the highest predicted insight to the data scientist. We implement a prototype to test the general feasibility of our system design and enable further research in this area. Within the prototype, we present an exemplary use case that demonstrates the usefulness of recommended interactions. Finally, we give a critical reflection on our first prototype and discuss research opportunities resulting from our system design.
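The two stores and the recommendation step in this design can be mocked up in a few lines. The sketch below is a toy interpretation with invented names such as `interaction_store` and a deliberately crude use-case signature; it only illustrates the ranking logic (use-case similarity times recorded insight), not the actual system design.

```python
import numpy as np

interaction_store = []   # entries: (use_case_signature, interaction_sequence, insight_score)

def signature(df_stats):
    """Very crude use-case signature: a numeric vector describing the dataset."""
    return np.array(df_stats, dtype=float)

def record(sig, sequence, insight_score):
    interaction_store.append((sig, sequence, insight_score))

def recommend(new_sig, top_n=2):
    """Rank stored sequences by similarity of use case weighted by recorded insight."""
    scored = []
    for sig, seq, insight in interaction_store:
        similarity = 1.0 / (1.0 + np.linalg.norm(new_sig - sig))
        scored.append((similarity * insight, seq))
    return [seq for _, seq in sorted(scored, reverse=True)[:top_n]]

# Two historical analyses, described by (n_rows, n_columns, share_numeric).
record(signature([1000, 12, 0.8]), ["histogram(price)", "correlate(price, size)"], 0.9)
record(signature([200, 40, 0.1]), ["value_counts(category)", "bar(category)"], 0.6)

print(recommend(signature([900, 10, 0.7])))
```

A real implementation would need a far richer use-case signature and a learned insight predictor, which is where the open research questions of the design lie.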
30. Jena A, Butler M, Dwyer T, Ellis K, Engelke U, Kirkham R, Marriott K, Paris C, Rajamanickam V, Rhyne TM. The Next Billion Users of Visualization. IEEE Computer Graphics and Applications 2021; 41:8-16. [PMID: 33729921] [DOI: 10.1109/mcg.2020.3044071]
Abstract
We argue that visualization research has overwhelmingly focused on users from the economically developed world. However, billions of people around the world are rapidly emerging as new users of information technology. Most of the next billion users of visualization technologies will come from parts of the world that are extremely populous but historically ignored by the visualization research community. Their needs may differ from those of the users that researchers have targeted in the past, but, at the same time, they may have even more to gain from access to data that potentially affects their quality of life. We propose a call to action for the visualization community: identify opportunities and use cases where these users can benefit from visualization, develop universal design principles, extend evaluations by including the general population, and engage with a wider global population.
31. Liu J, Dwyer T, Tack G, Gratzl S, Marriott K. Supporting the Problem-Solving Loop: Designing Highly Interactive Optimisation Systems. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1764-1774. [PMID: 33112748] [DOI: 10.1109/tvcg.2020.3030364]
Abstract
Efficient optimisation algorithms have become important tools for finding high-quality solutions to hard, real-world problems such as production scheduling, timetabling, or vehicle routing. These algorithms are typically "black boxes" that work on mathematical models of the problem to solve. However, many problems are difficult to fully specify, and require a "human in the loop" who collaborates with the algorithm by refining the model and guiding the search to produce acceptable solutions. Recently, the Problem-Solving Loop was introduced as a high-level model of such interactive optimisation. Here, we present and evaluate nine recommendations for the design of interactive visualisation tools supporting the Problem-Solving Loop. They range from the choice of visual representation for solutions and constraints to the use of a solution gallery to support exploration of alternate solutions. We first examined the applicability of the recommendations by investigating how well they had been supported in previous interactive optimisation tools. We then evaluated the recommendations in the context of the vehicle routing problem with time windows (VRPTW). To do so we built a sophisticated interactive visual system for solving VRPTW that was informed by the recommendations. Ten participants then used this system to solve a variety of routing problems. We report on participant comments and interaction patterns with the tool. These showed the tool was regarded as highly usable and the results generally supported the usefulness of the underlying recommendations.
|
32
|
Ivson P, Moreira A, Queiroz F, Santos W, Celes W. A Systematic Review of Visualization in Building Information Modeling. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:3109-3127. [PMID: 30932840 DOI: 10.1109/tvcg.2019.2907583] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Building Information Modeling (BIM) employs data-rich 3D CAD models for large-scale facility design, construction, and operation. These complex datasets contain a large amount and variety of information, ranging from design specifications to real-time sensor data. They are used by architects and engineers for various analyses and simulations throughout a facility's life cycle. Many techniques from different visualization fields could be used to analyze these data. However, the BIM domain still remains largely unexplored by the visualization community. The goal of this article is to encourage visualization researchers to increase their involvement with BIM. To this end, we present the results of a systematic review of visualization in current BIM practice. We use a novel taxonomy to identify main application areas and analyze commonly employed techniques. From this domain characterization, we highlight future research opportunities brought forth by the unique features of BIM, such as exploring the synergies between scientific and information visualization to integrate spatial and non-spatial data. We hope this article raises awareness of the interesting new challenges the BIM domain brings to the visualization community.
|
33
|
Ceneda D, Andrienko N, Andrienko G, Gschwandtner T, Miksch S, Piccolotto N, Schreck T, Streit M, Suschnigg J, Tominski C. Guide Me in Analysis: A Framework for Guidance Designers. COMPUTER GRAPHICS FORUM : JOURNAL OF THE EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS 2020; 39:269-288. [PMID: 33041406 PMCID: PMC7539991 DOI: 10.1111/cgf.14017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Guidance is an emerging topic in the field of visual analytics. Guidance can support users in pursuing their analytical goals more efficiently and help make the analysis successful. However, it is not clear how guidance approaches should be designed and what specific factors should be considered for effective support. In this paper, we approach this problem from the perspective of guidance designers. We present a framework comprising requirements and a set of specific phases designers should go through when designing guidance for visual analytics. We relate this process to a set of quality criteria that our framework aims to support and that are necessary for obtaining a suitable and effective guidance solution. To demonstrate the practical usability of our methodology, we apply our framework to the design of guidance in three analysis scenarios and a design walk-through session. Moreover, we list the emerging challenges and report how the framework can be used to design guidance solutions that mitigate these issues.
|
34
|
A. Leite R, Gschwandtner T, Miksch S, Gstrein E, Kuntner J. NEVA: Visual Analytics to Identify Fraudulent Networks. COMPUTER GRAPHICS FORUM : JOURNAL OF THE EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS 2020; 39:344-359. [PMID: 33132468 PMCID: PMC7584106 DOI: 10.1111/cgf.14042] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 02/21/2020] [Indexed: 06/11/2023]
Abstract
Trustworthiness, reputation, security, and quality are the main concerns for public and private financial institutions. To detect fraudulent behaviour, several techniques are applied, pursuing different goals. For well-defined problems, analytical methods are applicable to examine the history of customer transactions. However, fraudulent behaviour is constantly changing, which results in ill-defined problems. Furthermore, analysing the behaviour of individual customers is not sufficient to detect more complex structures such as networks of fraudulent actors. We propose NEVA (Network dEtection with Visual Analytics), a Visual Analytics exploration environment to support the analysis of customer networks in order to reduce false-negative and false-positive fraud alarms. Multiple coordinated views allow for exploring complex relations and dependencies in the data. A guidance-enriched component for network pattern generation, detection, and filtering supports exploring and analysing the relationships of nodes on different levels of complexity. In six expert interviews, we illustrate the applicability and usability of NEVA.
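One very small example of the kind of network pattern such a component might surface is a cycle of transactions that returns to the originating account. The sketch below uses networkx purely as an illustration with invented transactions; the actual patterns and guidance in NEVA are considerably richer.

import networkx as nx

# Hypothetical transaction data: (sender, receiver, amount).
transactions = [
    ("A", "B", 900), ("B", "C", 880), ("C", "A", 860),  # money flows back to A
    ("D", "E", 50),
]

G = nx.DiGraph()
for sender, receiver, amount in transactions:
    G.add_edge(sender, receiver, amount=amount)

# Simple cycles of length >= 3 are one candidate pattern for closer inspection.
suspicious = [cycle for cycle in nx.simple_cycles(G) if len(cycle) >= 3]
print(suspicious)   # e.g. [['A', 'B', 'C']]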
Affiliation(s)
- Roger A. Leite - Faculty of Informatics, Vienna University of Technology (TU Wien), Vienna, Austria
- Silvia Miksch - Faculty of Informatics, Vienna University of Technology (TU Wien), Vienna, Austria
|
35
|
Han Q, Thom D, John M, Koch S, Heimerl F, Ertl T. Visual Quality Guidance for Document Exploration with Focus+Context Techniques. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2715-2731. [PMID: 30676964 DOI: 10.1109/tvcg.2019.2895073] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Magic-lens-based focus+context techniques are powerful means for exploring document spatializations. Typically, they only offer additional summarized or abstracted views of the focused documents. As a consequence, users might miss important information that is either not shown in aggregated form or never happens to get focused. In this work, we present the design process and user study results for improving a magic-lens-based document exploration approach with exemplary visual quality cues that guide users in steering the exploration and support them in interpreting the summarization results. We contribute a thorough analysis of potential sources of information loss involved in these techniques, which include the visual spatialization of text documents, user-steered exploration, and visual summarization. With lessons learned from previous research, we highlight the various ways these information losses can hamper exploration. Furthermore, we formally define measures for the aforementioned types of information loss and bias. Finally, we present the visual cues, seamlessly integrated into the exploration approach, that depict these quality measures. These visual cues guide users during exploration, reduce the risk of misinterpretation, and accelerate insight generation. We conclude with the results of a controlled user study and discuss the benefits and challenges of integrating quality guidance into exploration techniques.
|
36
|
Schlachter M, Preim B, Bühler K, Raidou RG. Principles of Visualization in Radiation Oncology. Oncology 2020; 98:412-422. [PMID: 31940605 DOI: 10.1159/000504940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Accepted: 11/21/2019] [Indexed: 11/19/2022]
Abstract
BACKGROUND Medical visualization employs elements from computer graphics to create meaningful, interactive visual representations of medical data, and it has become an influential field of research for many advanced applications, such as radiation oncology. Visual representations employ the user's cognitive capabilities to support and accelerate diagnostic, planning, and quality assurance workflows based on the involved patient data. SUMMARY This article discusses the basic underlying principles of visualization in the application domain of radiation oncology. The main visualization strategies, such as slice-based representations and surface and volume rendering, are presented. Interaction topics, i.e., the combination of visualization and automated analysis methods, are also discussed. KEY MESSAGES Slice-based representations are a common approach in radiation oncology, while volume visualization also has a long-standing history in the field. Perception within both representations can benefit further from advanced approaches, such as image fusion and multivolume or hybrid rendering. While traditional slice-based and volume representations keep evolving, the dimensionality and complexity of medical data are also increasing. To address this, visual analytics strategies are valuable, particularly for cohort or uncertainty visualization. Interactive visual analytics approaches represent a new opportunity to integrate knowledgeable experts and their cognitive abilities into exploratory processes that cannot be conducted by automated methods alone.
Affiliation(s)
- Bernhard Preim - University of Magdeburg, Magdeburg, Germany; Research Campus STIMULATE, Magdeburg, Germany
|
37
|
Walch A, Schwarzler M, Luksch C, Eisemann E, Gschwandtner T. LightGuider: Guiding Interactive Lighting Design using Suggestions, Provenance, and Quality Visualization. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:569-578. [PMID: 31443004 DOI: 10.1109/tvcg.2019.2934658] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
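The core of the suggestion mechanism, as described, is ranking simulated next modeling steps by quality criteria weighted by the designer's preferences. A minimal, purely illustrative scoring function might look like the following; the criterion names, candidate steps, and weights are invented for the example.

def weighted_quality(simulated_result, weights):
    """Score a simulated modeling step by designer-weighted quality criteria.

    simulated_result -- dict of criterion name -> value in [0, 1]
    weights          -- dict of criterion name -> designer preference weight
    """
    total = sum(weights.values()) or 1.0
    return sum(weights[c] * simulated_result.get(c, 0.0) for c in weights) / total

# Hypothetical candidate next steps and their simulated lighting outcomes.
candidates = {
    "raise luminaire 3": {"uniformity": 0.8, "glare": 0.6, "energy": 0.5},
    "dim luminaire 1":   {"uniformity": 0.7, "glare": 0.9, "energy": 0.8},
}
weights = {"uniformity": 2.0, "glare": 1.0, "energy": 0.5}
best = max(candidates, key=lambda step: weighted_quality(candidates[step], weights))
print(best)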
|
38
|
Krueger R, Beyer J, Jang WD, Kim NW, Sokolov A, Sorger PK, Pfister H. Facetto: Combining Unsupervised and Supervised Learning for Hierarchical Phenotype Analysis in Multi-Channel Image Data. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:227-237. [PMID: 31514138 PMCID: PMC7045445 DOI: 10.1109/tvcg.2019.2934547] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 10⁹ or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology.
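The unsupervised-to-supervised hand-off can be illustrated with generic scikit-learn components: cluster per-cell feature vectors, let the expert curate the clusters, then train a classifier on the curated labels. Facetto itself operates on multi-channel image data with a convolutional network; the sketch below only shows the loop on synthetic tabular features, so treat it as an analogy rather than the system's pipeline.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))      # stand-in for per-cell feature vectors

# 1. Unsupervised step: propose candidate phenotypes.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# 2. Expert curation step (here simulated): keep the cluster ids as phenotype labels.
labels = clusters

# 3. Supervised step: train a classifier to assign new cells to the curated phenotypes.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
new_cells = rng.normal(size=(10, 16))
print(clf.predict(new_cells))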
|
39
|
Behrisch M, Streeb D, Stoffel F, Seebacher D, Matejek B, Weber SH, Mittelstadt S, Pfister H, Keim D. Commercial Visual Analytics Systems-Advances in the Big Data Analytics Field. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:3011-3031. [PMID: 30059307 DOI: 10.1109/tvcg.2018.2859973] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Five years after the first state-of-the-art report on Commercial Visual Analytics Systems, we present a reevaluation of the Big Data Analytics field. We build on the success of the 2012 survey, which was influential even beyond the boundaries of the InfoVis and Visual Analytics (VA) community. While the field has matured significantly since the original survey, we find that innovation and research-driven development are increasingly sacrificed to satisfy a wide range of user groups. We evaluate new product versions on established evaluation criteria, such as available features, performance, and usability, to extend the previous survey and ensure comparability with it. We also investigate previously unavailable products to paint a more complete picture of the commercial VA landscape. Furthermore, we introduce novel measures, such as suitability for specific user groups and the ability to handle complex data types, and undertake a new case study to highlight innovative features. We explore the achievements in the commercial sector in addressing VA challenges and propose novel developments that should be on systems' roadmaps in the coming years.
|
40
|
Han D, Pan J, Guo F, Luo X, Wu Y, Zheng W, Chen W. RankBrushers: interactive analysis of temporal ranking ensembles. J Vis (Tokyo) 2019. [DOI: 10.1007/s12650-019-00598-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
41
|
Eason T, Chuang WC, Sundstrom S, Cabezas H. An information theory-based approach to assessing spatial patterns in complex systems. ENTROPY (BASEL, SWITZERLAND) 2019; 21:182. [PMID: 31402835 PMCID: PMC6688651 DOI: 10.3390/e21020182] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 02/14/2019] [Indexed: 01/14/2023]
Abstract
Given the intensity and frequency of environmental change, the linked and cross-scale nature of social-ecological systems, and the proliferation of big data, methods that can help synthesize complex system behavior over a geographical area are of great value. Fisher information evaluates order in data and has been established as a robust and effective tool for capturing changes in system dynamics, including the detection of regimes and regime shifts. Methods developed to compute Fisher information can accommodate multivariate data of various types and require no a priori decisions about system drivers, making it a unique and powerful tool. However, the approach has primarily been used to evaluate temporal patterns. In its sole application to spatial data, Fisher information successfully detected regimes in terrestrial and aquatic systems over transects. Although the selection of adjacently positioned sampling stations provided a natural means of ordering the data, such an approach limits the types of questions that can be answered in a spatial context. Here, we expand the approach to develop a method for more fully capturing spatial dynamics. Results reflect changes in the index that correspond with geographical patterns and demonstrate the utility of the method in uncovering hidden spatial trends in complex systems.
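For readers unfamiliar with the index, one discrete approximation of Fisher information that is common in this sustainability literature bins the observations within a window into states and sums squared differences of the square-rooted state probabilities. The sketch below uses that generic form over a sliding window along a transect; it is an assumed, simplified variant for illustration and not necessarily the exact formulation applied by the authors.

import numpy as np

def fisher_information(window, n_states=10):
    """Discrete Fisher information of the observations in one window.

    Bins the (1-D) observations into states, estimates state probabilities p_i,
    and applies FI ~= 4 * sum_i (sqrt(p_i) - sqrt(p_{i+1}))^2.
    """
    counts, _ = np.histogram(window, bins=n_states)
    p = counts / counts.sum()
    q = np.sqrt(p)
    return 4.0 * np.sum(np.diff(q) ** 2)

# Sliding window over a synthetic transect with a regime shift halfway along.
rng = np.random.default_rng(0)
transect = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
fi = [fisher_information(transect[i:i + 50]) for i in range(0, len(transect) - 50, 10)]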
Affiliation(s)
- Tarsha Eason - National Risk Management Research Laboratory, U.S. Environmental Protection Agency, Research Triangle Park, NC 27711, USA
- Wen-Ching Chuang - National Research Council, U.S. Environmental Protection Agency, 26 W. Martin Luther King Drive, Cincinnati, OH 45268, USA
- Shana Sundstrom - School of Natural Resources, University of Nebraska-Lincoln, 103 Hardin Hall, 3310 Holdrege St., Lincoln, NE 68583, USA
- Heriberto Cabezas - National Risk Management Research Laboratory, U.S. Environmental Protection Agency, Cincinnati, OH 45268, USA; Institute for Process Systems Engineering and Sustainability, Pazmany Peter Catholic University, Szentkiralyi utca 28, H-1088 Budapest, Hungary
|
42
|
Westendorf L, Shaer O, Pollalis C, Verish C, Nov O, Ball MP. Exploring Genetic Data Across Individuals: Design and Evaluation of a Novel Comparative Report Tool. J Med Internet Res 2018; 20:e10297. [PMID: 30249582 PMCID: PMC6231826 DOI: 10.2196/10297] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/10/2018] [Accepted: 07/16/2018] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND The growth in the availability of personal genomic data to nonexperts poses multiple challenges to human-computer interaction research; the data are highly sensitive, complex, and have health implications for individuals and families. However, there has been little research on how nonexpert users explore their genomic data. OBJECTIVE We focus on how to support nonexperts in exploring and comparing their own personal genomic report with those of other people. We designed and evaluated CrossGenomics, a novel tool for comparing personal genetic reports, which enables exploration of shared and unshared genetic variants. Focusing on communicating comparative impact, rarity, and certainty, we evaluated alternative novel interactive prototypes. METHODS We conducted 3 user studies. The first focuses on assessing the usability and understandability of a prototype that facilitates the comparison of reports from 2 family members. Following a design iteration, we studied how various prototypes support the comparison of genetic reports of a 4-person family. Finally, we evaluated the needs of early adopters, that is, people who share their genetic reports publicly, for comparing their own genetic reports with those of others. RESULTS In the first study, sunburst- and Venn-based comparisons of two genomes led to significantly higher domain comprehension compared with the linear comparison and with the commonly used tabular format. However, results show gaps between objective and subjective comprehension, as sunburst users reported significantly lower perceived understanding and higher levels of confusion than the users of the tabular report. In the second study, users who were allowed to switch between the different comparison views showed higher comprehension levels, as well as more complex reasoning, than users who were limited to a single comparison view. In the third study, 35% (17/49) of participants reported learning something new from comparing their own data with another person's data. Users indicated that filtering and toggling between comparison views were the most useful features. CONCLUSIONS Our findings (1) highlight features and visualizations that show strengths in facilitating user comprehension of genomic data, (2) demonstrate the value of affording users the flexibility to examine the same report using multiple views, and (3) emphasize users' needs in the comparison of genomic data. We conclude with design implications for engaging nonexperts with complex multidimensional genomic data.
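At its core, the comparison of shared and unshared variants is a set operation over variant identifiers annotated with impact and rarity. A toy sketch of that step follows; the annotations are invented and the variant identifiers are used only as examples, so this is not the CrossGenomics data model.

# Hypothetical per-person variant reports: variant id -> annotation.
person_a = {"rs1801133": {"impact": "moderate", "frequency": 0.31},
            "rs429358":  {"impact": "high",     "frequency": 0.15}}
person_b = {"rs1801133": {"impact": "moderate", "frequency": 0.31},
            "rs7412":    {"impact": "high",     "frequency": 0.08}}

shared = person_a.keys() & person_b.keys()
only_a = person_a.keys() - person_b.keys()
only_b = person_b.keys() - person_a.keys()

# Rarity-first ordering of shared variants, mirroring the emphasis on impact and rarity.
shared_sorted = sorted(shared, key=lambda v: person_a[v]["frequency"])
print(shared_sorted, sorted(only_a), sorted(only_b))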
Affiliation(s)
- Lauren Westendorf - Human-Computer Interaction Lab, Computer Science, Wellesley College, Wellesley, MA, United States
- Orit Shaer - Human-Computer Interaction Lab, Computer Science, Wellesley College, Wellesley, MA, United States
- Christina Pollalis - Human-Computer Interaction Lab, Computer Science, Wellesley College, Wellesley, MA, United States
- Clarissa Verish - Human-Computer Interaction Lab, Computer Science, Wellesley College, Wellesley, MA, United States
- Oded Nov - Department of Technology Management & Innovation, New York University, Tandon School of Engineering, Brooklyn, NY, United States
|
43
|
Miao H, Klein T, Kouřil D, Mindek P, Schatz K, Gröller ME, Kozlíková B, Isenberg T, Viola I. Multiscale Molecular Visualization. J Mol Biol 2018; 431:1049-1070. [PMID: 30227136 DOI: 10.1016/j.jmb.2018.09.004] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Revised: 08/28/2018] [Accepted: 09/05/2018] [Indexed: 02/07/2023]
Abstract
We provide a high-level survey of multiscale molecular visualization techniques, with a focus on application-domain questions, challenges, and tasks. We provide a general introduction to molecular visualization basics and describe a number of domain-specific tasks that drive this work. These tasks, in turn, serve as the general structure of the following survey. First, we discuss methods that support the visual analysis of molecular dynamics simulations. We discuss, in particular, visual abstraction and temporal aggregation. In the second part, we survey multiscale approaches that support the design, analysis, and manipulation of DNA nanostructures and related concepts for abstraction, scale transition, scale-dependent modeling, and navigation of the resulting abstraction spaces. In the third part of the survey, we showcase approaches that support interactive exploration within large structural biology assemblies up to the size of bacterial cells. We describe fundamental rendering techniques as well as approaches for element instantiation, visibility management, visual guidance, camera control, and support of depth perception. We close the survey with a brief listing of important tools that implement many of the discussed approaches and a conclusion that provides some research challenges in the field.
|
44
|
Law PM, Basole RC, Wu Y. Duet: Helping Data Analysis Novices Conduct Pairwise Comparisons by Minimal Specification. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 25:427-437. [PMID: 30130204 DOI: 10.1109/tvcg.2018.2864526] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Data analysis novices often encounter barriers in executing low-level operations for pairwise comparisons. They may also run into barriers in interpreting the artifacts (e.g., visualizations) created as a result of the operations. We developed Duet, a visual analysis system designed to help data analysis novices conduct pairwise comparisons by addressing execution and interpretation barriers. To reduce the barriers in executing low-level operations during pairwise comparison, Duet employs minimal specification: when one object group (i.e. a group of records in a data table) is specified, Duet recommends object groups that are similar to or different from the specified one; when two object groups are specified, Duet recommends similar and different attributes between them. To lower the barriers in interpreting its recommendations, Duet explains the recommended groups and attributes using both visualizations and textual descriptions. We conducted a qualitative evaluation with eight participants to understand the effectiveness of Duet. The results suggest that minimal specification is easy to use and Duet's explanations are helpful for interpreting the recommendations despite some usability issues.
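The minimal-specification idea can be approximated with simple statistics: given two specified object groups, rank attributes by how differently they are distributed between the groups. The sketch below uses a standardized mean difference as the ranking score; it is a plausible stand-in under that assumption, whereas Duet's actual similarity measures, recommendations, and explanations are more elaborate.

import numpy as np
import pandas as pd

def rank_attributes(df: pd.DataFrame, group_a, group_b, numeric_cols):
    """Rank numeric attributes by how differently groups A and B are distributed.

    group_a, group_b -- boolean masks selecting rows of df
    Returns attribute names sorted from most different to most similar.
    """
    scores = {}
    for col in numeric_cols:
        a, b = df.loc[group_a, col], df.loc[group_b, col]
        pooled = np.sqrt((a.var() + b.var()) / 2)
        scores[col] = abs(a.mean() - b.mean()) / pooled if pooled > 0 else 0.0
    return sorted(scores, key=scores.get, reverse=True)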
|
45
|
Sacha D, Kraus M, Bernard J, Behrisch M, Schreck T, Asano Y, Keim DA. SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:120-130. [PMID: 28866559 DOI: 10.1109/tvcg.2017.2744805] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date; however, arriving at useful clusterings often requires several rounds of user interaction to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement, together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and reflect on previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing analysts to compare earlier cluster refinements and explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair-analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as of the interactive process itself.
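To make the SOM building block concrete, the sketch below trains a tiny self-organizing map with plain NumPy and assigns each series to its best-matching node. This is a generic textbook-style SOM without learning-rate decay; SOMFlow's iterative partition-and-refine workflow and provenance graph sit on top of such a step rather than being shown here.

import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr=0.5, sigma=1.0, seed=0):
    """Train a small self-organizing map; returns the weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit for this sample.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Neighbourhood-weighted update pulls nearby nodes toward the sample.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

def assign(data, weights):
    """Partition the data by best-matching SOM node."""
    flat = weights.reshape(-1, weights.shape[-1])
    return np.argmin(np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1), axis=1)

# Example: 300 short series of length 24, assumed standardized beforehand.
data = np.random.default_rng(1).normal(size=(300, 24))
node_of_series = assign(data, train_som(data))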
|
46
|
Pezzotti N, Hollt T, Van Gemert J, Lelieveldt BPF, Eisemann E, Vilanova A. DeepEyes: Progressive Visual Analytics for Designing Deep Neural Networks. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:98-108. [PMID: 28866543 DOI: 10.1109/tvcg.2017.2744358] [Citation(s) in RCA: 52] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.
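One of the problems DeepEyes helps spot, superfluous filters, can be approximated offline by inspecting per-filter activation statistics: filters whose activations are near zero or near constant across a batch contribute little. The sketch below flags such filters from a recorded activation tensor; it is a simplification for illustration, not the paper's actual progressive analyses or visualizations.

import numpy as np

def flag_superfluous_filters(activations, dead_tol=1e-3, var_tol=1e-4):
    """Flag filters that are candidates for removal.

    activations -- array of shape (batch, filters, height, width)
    A filter is flagged if it is almost never active or almost constant.
    """
    per_filter = activations.transpose(1, 0, 2, 3).reshape(activations.shape[1], -1)
    mean_act = np.abs(per_filter).mean(axis=1)
    var_act = per_filter.var(axis=1)
    return np.where((mean_act < dead_tol) | (var_act < var_tol))[0]

# Example: 8 filters, two of which are artificially zeroed out.
acts = np.random.default_rng(0).random((32, 8, 14, 14))
acts[:, [2, 5]] = 0.0
print(flag_superfluous_filters(acts))   # [2 5]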
|
47
|
|