1
Braun D, Chang R, Gleicher M, von Landesberger T. Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots. IEEE Transactions on Visualization and Computer Graphics 2025; 31:787-797. PMID: 39255144. DOI: 10.1109/tvcg.2024.3456305.
Abstract
Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are "too steep" in both cases. This led to the novel insight that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.
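The ODR-versus-OLS distinction above can be made concrete in a few lines. This is an illustrative sketch (not the paper's code) using SciPy's orthogonal distance regression on synthetic data with vertical noise only; the sample sizes and noise levels are arbitrary assumptions:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 1.5 * x + rng.normal(0, 0.5, 200)  # true slope 1.5, noise only in y

# OLS: minimizes vertical distances from points to the line.
ols_slope = np.polyfit(x, y, 1)[0]

# ODR: minimizes orthogonal (perpendicular) distances.
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
fit = odr.ODR(odr.Data(x, y), linear, beta0=[1.0, 0.0]).run()
odr_slope = fit.beta[0]

# ODR attributes part of the scatter to x as well, so its fitted line
# is steeper than the OLS line -- matching the "too steep" bias reported.
print(ols_slope, odr_slope)
```

For the same scattered data, the total-least-squares (ODR) slope always has magnitude at least that of the OLS slope, which is why a viewer judging perpendicular distances would accept steeper lines than OLS would produce.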
2
Chen M, Liu Y, Wall E. Unmasking Dunning-Kruger Effect in Visual Reasoning & Judgment. IEEE Transactions on Visualization and Computer Graphics 2025; 31:743-753. PMID: 39288064. DOI: 10.1109/tvcg.2024.3456326.
Abstract
The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and judgment tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual reasoning and judgment tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility to DKE with personality traits and user interactions. Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits. All materials and analyses are in supplemental materials: https://github.com/CAV-Lab/DKE_supplemental.git.
3
Koonchanok R, Papka ME, Reda K. Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations. IEEE Transactions on Visualization and Computer Graphics 2025; 31:754-764. PMID: 39259631. DOI: 10.1109/tvcg.2024.3456182.
Abstract
People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Prior research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making. The data and materials for this paper are available at https://osf.io/qmfv6.
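The "optimal Bayesian agent" baseline described above can be sketched for a simple case. This is a hedged illustration, not the paper's actual models: an agent estimating the mean of a known normal data-generating process shrinks the raw sample mean toward its prior, which is how an internal model can "filter out" an extreme sample (all parameter values here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Known data-generating model: y ~ Normal(mu, sigma=1), with prior mu ~ Normal(0, 1).
sigma, prior_var = 1.0, 1.0
mu_true = 0.8
sample = rng.normal(mu_true, sigma, 25)

# Conjugate posterior for a normal mean with known variance:
# posterior precision = prior precision + n / sigma^2.
n = len(sample)
post_var = 1.0 / (1.0 / prior_var + n / sigma**2)
post_mean = post_var * (n * sample.mean() / sigma**2)  # prior mean is 0

# The Bayesian agent shrinks the raw sample mean toward the prior,
# partially discounting an extreme (unlucky) sample.
print(sample.mean(), post_mean)
```

A human analyst's "gut reaction" plays an analogous role: prior expectations damp the influence of a spurious-looking visualization, though (per the findings above) without the calibrated uncertainty the posterior variance provides.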
4
Masotina M, Musi E, Yates S. Relevance theory for mapping cognitive biases in fact-checking: an argumentative approach. Frontiers in Psychology 2024; 15:1468879. PMID: 39726627. PMCID: PMC11670370. DOI: 10.3389/fpsyg.2024.1468879.
Abstract
In the fast-paced, densely populated information landscape shaped by digitization, distinguishing information from misinformation is critical. Fact-checkers are effective in fighting fake news but face challenges such as cognitive overload and time pressure, which increase susceptibility to cognitive biases. Establishing standards to mitigate these biases can improve the quality of fact-checks, bolster audience trust, and protect against reputation attacks from disinformation actors. While previous research has focused on audience biases, we propose a novel approach grounded in relevance theory and the argumentum model of topics to identify (i) the biases intervening in the fact-checking process, (ii) their triggers, and (iii) at what level of reasoning they act. We showcase the predictive power of our approach through a multimethod case study involving a semi-automatic literature review, a fact-checking simulation with 12 news practitioners, and an online survey involving 40 journalists and fact-checkers. The study highlights the distinction between biases triggered by relevance by effort and by effect, offering a taxonomy of cognitive biases and a method to map them within decision-making processes. These insights can inform training to enhance fact-checkers' critical thinking skills, improving the quality and trustworthiness of fact-checking practices.
Affiliation(s)
- Mariavittoria Masotina: Department of Communication and Media, University of Liverpool, Liverpool, United Kingdom; Digital Media and Society Institute, University of Liverpool, Liverpool, United Kingdom
- Elena Musi: Department of Communication and Media, University of Liverpool, Liverpool, United Kingdom; Digital Media and Society Institute, University of Liverpool, Liverpool, United Kingdom
- Simeon Yates: Department of Communication and Media, University of Liverpool, Liverpool, United Kingdom; Digital Media and Society Institute, University of Liverpool, Liverpool, United Kingdom
5
Oral B, Dragicevic P, Telea A, Dimara E. Decoupling Judgment and Decision Making: A Tale of Two Tails. IEEE Transactions on Visualization and Computer Graphics 2024; 30:6928-6940. PMID: 38145516. DOI: 10.1109/tvcg.2023.3346640.
Abstract
Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms "judgment" and "decision making" are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N = 601) to investigate if the task (judgment versus decision making), the scenario (sports versus humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.
6
Holder E, Bearfield CX. Polarizing Political Polls: How Visualization Design Choices Can Shape Public Opinion and Increase Political Polarization. IEEE Transactions on Visualization and Computer Graphics 2024; 30:1446-1456. PMID: 37871081. DOI: 10.1109/tvcg.2023.3326512.
Abstract
While we typically focus on data visualizations as tools for facilitating cognitive tasks (e.g., learning facts, making decisions), we know relatively little about their second-order impacts on our opinions, attitudes, and values. For example, could design or framing choices interact with viewers' social cognitive biases in ways that promote political polarization? When reporting on U.S. attitudes toward public policies, it is popular to highlight the gap between Democrats and Republicans (e.g., with blue vs. red connected dot plots). But these charts may encourage social-normative conformity, influencing viewers' attitudes to match the divided opinions shown in the visualization. We conducted three experiments examining visualization framing in the context of social conformity and polarization. Crowdworkers viewed charts showing simulated polling results for public policy proposals. We varied framing (aggregating data as non-partisan "All US Adults," or partisan "Democrat" / "Republican") and the visualized groups' support levels. Participants then reported their own support for each policy. We found that participants' attitudes shifted significantly toward the group attitudes shown in the stimuli, and that this can increase inter-party attitude divergence. These results demonstrate that data visualizations can induce social conformity and accelerate political polarization. Choosing to visualize partisan divisions can divide us further.
7
Flemming DJ, White C, Fox E, Fanburg-Smith J, Cochran E. Diagnostic errors in musculoskeletal oncology and possible mitigation strategies. Skeletal Radiology 2023; 52:493-503. PMID: 36048252. DOI: 10.1007/s00256-022-04166-7.
Abstract
The objective of this paper is to explore sources of diagnostic error in musculoskeletal oncology and potential strategies for mitigating them using case examples. As musculoskeletal tumors are often obvious, the diagnostic errors in musculoskeletal oncology are frequently cognitive. In our experience, the most frequently encountered cognitive biases in musculoskeletal oncologic imaging are as follows: (1) anchoring bias, (2) premature closure, (3) hindsight bias, (4) availability bias, and (5) alliterative bias. Anchoring bias results from failing to adjust an early impression despite receiving additional contrary information. Premature closure is the cognitive equivalent of "satisfaction of search." Hindsight bias occurs when we retrospectively overestimate the likelihood of correctly interpreting the examination prospectively. In availability bias, the radiologist judges the probability of a diagnosis based on which diagnosis is most easily recalled. Finally, alliterative bias occurs when a prior radiologist's impression overly influences the diagnostic thinking of another radiologist on a subsequent exam. In addition to cognitive biases, it is also important for radiologists to acknowledge their feelings when making a diagnosis, so as to recognize the positive and negative impacts of affect on decision making. While errors decrease with radiologist experience, the primary source of error is often a failure to apply medical knowledge rather than a deficiency of knowledge, emphasizing the need to foster clinical reasoning skills and assist cognition. Possible solutions for reducing error exist at both the individual and the system level and include (1) improvement in knowledge and experience, (2) improvement in clinical reasoning and decision-making skills, and (3) improvement in assisting cognition.
Affiliation(s)
- Donald J Flemming: Department of Radiology, Penn State Health Milton S. Hershey Medical Center, 500 University Drive H066, Hershey, PA, 17033, USA
- Carissa White: Department of Radiology, Penn State Health Milton S. Hershey Medical Center, 500 University Drive H066, Hershey, PA, 17033, USA
- Edward Fox: Department of Orthopaedics, Penn State Health Milton S. Hershey Medical Center, Hershey, PA, USA
- Julie Fanburg-Smith: Department of Pathology, Penn State Health Milton S. Hershey Medical Center, Hershey, PA, USA
- Eric Cochran: Department of Pathology, Penn State Health Milton S. Hershey Medical Center, Hershey, PA, USA
8
Ha S, Monadjemi S, Garnett R, Ottley A. A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias. IEEE Transactions on Visualization and Computer Graphics 2023; 29:483-492. PMID: 36155457. DOI: 10.1109/tvcg.2022.3209476.
Abstract
The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill this gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.
9
Dimara E, Zhang H, Tory M, Franconeri S. The Unmet Data Visualization Needs of Decision Makers Within Organizations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:4101-4112. PMID: 33872153. DOI: 10.1109/tvcg.2021.3074023.
Abstract
When an organization chooses one course of action over alternatives, this task typically falls on a decision maker with relevant knowledge, experience, and understanding of context. Decision makers rely on data analysis, which is either delegated to analysts, or done on their own. Often the decision maker combines data, likely uncertain or incomplete, with non-formalized knowledge within a multi-objective problem space, weighing the recommendations of analysts within broader contexts and goals. As most past research in visual analytics has focused on understanding the needs and challenges of data analysts, less is known about the tasks and challenges of organizational decision makers, and how visualization support tools might help. Here we characterize the decision maker as a domain expert, review relevant literature in management theories, and report the results of an empirical survey and interviews with people who make organizational decisions. We identify challenges and opportunities for novel visualization tools, including trade-off overviews, scenario-based analysis, interrogation tools, flexible data input and collaboration support. Our findings stress the need to expand visualization design beyond data analysis into tools for information management.
10
El-Assady M, Moruzzi C. Which Biases and Reasoning Pitfalls Do Explanations Trigger? Decomposing Communication Processes in Human-AI Interaction. IEEE Computer Graphics and Applications 2022; 42:11-23. PMID: 36094981. DOI: 10.1109/mcg.2022.3200328.
Abstract
Collaborative human-AI problem-solving and decision making rely on effective communication between the two agents. Such communication processes comprise explanations and interactions between a sender and a receiver. Investigating these dynamics is crucial to avoid miscommunication problems. Hence, in this article, we propose a communication dynamics model, examining the impact of the sender's explanation intention and strategy on the receiver's perception of explanation effects. We further present potential biases and reasoning pitfalls with the aim of contributing to the design of hybrid intelligence systems. Finally, we propose six desiderata for human-centered explainable AI and discuss future research opportunities.
11
Procopio M, Mosca A, Scheidegger C, Wu E, Chang R. Impact of Cognitive Biases on Progressive Visualization. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3093-3112. PMID: 33434132. DOI: 10.1109/tvcg.2021.3051013.
Abstract
Progressive visualization is fast gaining traction in the visualization community as a technique to help users interact with large amounts of data. With progressive visualization, users can examine intermediate results of complex or long-running computations, without waiting for the computation to complete. While this has been shown to be beneficial to users, recent research has identified potential risks. For example, users may misjudge the uncertainty in the intermediate results and draw incorrect conclusions or see patterns that are not present in the final results. In this article, we conduct a comprehensive set of studies to quantify the advantages and limitations of progressive visualization. Based on a recent report by Micallef et al., we examine four types of cognitive biases that can occur with progressive visualization: uncertainty bias, illusion bias, control bias, and anchoring bias. The results of the studies suggest a cautious but promising use of progressive visualization: while there can be significant savings in task completion time, accuracy can be negatively affected in certain conditions. These findings confirm earlier reports of the benefits and drawbacks of progressive visualization and show that continued research into mitigating the effects of cognitive biases is necessary.
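The core mechanism of progressive visualization described above can be sketched simply: a long computation emits intermediate estimates, each with an uncertainty bound that shrinks as more data arrive. This is an illustrative toy (not the studies' stimuli); the chunk size and data distribution are arbitrary assumptions:

```python
import math
import random

def progressive_mean(stream, chunk=100):
    """Yield (running mean, approx. 95% CI half-width) after each chunk."""
    n, total, total_sq = 0, 0.0, 0.0
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) == chunk:
            for v in buf:
                n += 1
                total += v
                total_sq += v * v
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            yield mean, 1.96 * math.sqrt(var / n)  # intermediate result
            buf = []

random.seed(0)
data = [random.gauss(5, 2) for _ in range(10_000)]
estimates = list(progressive_mean(iter(data)))

# Early intermediate results are much less certain than late ones --
# the gap a viewer must not misjudge (the "uncertainty bias" above).
first, last = estimates[0], estimates[-1]
print(first, last)
```

Each yielded pair is what a progressive chart would render as it arrives; the risk the studies quantify is that viewers treat a wide early interval as if it were the final, narrow one.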
12
Dimara E, Stasko J. A Critical Reflection on Visualization Research: Where Do Decision Making Tasks Hide? IEEE Transactions on Visualization and Computer Graphics 2022; 28:1128-1138. PMID: 34587049. DOI: 10.1109/tvcg.2021.3114813.
Abstract
It has been widely suggested that a key goal of visualization systems is to assist decision making, but is this true? We conduct a critical investigation on whether the activity of decision making is indeed central to the visualization domain. By approaching decision making as a user task, we explore the degree to which decision tasks are evident in visualization research and user studies. Our analysis suggests that decision tasks are not commonly found in current visualization task taxonomies and that the visualization field has yet to leverage guidance from decision theory domains on how to study such tasks. We further found that the majority of visualizations addressing decision making were not evaluated based on their ability to assist decision tasks. Finally, to help expand the impact of visual analytics in organizational as well as casual decision making activities, we initiate a research agenda on how decision making assistance could be elevated throughout visualization research.
13
Impacts of Visualizations on Decoy Effects. International Journal of Environmental Research and Public Health 2021; 18:12674. PMID: 34886398. PMCID: PMC8657019. DOI: 10.3390/ijerph182312674.
Abstract
The decoy effect is a well-known, intriguing decision-making bias that is often exploited by marketing practitioners to steer consumers towards a desired purchase outcome. It demonstrates that the inclusion of an alternative in the choice set can alter one's preference among the other choices. Although the decoy effect has been universally observed in the real world and studied by many economists and psychologists, little is known about how to mitigate it and help consumers make informed decisions. In this study, we conducted two experiments: a quantitative crowdsourcing experiment and a qualitative interview study. First, the crowdsourcing experiment examined whether visual interfaces can help alleviate this cognitive bias. Four types of visualizations (one-sided bar charts, two-sided bar charts, scatterplots, and parallel-coordinates plots) were evaluated with four different types of scenarios. The results demonstrated that the two types of bar charts were effective in decreasing the decoy effect. Second, we conducted a semi-structured interview study to gain a deeper understanding of decision-making strategies while making a choice. We believe the results have implications for how visualizations can impact the decision-making process in our everyday life.
14
Making time/breaking time: critical literacy and politics of time in data visualisation. Journal of Documentation 2021. DOI: 10.1108/jd-12-2020-0210.
Abstract
Purpose: Representations of time are commonly used to construct narratives in visualisations of data. However, since time is a value-laden concept, and no representation can provide a full, objective account of "temporal reality", they are also biased and political: reproducing and reinforcing certain views and values at the expense of alternative ones. This conceptual paper aims to explore expressions of temporal bias and politics in data visualisation, along with possibly mitigating user approaches and design strategies.
Design/methodology/approach: This study presents a theoretical framework rooted in a sociotechnical view of representations as biased and political, combined with perspectives from critical literacy, radical literacy and critical design. The framework provides a basis for discussion of various types and effects of temporal bias in visualisation. Empirical examples from previous research and public resources illustrate the arguments.
Findings: Four types of political effects of temporal bias in visualisations are presented: limitation of view, disregard of variation, oppression of social groups and misrepresentation of topic. The findings suggest that appropriate critical and radical literacy approaches require users and designers to critique, contextualise, counter and cross beyond expressions of the same. Supporting critical design strategies involve the inclusion of multiple datasets and representations; broad access to flexible tools; and inclusive participation of marginalised groups.
Originality/value: The paper draws attention to a vital, yet little researched problem of temporal representation in visualisations of data. It offers a pioneering bridging of critical literacy, radical literacy and critical design and emphasises mutual rather than contradictory interests of the empirical sciences and humanities.
15
Chatzimparmpas A, Martins RM, Kucher K, Kerren A. StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1547-1557. PMID: 33048687. DOI: 10.1109/tvcg.2020.3030352.
Abstract
In machine learning (ML), ensemble methods such as bagging, boosting, and stacking are widely established approaches that regularly achieve top-notch predictive performance. Stacking (also called "stacked generalization") is an ensemble method that combines heterogeneous base models, arranged in at least one layer, and then employs another metamodel to summarize the predictions of those models. Although it may be a highly effective approach for increasing the predictive performance of ML, generating a stack of models from scratch can be a cumbersome trial-and-error process. This challenge stems from the enormous space of available solutions, with different sets of data instances and features that could be used for training, several algorithms to choose from, and instantiations of these algorithms using diverse parameters (i.e., models) that perform differently according to various metrics. In this work, we present a knowledge generation model, which supports ensemble learning with the use of visualization, and a visual analytics system for stacked generalization. Our system, StackGenVis, assists users in dynamically adapting performance metrics, managing data instances, selecting the most important features for a given data set, choosing a set of top-performing and diverse algorithms, and measuring the predictive performance. In consequence, our proposed tool helps users to decide between distinct models and to reduce the complexity of the resulting stack by removing overpromising and underperforming models. The applicability and effectiveness of StackGenVis are demonstrated with two use cases: a real-world healthcare data set and a collection of data related to sentiment/stance detection in texts. Finally, the tool has been evaluated through interviews with three ML experts.
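Stacked generalization as described above can be sketched with scikit-learn. This is an illustrative toy, not the paper's pipeline: StackGenVis is a visual analytics system layered on top of such stacks, and the base models and dataset here are arbitrary choices for demonstration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One layer of heterogeneous base models...
base = [("rf", RandomForestClassifier(random_state=0)),
        ("svc", SVC(probability=True, random_state=0))]

# ...whose predictions a metamodel (logistic regression) summarizes.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(round(acc, 3))
```

The trial-and-error cost the abstract mentions shows up here as the choice of `base`, each model's hyperparameters, and the metamodel; a tool like StackGenVis aims to make exploring that space visual rather than manual.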
16
Boukhelifa N, Bezerianos A, Chang R, Collins C, Drucker S, Endert A, Hullman J, North C, Sedlmair M, Rhyne TM. Challenges in Evaluating Interactive Visual Machine Learning Systems. IEEE Computer Graphics and Applications 2020; 40:88-96. PMID: 33095702. DOI: 10.1109/mcg.2020.3017064.
Abstract
In interactive visual machine learning (IVML), humans and machine learning algorithms collaborate to achieve tasks mediated by interactive visual interfaces. This human-in-the-loop approach to machine learning brings forth not only numerous intelligibility, trust, and usability issues, but also many open questions with respect to the evaluation of the IVML system, both as separate components, and as a holistic entity that includes both human and machine intelligence. This article describes the challenges and research gaps identified in an IEEE VIS workshop on the evaluation of IVML systems.
17
Korporaal M, Ruginski IT, Fabrikant SI. Effects of Uncertainty Visualization on Map-Based Decision Making Under Time Pressure. Frontiers in Computer Science 2020. DOI: 10.3389/fcomp.2020.00032.
18
Han Q, Thom D, John M, Koch S, Heimerl F, Ertl T. Visual Quality Guidance for Document Exploration with Focus+Context Techniques. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2715-2731. PMID: 30676964. DOI: 10.1109/tvcg.2019.2895073.
Abstract
Magic lens based focus+context techniques are powerful means for exploring document spatializations. Typically, they only offer additional summarized or abstracted views on focused documents. As a consequence, users might miss important information that is either not shown in aggregated form or that never happens to get focused. In this work, we present the design process and user study results for improving a magic lens based document exploration approach with exemplary visual quality cues to guide users in steering the exploration and support them in interpreting the summarization results. We contribute a thorough analysis of potential sources of information loss involved in these techniques, which include the visual spatialization of text documents, user-steered exploration, and the visual summarization. With lessons learned from previous research, we highlight the various ways those information losses could hamper the exploration. Furthermore, we formally define measures for the aforementioned different types of information losses and bias. Finally, we present the visual cues to depict these quality measures that are seamlessly integrated into the exploration approach. These visual cues guide users during the exploration and reduce the risk of misinterpretation and accelerate insight generation. We conclude with the results of a controlled user study and discuss the benefits and challenges of integrating quality guidance in exploration techniques.
Collapse
|
19
|
Representing Data Visualization Goals and Tasks through Meta-Modeling to Tailor Information Dashboards. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10072306] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Information dashboards are everywhere. They support knowledge discovery in a huge variety of contexts and domains. Although powerful, these tools can be complex, not only for end-users but also for developers and designers. Information dashboards encode complex datasets into different visual marks to ease knowledge discovery. Choosing a wrong design could compromise the entire dashboard’s effectiveness, so selecting the appropriate encoding or configuration for each potential context, user, or data domain is a crucial task. For these reasons, it is necessary to automate the recommendation of visualizations and dashboard configurations in order to deliver tools adapted to their context. Recommendations can be based on different aspects, such as user characteristics, the data domain, or the goals and tasks that will be achieved or carried out through the visualizations. This work presents a dashboard meta-model that abstracts all these factors, along with the integration of a visualization task taxonomy to account for the different actions that can be performed with information dashboards. This meta-model has been used to design a domain-specific language for specifying dashboard requirements in a structured way. The ultimate goal is to obtain a dashboard generation pipeline that delivers dashboards adapted to any context, such as the educational context, in which a lot of data are generated and several actors are involved (students, teachers, managers, etc.) who would want to reach different insights regarding their learning performance or learning methodologies.
Collapse
|
20
|
Borland D, Wang W, Zhang J, Shrestha J, Gotz D. Selection Bias Tracking and Detailed Subset Comparison for High-Dimensional Data. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:429-439. [PMID: 31442975 DOI: 10.1109/tvcg.2019.2934209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The collection of large, complex datasets has become common across a wide variety of domains. Visual analytics tools increasingly play a key role in exploring and answering complex questions about these large datasets. However, many visualizations are not designed to concurrently visualize the large number of dimensions present in complex datasets (e.g., tens of thousands of distinct codes in an electronic health record system). This fact, combined with the ability of many visual analytics systems to enable rapid, ad-hoc specification of groups, or cohorts, of individuals based on a small subset of visualized dimensions, leads to the possibility of introducing selection bias: when the user creates a cohort based on a specified set of dimensions, differences across many other unseen dimensions may also be introduced. These unintended side effects may result in the cohort no longer being representative of the larger population intended to be studied, which can negatively affect the validity of subsequent analyses. We present techniques for selection bias tracking and visualization that can be incorporated into high-dimensional exploratory visual analytics systems, with a focus on medical data with existing data hierarchies. These techniques include: (1) tree-based cohort provenance and visualization, including a user-specified baseline cohort against which all other cohorts are compared, and visual encoding of cohort "drift", which indicates where selection bias may have occurred; and (2) a set of visualizations, including a novel icicle-plot-based visualization, to compare in detail the per-dimension differences between the baseline and a user-specified focus cohort. These techniques are integrated into a medical temporal event sequence visual analytics tool. We present example use cases and report findings from domain expert user interviews.
Collapse
|