1. Chen M, Liu Y, Wall E. Unmasking Dunning-Kruger Effect in Visual Reasoning & Judgment. IEEE Transactions on Visualization and Computer Graphics 2025; 31:743-753. PMID: 39288064; DOI: 10.1109/tvcg.2024.3456326.
Abstract
The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon in which low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate theirs. This effect has been observed in a number of domains, including humor, grammar, and logic. In this paper, we explore whether and how DKE manifests in visual reasoning and judgment tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual reasoning and judgment tasks: those who performed best underestimated their performance, while bottom performers overestimated theirs. In addition, we contribute novel analyses that correlate susceptibility to DKE with personality traits and user interactions. Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits. All materials and analyses are in the supplemental materials: https://github.com/CAV-Lab/DKE_supplemental.git.
2. Hu S, Jiang O, Riedmiller J, Bearfield CX. Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series. IEEE Transactions on Visualization and Computer Graphics 2025; 31:163-173. PMID: 39250377; DOI: 10.1109/tvcg.2024.3456405.
Abstract
Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. Several techniques have been shown to improve the processing of dynamic displays: staging the animation to sequentially show steps in a transition, and tracing object movement by displaying trajectory histories, can reduce cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing, and history, and compared the results to a static chart as a control. Results showed that tracing and staging were preferred by participants and improved their performance in the mean and variance tasks, respectively. Participants also preferred display times three times shorter when staging was used. In addition, encoding animation speed with mean and variance in congruent tasks was associated with higher accuracy. These findings help inform real-world best practices for building dynamic displays. The supplementary materials can be found at https://osf.io/8c95v/.
3. Stokes C, Bearfield CX, Hearst MA. The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions. IEEE Transactions on Visualization and Computer Graphics 2024; 30:6787-6800. PMID: 38039168; DOI: 10.1109/tvcg.2023.3338451.
Abstract
This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.
4. Bearfield CX, Stokes C, Lovett A, Franconeri S. What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5097-5110. PMID: 37792647; DOI: 10.1109/tvcg.2023.3289292.
Abstract
Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. Interestingly, once viewers have grouped and compared a set of bars, regardless of whether the group was formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.
5. Burns A, Lee C, On T, Xiong C, Peck E, Mahyar N. From Invisible to Visible: Impacts of Metadata in Communicative Data Visualization. IEEE Transactions on Visualization and Computer Graphics 2024; 30:3427-3443. PMID: 37015379; DOI: 10.1109/tvcg.2022.3231716.
Abstract
Leaving the context of visualizations invisible can have negative impacts on understanding and transparency. While common wisdom suggests that recontextualizing visualizations with metadata (e.g., disclosing the data source or instructions for decoding the visualizations' encoding) may counter these effects, the impact remains largely unknown. To fill this gap, we conducted two experiments. In Experiment 1, we explored how chart type, topic, and user goal impacted which categories of metadata participants deemed most relevant. We presented 64 participants with four real-world visualizations. For each visualization, participants were given four goals and selected the type of metadata they most wanted from a set of 18 types. Our results indicated that participants were most interested in metadata which explained the visualization's encoding for goals related to understanding and metadata about the source of the data for assessing trustworthiness. In Experiment 2, we explored how these two types of metadata impact transparency, trustworthiness and persuasiveness, information relevance, and understanding. We asked 144 participants to explain the main message of two pairs of visualizations (one with metadata and one without); rate them on scales of transparency and relevance; and then predict the likelihood that they were selected for a presentation to policymakers. Our results suggested that visualizations with metadata were perceived as more thorough than those without metadata, but similarly relevant, accurate, clear, and complete. Additionally, we found that metadata did not impact the accuracy of the information extracted from visualizations, but may have influenced which information participants remembered as important or interesting.
6. Bearfield CX, van Weelden L, Waytz A, Franconeri S. Same Data, Diverging Perspectives: The Power of Visualizations to Elicit Competing Interpretations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2995-3007. PMID: 38619945; DOI: 10.1109/tvcg.2024.3388515.
Abstract
People routinely rely on data to make decisions, but the process can be riddled with biases. We show that patterns in data might be noticed first or more strongly, depending on how the data is visually represented or what the viewer finds salient. We also demonstrate that viewer interpretation of data is similar to that of 'ambiguous figures' such that two people looking at the same data can come to different decisions. In our studies, participants read visualizations depicting competitions between two entities, where one has a historical lead (A) but the other has been gaining momentum (B), and predicted a winner, across two chart types and three annotation approaches. They either saw the historical lead as salient and predicted that A would win, or saw the increasing momentum as salient and predicted that B would win. These results suggest that decisions can be influenced by both how data are presented and what patterns people find visually salient.
7. Lincoln-Boyea B, Moultrie RR, Biesecker BB, Underwood M, Duparc M, Wheeler AC, Peay HL. Misunderstood terms and concepts identified through user testing of educational materials for fragile X premutation: "Not weak or fragile?". J Genet Couns 2024; 33:341-351. PMID: 37232511; DOI: 10.1002/jgc4.1725.
Abstract
Complicated genetic mechanisms and unpredictable health risks associated with the FMR1 premutation can result in challenges for patient education when the diagnosis is made in a newborn. From October 15, 2018, to December 10, 2021, North Carolina parents could obtain FMR1 premutation results about their newborns through a voluntary expanded newborn screening research study. The study provided confirmatory testing, parental testing, and genetic counseling. We developed web-based educational materials to augment information about fragile X premutation conveyed by a genetic counselor. Many genetics education materials are developed for the lay population. However, relatively little research is published on how well individuals understand these materials. We conducted three rounds of iterative user testing interviews to help refine web-based educational materials that support understanding and self-paced learning. The participants included 25 parents with a 2-year college degree or less and without a child identified with fragile X syndrome, premutation, or gray-zone allele. Content analysis of interview transcripts resulted in iterative changes and ultimately saturation of findings. Across all rounds of interviews, there were two terms that were commonly misunderstood (fragile and carrier) and two terms that elicited initial misconceptions that were overcome by participants. Many also had difficulty understanding the relationship between fragile X premutation and fragile X syndrome as well as appreciating the implications of having a "fragile X gene." Website layout, formatting, and graphics also influenced comprehension. Despite iterative changes to the content, certain issues with understandability persisted. The findings support the need for user testing to identify misconceptions that may interfere with understanding and using genetic information. 
Here, we describe a process used to develop and refine evidence-based, understandable parental resources on fragile X premutation. Additionally, we provide recommendations to address ongoing educational challenges and discuss the potential impact of bias on the part of expert content developers.
Affiliation(s)
- Beth Lincoln-Boyea: Genomics, Bioinformatics, and Translational Research Center, RTI International, Research Triangle Park, North Carolina, USA
- Rebecca R Moultrie: Center for Communication Science, RTI International, Research Triangle Park, North Carolina, USA
- Barbara B Biesecker: Genomics, Bioinformatics, and Translational Research Center, RTI International, Research Triangle Park, North Carolina, USA
- Marcia Underwood: Center for Data Science, RTI International, Research Triangle Park, North Carolina, USA
- Martin Duparc: Genomics, Bioinformatics, and Translational Research Center, RTI International, Research Triangle Park, North Carolina, USA
- Anne C Wheeler: Genomics, Bioinformatics, and Translational Research Center, RTI International, Research Triangle Park, North Carolina, USA
- Holly L Peay: Genomics, Bioinformatics, and Translational Research Center, RTI International, Research Triangle Park, North Carolina, USA
8. Gardner SM, Angra A, Harsh JA. Supporting Student Competencies in Graph Reading, Interpretation, Construction, and Evaluation. CBE Life Sciences Education 2024; 23:fe1. PMID: 38100317; PMCID: PMC10956603; DOI: 10.1187/cbe.22-10-0207.
Abstract
Graphs are ubiquitous tools in science that allow one to explore data patterns, design studies, communicate findings, and make claims. This essay is a companion to an online, evidence-based interactive guide intended to help inform instructors' decision-making in how to teach graph reading, interpretation, construction, and evaluation within the discipline of biology. We provide a framework focused on six instructional practices that instructors can utilize when designing graphing activities: use data to engage students, teach graphing grounded in the discipline, practice explicit instruction, use real-world "messy" data, utilize collaborative work, and emphasize reflection. Each component of this guide is supported by summaries of and links to articles that can inform graphing practices. The guide also contains an instructor checklist that summarizes key points with actionable steps to guide instructors as they work towards refining and incorporating graphing into their classroom practice, along with emerging questions for which further empirical studies are warranted.
Affiliation(s)
- Aakanksha Angra: University of Minnesota Medical School, Minneapolis, MN 55455
- Joseph A. Harsh: Department of Biology, James Madison University, Harrisonburg, VA 22807
9. Winter B, Marghetis T. Multimodality matters in numerical communication. Front Psychol 2023; 14:1130777. PMID: 37564312; PMCID: PMC10411739; DOI: 10.3389/fpsyg.2023.1130777. Open access.
Abstract
Modern society depends on numerical information, which must be communicated accurately and effectively. Numerical communication is accomplished in different modalities (speech, writing, sign, gesture, graphs), and in naturally occurring settings it almost always involves more than one modality at once. Yet the modalities of numerical communication are often studied in isolation. Here we argue that, to understand and improve numerical communication, we must take this multimodality seriously. We first discuss each modality on its own terms, identifying their commonalities and differences. We then argue that numerical communication is shaped critically by interactions among modalities. We boil these interactions down to four types: one modality can amplify the message of another; it can direct attention to content from another modality (e.g., using a gesture to guide attention to a relevant aspect of a graph); it can explain another modality (e.g., verbally explaining the meaning of an axis in a graph); and it can reinterpret a modality (e.g., framing an upwards-oriented trend as a bad outcome). We conclude by discussing how a focus on multimodality raises entirely new research questions about numerical communication.
Affiliation(s)
- Bodo Winter: Department of English Language and Linguistics, University of Birmingham, Birmingham, United Kingdom
- Tyler Marghetis: Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
10. Yang L, Xiong C, Wong JK, Wu A, Qu H. Explaining With Examples: Lessons Learned From Crowdsourced Introductory Description of Information Visualizations. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1638-1650. PMID: 34780329; DOI: 10.1109/tvcg.2021.3128157.
Abstract
Data visualizations have been increasingly used in oral presentations to communicate data patterns to the general public. Clear verbal introductions of visualizations to explain how to interpret the visually encoded information are essential to convey the takeaways and avoid misunderstandings. We contribute a series of studies to investigate how to effectively introduce visualizations to the audience with varying degrees of visualization literacy. We begin with understanding how people are introducing visualizations. We crowdsource 110 introductions of visualizations and categorize them based on their content and structures. From these crowdsourced introductions, we identify different introduction strategies and generate a set of introductions for evaluation. We conduct experiments to systematically compare the effectiveness of different introduction strategies across four visualizations with 1,080 participants. We find that introductions explaining visual encodings with concrete examples are the most effective. Our study provides both qualitative and quantitative insights into how to construct effective verbal introductions of visualizations in presentations, inspiring further research in data storytelling.
11. Tandon S, Abdul-Rahman A, Borgo R. Measuring Effects of Spatial Visualization and Domain on Visualization Task Performance: A Comparative Study. IEEE Transactions on Visualization and Computer Graphics 2023; 29:668-678. PMID: 36166560; DOI: 10.1109/tvcg.2022.3209491.
Abstract
Understanding one's audience is foundational to creating high-impact visualization designs. However, individual differences and cognitive abilities influence interactions with information visualization. Different user needs and abilities suggest that an individual's background could influence cognitive performance and interactions with visuals in a systematic way. This study builds on current research in domain-specific visualization and cognition to ask whether domain and spatial visualization ability combine to affect performance on information visualization tasks. We measured spatial visualization ability and visual task performance among participants with tertiary education and professional profiles in business, law & political science, and math & computer science. We conducted an online study with 90 participants, using an established psychometric test to assess spatial visualization ability and bar chart layouts rotated along Cartesian and polar coordinates to assess performance on spatially rotated data. Accuracy and response times varied with domain across chart types and task difficulty. We found that accuracy and time correlate with spatial visualization level, and that education in math & computer science can indicate higher spatial visualization ability. Additionally, we found that motivational differences between domains could contribute to increased levels of accuracy. Our findings indicate that discipline affects not only user needs and interactions with data visualization but also cognitive traits. Our results can advance inclusive practices in visualization design and add to knowledge in domain-specific visual research that can empower designers across disciplines to create effective visualizations.
12. Xiong C, Stokes C, Kim YS, Franconeri S. Seeing What You Believe or Believing What You See? Belief Biases Correlation Estimation. IEEE Transactions on Visualization and Computer Graphics 2023; 29:493-503. PMID: 36166548; DOI: 10.1109/tvcg.2022.3209405.
Abstract
When an analyst or scientist has a belief about how the world works, their thinking can be biased in favor of that belief. Therefore, one bedrock principle of science is to minimize that bias by testing the predictions of one's belief against objective data. But interpreting visualized data is a complex perceptual and cognitive process. Through two crowdsourced experiments, we demonstrate that supposedly objective assessments of the strength of a correlational relationship can be influenced by how strongly a viewer believes in the existence of that relationship. Participants viewed scatterplots depicting a relationship between meaningful variable pairs (e.g., number of environmental regulations and air quality) and estimated their correlations. They also estimated the correlation of the same scatterplots labeled instead with generic 'X' and 'Y' axes. In a separate section, they also reported how strongly they believed there to be a correlation between the meaningful variable pairs. Participants estimated correlations more accurately when they viewed scatterplots labeled with generic axes compared to scatterplots labeled with meaningful variable pairs. Furthermore, when viewers believed that two variables should have a strong relationship, they overestimated correlations between those variables by an r-value of about 0.1. When they believed that the variables should be unrelated, they underestimated the correlations by an r-value of about 0.1. While data visualizations are typically thought to present objective truths to the viewer, these results suggest that existing personal beliefs can bias even objective statistical values people extract from data.
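The effect size summarized above (belief shifting estimated correlations by an r-value of about 0.1) can be made concrete with a short sketch. The code below is purely illustrative and not from the paper: it generates a synthetic sample with a moderate true correlation, computes the Pearson r, and shows what belief-shifted estimates of roughly r ± 0.1 would look like. The data-generating parameters and the fixed ±0.1 offsets are assumptions drawn only from the abstract's summary.

```python
# Illustrative sketch (assumed setup, not the paper's materials):
# compare a sample's Pearson correlation with belief-shifted estimates.
import math
import random


def pearson_r(xs, ys):
    """Compute the Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


random.seed(0)
# Synthetic scatterplot data with a moderate underlying relationship.
xs = [random.gauss(0, 1) for _ in range(200)]
ys = [0.5 * x + random.gauss(0, 1) for x in xs]

r = pearson_r(xs, ys)
# Per the abstract's summary: a viewer who believes the variables are
# related tends to report roughly r + 0.1, and a viewer who believes
# they are unrelated roughly r - 0.1.
print("objective r:", round(r, 2))
print("believer's estimate (approx.):", round(r + 0.1, 2))
print("skeptic's estimate (approx.):", round(r - 0.1, 2))
```

On a 0-to-1 scale of correlation strength, a 0.1 shift in either direction is a substantial distortion of an ostensibly objective reading of the same scatterplot.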
13. Wu A, Deng D, Cheng F, Wu Y, Liu S, Qu H. In Defence of Visual Analytics Systems: Replies to Critics. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1026-1036. PMID: 36179000; DOI: 10.1109/tvcg.2022.3209360.
Abstract
The last decade has witnessed many visual analytics (VA) systems that have been successfully applied to wide-ranging domains like urban analytics and explainable AI. However, their research rigor and contributions have been extensively challenged within the visualization community. We come in defence of VA systems by contributing two interview studies that gather criticisms and responses to those criticisms. First, we interviewed 24 researchers to collect criticisms from the review comments on their VA work. Through an iterative coding and refinement process, the interview feedback was summarized into a list of 36 common criticisms. Second, we interviewed 17 researchers to validate our list and collect their responses, thereby discussing implications for defending and improving the scientific values and rigor of VA systems. We highlight that the presented knowledge is deep and extensive, but also imperfect, provocative, and controversial, and thus recommend reading with an inclusive and critical eye. We hope our work can provide thoughts and foundations for conducting VA research and spark discussions that move the research field forward more rigorously and vibrantly.
14. El-Assady M, Moruzzi C. Which Biases and Reasoning Pitfalls Do Explanations Trigger? Decomposing Communication Processes in Human-AI Interaction. IEEE Computer Graphics and Applications 2022; 42:11-23. PMID: 36094981; DOI: 10.1109/mcg.2022.3200328.
Abstract
Collaborative human-AI problem-solving and decision-making rely on effective communication between both agents. Such communication processes comprise explanations and interactions between a sender and a receiver. Investigating these dynamics is crucial to avoiding miscommunication problems. Hence, in this article, we propose a communication dynamics model that examines the impact of the sender's explanation intention and strategy on the receiver's perception of explanation effects. We further present potential biases and reasoning pitfalls with the aim of contributing to the design of hybrid intelligence systems. Finally, we propose six desiderata for human-centered explainable AI and discuss future research opportunities.
15. Ajani K, Lee E, Xiong C, Knaflic CN, Kemper W, Franconeri S. Declutter and Focus: Empirically Evaluating Design Guidelines for Effective Data Communication. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3351-3364. PMID: 33760737; DOI: 10.1109/tvcg.2021.3068337.
Abstract
Data visualization design has a powerful effect on which patterns we see as salient and how quickly we see them. The visualization practitioner community prescribes two popular guidelines for creating clear and efficient visualizations: declutter and focus. The declutter guidelines suggest removing non-critical gridlines, excessive labeling of data values, and color variability to improve aesthetics and to maximize the emphasis on the data relative to the design itself. The focus guidelines for explanatory communication recommend including a clear headline that describes the relevant data pattern, highlighting a subset of relevant data values with a unique color, and connecting those values to written annotations that contextualize them in a broader argument. We evaluated how these recommendations impact recall of the depicted information across cluttered, decluttered, and decluttered+focused designs of six graph topics. Undergraduate students were asked to redraw previously seen visualizations, to recall their topics and main conclusions, and to rate the varied designs on aesthetics, clarity, professionalism, and trustworthiness. Decluttering designs led to higher ratings on professionalism, and adding focus to the design led to higher ratings on aesthetics and clarity. Focused designs also supported better memory for the highlighted pattern in the data, as reflected across redrawings of the original visualization and typed free-response conclusions, though we do not know whether these results would generalize beyond our memory-based tasks. The results largely empirically validate the intuitions of visualization designers and practitioners. The stimuli, data, analysis code, and Supplementary Materials are available at https://osf.io/wes9u/.
16. Procopio M, Mosca A, Scheidegger C, Wu E, Chang R. Impact of Cognitive Biases on Progressive Visualization. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3093-3112. PMID: 33434132; DOI: 10.1109/tvcg.2021.3051013.
Abstract
Progressive visualization is fast becoming an established technique in the visualization community for helping users interact with large amounts of data. With progressive visualization, users can examine intermediate results of complex or long-running computations without waiting for the computation to complete. While this has been shown to be beneficial to users, recent research has identified potential risks. For example, users may misjudge the uncertainty in the intermediate results and draw incorrect conclusions or see patterns that are not present in the final results. In this article, we conduct a comprehensive set of studies to quantify the advantages and limitations of progressive visualization. Based on a recent report by Micallef et al., we examine four types of cognitive biases that can occur with progressive visualization: uncertainty bias, illusion bias, control bias, and anchoring bias. The results of the studies suggest a cautious but promising use of progressive visualization: while there can be significant savings in task completion time, accuracy can be negatively affected in certain conditions. These findings confirm earlier reports of the benefits and drawbacks of progressive visualization and show that continued research into mitigating the effects of cognitive biases is necessary.
17. Niso G, Krol LR, Combrisson E, Dubarry AS, Elliott MA, François C, Héjja-Brichard Y, Herbst SK, Jerbi K, Kovic V, Lehongre K, Luck SJ, Mercier M, Mosher JC, Pavlov YG, Puce A, Schettino A, Schön D, Sinnott-Armstrong W, Somon B, Šoškić A, Styles SJ, Tibon R, Vilas MG, van Vliet M, Chaumon M. Good scientific practice in EEG and MEG research: Progress and perspectives. Neuroimage 2022; 257:119056. PMID: 35283287; PMCID: PMC11236277; DOI: 10.1016/j.neuroimage.2022.119056. Open access.
Abstract
Good scientific practice (GSP) refers to both explicit and implicit rules, recommendations, and guidelines that help scientists to produce work that is of the highest quality at any given time, and to efficiently share that work with the community for further scrutiny or utilization. For experimental research using magneto- and electroencephalography (MEEG), GSP includes specific standards and guidelines for technical competence, which are periodically updated and adapted to new findings. However, GSP also needs to be regularly revisited in a broader light. At the LiveMEEG 2020 conference, a reflection on GSP was fostered that included explicitly documented guidelines and technical advances, but also emphasized intangible GSP: a general awareness of personal, organizational, and societal realities and how they can influence MEEG research. This article provides an extensive report on most of the LiveMEEG contributions and new literature, with the additional aim to synthesize ongoing cultural changes in GSP. It first covers GSP with respect to cognitive biases and logical fallacies, pre-registration as a tool to avoid those and other early pitfalls, and a number of resources to enable collaborative and reproducible research as a general approach to minimize misconceptions. Second, it covers GSP with respect to data acquisition, analysis, reporting, and sharing, including new tools and frameworks to support collaborative work. Finally, GSP is considered in light of ethical implications of MEEG research and the resulting responsibility that scientists have to engage with societal challenges. Considering among other things the benefits of peer review and open access at all stages, the need to coordinate larger international projects, the complexity of MEEG subject matter, and today's prioritization of fairness, privacy, and the environment, we find that current GSP tends to favor collective and cooperative work, for both scientific and for societal reasons.
Affiliation(s)
- Guiomar Niso
- Psychological & Brain Sciences, Indiana University, Bloomington, IN, USA; Universidad Politecnica de Madrid and CIBER-BBN, Madrid, Spain
- Laurens R Krol
- Neuroadaptive Human-Computer Interaction, Brandenburg University of Technology Cottbus-Senftenberg, Germany
- Etienne Combrisson
- Aix-Marseille University, Institut de Neurosciences de la Timone, France
- Yseult Héjja-Brichard
- Centre d'Ecologie Fonctionnelle et Evolutive, CNRS, EPHE, IRD, Université Montpellier, Montpellier, France
- Sophie K Herbst
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, NeuroSpin center, Université Paris-Saclay, Gif/Yvette, France
- Karim Jerbi
- Cognitive and Computational Neuroscience Laboratory, Department of Psychology, University of Montreal, Montreal, QC, Canada; Mila - Quebec Artificial Intelligence Institute, Canada
- Vanja Kovic
- Faculty of Philosophy, Laboratory for neurocognition and applied cognition, University of Belgrade, Serbia
- Katia Lehongre
- Institut du Cerveau - Paris Brain Institute - ICM, Inserm U 1127, CNRS UMR 7225, APHP, Hôpital de la Pitié Salpêtrière, Sorbonne Université, Centre MEG-EEG, Centre de NeuroImagerie Recherche (CENIR), Paris, France
- Steven J Luck
- Center for Mind & Brain, University of California, Davis, CA, USA
- Manuel Mercier
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- John C Mosher
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
- Yuri G Pavlov
- University of Tuebingen, Germany; Ural Federal University, Yekaterinburg, Russia
- Aina Puce
- Psychological & Brain Sciences, Indiana University, Bloomington, IN, USA
- Antonio Schettino
- Erasmus University Rotterdam, Rotterdam, the Netherlands; Institute for Globally Distributed Open Research and Education (IGDORE), Sweden
- Daniele Schön
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Anđela Šoškić
- Faculty of Philosophy, Laboratory for neurocognition and applied cognition, University of Belgrade, Serbia; Teacher Education Faculty, University of Belgrade, Serbia
- Suzy J Styles
- Psychology, Nanyang Technological University, Singapore; Singapore Institute for Clinical Sciences, A*STAR, Singapore
- Roni Tibon
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK; School of Psychology, University of Nottingham, Nottingham, UK
- Martina G Vilas
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Maximilien Chaumon
- Institut du Cerveau - Paris Brain Institute - ICM, Inserm U 1127, CNRS UMR 7225, APHP, Hôpital de la Pitié Salpêtrière, Sorbonne Université, Centre MEG-EEG, Centre de NeuroImagerie Recherche (CENIR), Paris, France.
|
18
|
Henkin R, Turkay C. Words of Estimative Correlation: Studying Verbalizations of Scatterplots. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:1967-1981. [PMID: 32915742 DOI: 10.1109/tvcg.2020.3023537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Natural language and visualization are being increasingly deployed together for supporting data analysis in different ways, from multimodal interaction to enriched data summaries and insights. Yet, researchers still lack systematic knowledge on how viewers verbalize their interpretations of visualizations, and how they interpret verbalizations of visualizations in such contexts. We describe two studies aimed at identifying characteristics of data and charts that are relevant in such tasks. The first study asks participants to verbalize what they see in scatterplots that depict various levels of correlations. The second study then asks participants to choose visualizations that match a given verbal description of correlation. We extract key concepts from responses, organize them in a taxonomy and analyze the categorized responses. We observe that participants use a wide range of vocabulary across all scatterplots, but particular concepts are preferred for higher levels of correlation. A comparison between the studies reveals the ambiguity of some of the concepts. We discuss how the results could inform the design of multimodal representations aligned with the data and analytical tasks, and present a research roadmap to deepen the understanding about visualizations and natural language.
|
19
|
Lundgard A, Satyanarayan A. Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:1073-1083. [PMID: 34591762 DOI: 10.1109/tvcg.2021.3114770] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Natural language descriptions sometimes accompany visualizations to better communicate and contextualize their insights, and to improve their accessibility for readers with disabilities. However, it is difficult to evaluate the usefulness of these descriptions, and how effectively they improve access to meaningful information, because we have little understanding of the semantic content they convey, and how different readers receive this content. In response, we introduce a conceptual model for the semantic content conveyed by natural language descriptions of visualizations. Developed through a grounded theory analysis of 2,147 sentences, our model spans four levels of semantic content: enumerating visualization construction properties (e.g., marks and encodings); reporting statistical concepts and relations (e.g., extrema and correlations); identifying perceptual and cognitive phenomena (e.g., complex trends and patterns); and elucidating domain-specific insights (e.g., social and political context). To demonstrate how our model can be applied to evaluate the effectiveness of visualization descriptions, we conduct a mixed-methods evaluation with 30 blind and 90 sighted readers, and find that these reader groups differ significantly on which semantic content they rank as most useful. Together, our model and findings suggest that access to meaningful information is strongly reader-specific, and that research in automatic visualization captioning should orient toward descriptions that more richly communicate overall trends and statistics, sensitive to reader preferences. Our work further opens a space of research on natural language as a data interface coequal with visualization.
|
20
|
Hall KW, Kouroupis A, Bezerianos A, Szafir DA, Collins C. Professional Differences: A Comparative Study of Visualization Task Performance and Spatial Ability Across Disciplines. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:654-664. [PMID: 34648448 DOI: 10.1109/tvcg.2021.3114805] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Problem-driven visualization work is rooted in deeply understanding the data, actors, processes, and workflows of a target domain. However, an individual's personality traits and cognitive abilities may also influence visualization use. Diverse user needs and abilities raise natural questions for specificity in visualization design: Could individuals from different domains exhibit performance differences when using visualizations? Are any systematic variations related to their cognitive abilities? This study bridges domain-specific perspectives on visualization design with those provided by cognition and perception. We measure variations in visualization task performance across chemistry, computer science, and education, and relate these differences to variations in spatial ability. We conducted an online study with over 60 domain experts, consisting of tasks related to pie charts, isocontour plots, and 3D scatterplots, grounded by a well-documented spatial ability test. Task performance (correctness) varied with profession across more complex visualizations (isocontour plots and scatterplots), but not pie charts, a comparatively common visualization. We found that correctness correlates with spatial ability, and that the professions differ in terms of spatial ability. These results indicate that domains differ not only in the specifics of their data and tasks, but also in terms of how effectively their constituent members engage with visualizations and in their cognitive traits. Analyzing participants' confidence and strategy comments suggests that focusing on performance alone neglects important nuances, such as differing approaches to engaging with even common visualizations and potential skill transference. Our findings offer a fresh perspective on discipline-specific visualization, with specific recommendations to help guide visualization design that celebrates the uniqueness of the disciplines and individuals we seek to serve.
|
21
|
McColeman CM, Yang F, Brady TF, Franconeri S. Rethinking the Ranks of Visual Channels. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:707-717. [PMID: 34606455 DOI: 10.1109/tvcg.2021.3114684] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Data can be visually represented using visual channels like position, length or luminance. An existing ranking of these visual channels is based on how accurately participants could report the ratio between two depicted values. There is an assumption that this ranking should hold for different tasks and for different numbers of marks. However, there is surprisingly little existing work that tests this assumption, especially given that visually computing ratios is relatively unimportant in real-world visualizations, compared to seeing, remembering, and comparing trends and motifs, across displays that almost universally depict more than two values. To simulate the information extracted from a glance at a visualization, we instead asked participants to immediately reproduce a set of values from memory after they were shown the visualization. These values could be shown in a bar graph (position (bar)), line graph (position (line)), heat map (luminance), bubble chart (area), misaligned bar graph (length), or 'wind map' (angle). With a Bayesian multilevel modeling approach, we show how the rank positions of visual channels shift across different numbers of marks (2, 4 or 8) and for bias, precision, and error measures. The ranking did not hold, even for reproductions of only 2 marks, and the new probabilistic ranking was highly inconsistent for reproductions of different numbers of marks. Other factors besides channel choice had an order of magnitude more influence on performance, such as the number of values in the series (e.g., more marks led to larger errors), or the value of each mark (e.g., small values were systematically overestimated). Every visual channel was worse for displays with 8 marks than 4, consistent with established limits on visual memory. 
These results point to the need for a body of empirical studies that move beyond two-value ratio judgments as a baseline for reliably ranking the quality of a visual channel, including testing new tasks (detection of trends or motifs), timescales (immediate computation, or later comparison), and the number of values (from a handful, to thousands).
|
22
|
Franconeri SL, Padilla LM, Shah P, Zacks JM, Hullman J. The Science of Visual Data Communication: What Works. Psychol Sci Public Interest 2021; 22:110-161. [PMID: 34907835 DOI: 10.1177/15291006211051956] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Effectively designed data visualizations allow viewers to use their powerful visual systems to understand patterns in data across science, education, health, and public policy. But ineffectively designed visualizations can cause confusion, misunderstanding, or even distrust, especially among viewers with low graphical literacy. We review research-backed guidelines for creating effective and intuitive visualizations oriented toward communicating data to students, coworkers, and the general public. We describe how the visual system can quickly extract broad statistics from a display, whereas poorly designed displays can lead to misperceptions and illusions. Extracting global statistics is fast, but comparing between subsets of values is slow. Effective graphics avoid taxing working memory, guide attention, and respect familiar conventions. Data visualizations can play a critical role in teaching and communication, provided that designers tailor those visualizations to their audience.
Affiliation(s)
- Lace M Padilla
- Department of Cognitive and Information Sciences, University of California, Merced
- Priti Shah
- Department of Psychology, University of Michigan
- Jeffrey M Zacks
- Department of Psychological & Brain Sciences, Washington University in St. Louis
|
23
|
Schloss KB, Leggon Z, Lessard L. Semantic Discriminability for Visual Communication. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1022-1031. [PMID: 33104512 DOI: 10.1109/tvcg.2020.3030434] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
To interpret information visualizations, observers must determine how visual features map onto concepts. First and foremost, this ability depends on perceptual discriminability; observers must be able to see the difference between different colors for those colors to communicate different meanings. However, the ability to interpret visualizations also depends on semantic discriminability, the degree to which observers can infer a unique mapping between visual features and concepts, based on the visual features and concepts alone (i.e., without help from verbal cues such as legends or labels). Previous evidence suggested that observers were better at interpreting encoding systems that maximized semantic discriminability (maximizing association strength between assigned colors and concepts while minimizing association strength between unassigned colors and concepts), compared to a system that only maximized color-concept association strength. However, increasing semantic discriminability also resulted in increased perceptual distance, so it is unclear which factor was responsible for improved performance. In the present study, we conducted two experiments that tested for independent effects of semantic distance and perceptual distance on semantic discriminability of bar graph data visualizations. Perceptual distance was large enough to ensure colors were more than just noticeably different. We found that increasing semantic distance improved performance, independent of variation in perceptual distance, and when these two factors were uncorrelated, responses were dominated by semantic distance. These results have implications for navigating trade-offs in color palette design optimization for visual communication.
|
24
|
A review of uncertainty visualization errors: Working memory as an explanatory theory. PSYCHOLOGY OF LEARNING AND MOTIVATION 2021. [DOI: 10.1016/bs.plm.2021.03.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
25
|
Liao MR, Anderson BA. Reward learning biases the direction of saccades. Cognition 2019; 196:104145. [PMID: 31770659 DOI: 10.1016/j.cognition.2019.104145] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 10/14/2019] [Accepted: 11/16/2019] [Indexed: 01/22/2023]
Abstract
The role of associative reward learning in guiding feature-based attention and spatial attention is well established. However, no studies have looked at the extent to which reward learning can modulate the direction of saccades during visual search. Here, we introduced a novel reward learning paradigm to examine whether reward-associated directions of eye movements can modulate performance in different visual search tasks. Participants had to fixate a peripheral target before fixating one of four disks that subsequently appeared in each cardinal position. This was followed by reward feedback contingent upon the direction chosen, where one direction consistently yielded a high reward. Thus, reward was tied to the direction of saccades rather than the absolute location of the stimulus fixated. Participants selected the target in the high-value direction on the majority of trials, demonstrating robust learning of the task contingencies. In an untimed visual foraging task that followed, which was performed in extinction, initial saccades were reliably biased in the previously reward-associated direction. In a second experiment, following the same training procedure, eye movements in the previously high-value direction were facilitated in a saccade-to-target task. Our findings suggest that rewarding directional eye movements biases oculomotor search patterns in a manner that is robust to extinction and generalizes across stimuli and tasks.
|