1. Wu M, Sun Y, Jiang S. Adaptive Color Transfer From Images to Terrain Visualizations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5538-5552. [PMID: 37440387] [DOI: 10.1109/tvcg.2023.3295122]
Abstract
Terrain mapping is not only dedicated to communicating how high or steep a landscape is but can also help to indicate how we feel about a place. However, crafting effective and expressive elevation colors is challenging for both nonexperts and experts. In this article, we present a two-step image-to-terrain color transfer method that can transfer color from arbitrary images to diverse terrain models. First, we present a new image color organization method that organizes discrete, irregular image colors into a continuous, regular color grid, facilitating a series of color operations such as local and global searching, categorical color selection, and sequential color interpolation. Second, we quantify a series of cartographic concerns about elevation color crafting, such as the "lower, higher" principle, color conventions, and aerial perspectives. We also define color similarity between images and terrain visualizations with aesthetic quality. We then mathematically formulate image-to-terrain color transfer as a dual-objective optimization problem and offer a heuristic search method to solve it. Finally, we compare elevation colors from our method with a standard color scheme and a representative color scale generation tool on four test terrains. The evaluations show that the elevation colors from the proposed method are the most effective and that our results are visually favorable. We also show that our method can transfer emotion from images to terrain visualizations.
2. Gaba A, Kaufman Z, Cheung J, Shvakel M, Hall KW, Brun Y, Bearfield CX. My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning. IEEE Transactions on Visualization and Computer Graphics 2024; 30:327-337. [PMID: 37878441] [DOI: 10.1109/tvcg.2023.3327192]
Abstract
Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer "Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?" Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.
3. Soto A, Schoenlein MA, Schloss KB. More of what? Dissociating effects of conceptual and numeric mappings on interpreting colormap data visualizations. Cogn Res Princ Implic 2023; 8:38. [PMID: 37337019] [PMCID: PMC10279625] [DOI: 10.1186/s41235-023-00482-1]
Abstract
In visual communication, people glean insights about patterns of data by observing visual representations of datasets. Colormap data visualizations ("colormaps") show patterns in datasets by mapping variations in color to variations in magnitude. When people interpret colormaps, they have expectations about how colors map to magnitude, and they are better at interpreting visualizations that align with those expectations. For example, they infer that darker colors map to larger quantities (dark-is-more bias) and that colors higher on vertically oriented legends map to larger quantities (high-is-more bias). In previous studies, the notion of quantity was straightforward because more of the concept represented (conceptual magnitude) corresponded to larger numeric values (numeric magnitude). However, conceptual and numeric magnitude can conflict, such as when rank order is used to quantify health: smaller numbers correspond to greater health. When the two conflict, are inferred mappings formed at the numeric level, the conceptual level, or a combination of both? We addressed this question across five experiments spanning three data domains: alien animals, antibiotic discovery, and public health. Across experiments, the high-is-more bias operated at the conceptual level: colormaps were easier to interpret when larger conceptual magnitude was represented higher on the legend, regardless of numeric magnitude. The dark-is-more bias tended to operate at the conceptual level, but numeric magnitude could interfere, or even dominate, if conceptual magnitude was less salient. These results elucidate factors influencing the meanings inferred from visual features and emphasize the need to consider data meaning, not just numbers, when designing visualizations aimed at facilitating visual communication.
Affiliation(s)
- Alexis Soto
- Department of Integrative Biology, University of Wisconsin-Madison, 430 Lincoln Drive, Madison, WI, 53706, USA
- Wisconsin Institute for Discovery, University of Wisconsin-Madison, 330 N. Orchard Street, Madison, WI, 53715, USA
- Melissa A Schoenlein
- Department of Psychology, University of Wisconsin-Madison, 1202 W. Johnson Street, Madison, WI, 53706, USA
- Wisconsin Institute for Discovery, University of Wisconsin-Madison, 330 N. Orchard Street, Madison, WI, 53715, USA
- Karen B Schloss
- Department of Psychology, University of Wisconsin-Madison, 1202 W. Johnson Street, Madison, WI, 53706, USA
- Wisconsin Institute for Discovery, University of Wisconsin-Madison, 330 N. Orchard Street, Madison, WI, 53715, USA
4. Schoenlein MA, Campos J, Lande KJ, Lessard L, Schloss KB. Unifying Effects of Direct and Relational Associations for Visual Communication. IEEE Transactions on Visualization and Computer Graphics 2023; 29:385-395. [PMID: 36173771] [DOI: 10.1109/tvcg.2022.3209443]
Abstract
People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished distinct factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows) whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the "merit," or "goodness," of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations, representing fictitious data about environmental concepts (sunshine, shade, wild fire, ocean water, glacial ice). We found both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.
5. Hu R, Ye Z, Chen B, van Kaick O, Huang H. Self-Supervised Color-Concept Association via Image Colorization. IEEE Transactions on Visualization and Computer Graphics 2023; 29:247-256. [PMID: 36166543] [DOI: 10.1109/tvcg.2022.3209481]
Abstract
The interpretation of colors in visualizations is facilitated when the assignments between colors and concepts in the visualizations match humans' expectations, implying that the colors can be interpreted in a semantic manner. However, manually creating a dataset of suitable associations between colors and concepts for use in visualizations is costly, as such associations would have to be collected from humans for a large variety of concepts. To address the challenge of collecting this data, we introduce a method to extract color-concept associations automatically from a set of concept images. While the state-of-the-art method extracts associations from data with supervised learning, we developed a self-supervised method based on colorization that does not require the preparation of ground-truth color-concept associations. Our key insight is that a set of images of a concept should be sufficient for learning color-concept associations, since humans also learn to associate colors with concepts mainly from past visual input. Thus, we propose to use an automatic colorization method to extract statistical models of the color-concept associations that appear in concept images. Specifically, we take a colorization model pre-trained on ImageNet and fine-tune it on the set of images associated with a given concept, to predict pixel-wise probability distributions in Lab color space for the images. Then, we convert the predicted probability distributions into color ratings for a given color library and aggregate them over all the images of a concept to obtain the final color-concept associations. We evaluate our method using four different evaluation metrics and via a user study. Experiments show that, although the state-of-the-art method based on supervised learning with user-provided ratings is more effective at capturing relative associations, our self-supervised method obtains overall better results according to metrics like Earth Mover's Distance (EMD) and Entropy Difference (ED), which are closer to human perception of color distributions.
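As a rough illustration of the two metrics named in this abstract, the sketch below computes a one-dimensional Earth Mover's Distance and an entropy difference between two color-association histograms. The histograms, the 5-color library, and the equal-bin-spacing assumption are all invented for illustration; the paper's actual color library and metric implementations may differ.

```python
import math

def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms over the same,
    equally spaced bins: the L1 distance between their running sums (CDFs)."""
    cp = cq = total = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        total += abs(cp - cq)
    return total

def entropy(p):
    """Shannon entropy (in bits) of a normalized histogram."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical color-association ratings over a 5-color library,
# each normalized to sum to 1 (invented numbers, not data from the paper).
human = [0.05, 0.10, 0.40, 0.30, 0.15]   # stand-in for human ratings
model = [0.10, 0.10, 0.35, 0.30, 0.15]   # stand-in for model predictions

emd = emd_1d(human, model)
ed = abs(entropy(human) - entropy(model))
print(f"EMD = {emd:.3f}, entropy difference = {ed:.3f}")
```

Both metrics compare whole distributions rather than only the rank order of colors, which is why the abstract contrasts them with relative-association measures.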
6. Murthy SK, Griffiths TL, Hawkins RD. Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task. Cognition 2022; 225:105152. [DOI: 10.1016/j.cognition.2022.105152]
7. Schoenlein MA, Schloss KB. Colour-concept association formation for novel concepts. Visual Cognition 2022. [DOI: 10.1080/13506285.2022.2089418]
Affiliation(s)
- Melissa A. Schoenlein
- Department of Psychology, University of Wisconsin—Madison, Madison, WI, USA
- Wisconsin Institute for Discovery, University of Wisconsin—Madison, Madison, WI, USA
- Karen B. Schloss
- Department of Psychology, University of Wisconsin—Madison, Madison, WI, USA
- Wisconsin Institute for Discovery, University of Wisconsin—Madison, Madison, WI, USA
8. Spence C, Van Doorn G. Visual communication via the design of food and beverage packaging. Cogn Res Princ Implic 2022; 7:42. [PMID: 35551542] [PMCID: PMC9098755] [DOI: 10.1186/s41235-022-00391-9]
Abstract
A rapidly growing body of empirical research has recently started to emerge, highlighting the connotative and/or semiotic meanings that consumers typically associate with specific abstract visual design features, such as colours (either when presented individually or in combination), simple shapes/curvilinearity, and the orientation and relative position of those design elements on product packaging. While certain of our affective responses to such basic visual design features appear almost innate, the majority are likely established via the internalization of the statistical regularities of the food and beverage marketplace (i.e. as a result of associative learning), as in the case of round typefaces and sweet-tasting products. Researchers continue to document the wide range of crossmodal correspondences that underpin the links between individual visual packaging design features and specific properties of food and drink products (such as their taste, flavour, or healthfulness), and the ways in which marketers are now capitalizing on such understanding to increase sales. This narrative review highlights the further research that is still needed to establish the connotative or symbolic/semiotic meaning(s) of particular combinations of design features (such as coloured stripes in a specific orientation), as opposed to individual cues, in national food markets and also, increasingly, cross-culturally in the case of international brands.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Oxford University, Oxford, OX2 6GG, UK.
- George Van Doorn
- School of Science, Psychology and Sport, Churchill Campus, Federation University Australia, Churchill, VIC, 3842, Australia
- Health Innovation and Transformation Centre, Mt Helen Campus, Federation University Australia, Ballarat, VIC, 3350, Australia
- Successful Health for At-Risk Populations (SHARP) Research Group, Mt Helen Campus, Federation University Australia, Ballarat, VIC, 3350, Australia
9. Shen Z, Pritchard MJ. Cognitive engagement on social media: A study of the effects of visual cueing in educational videos. J Assoc Inf Sci Technol 2022. [DOI: 10.1002/asi.24630]
Affiliation(s)
- Zixing Shen
- College of Business, New Mexico State University, Las Cruces, New Mexico, USA
10. Mukherjee K, Yin B, Sherman BE, Lessard L, Schloss KB. Context Matters: A Theory of Semantic Discriminability for Perceptual Encoding Systems. IEEE Transactions on Visualization and Computer Graphics 2022; 28:697-706. [PMID: 34587028] [DOI: 10.1109/tvcg.2021.3114780]
Abstract
People's associations between colors and concepts influence their ability to interpret the meanings of colors in information visualizations. Previous work has suggested such effects are limited to concepts that have strong, specific associations with colors. However, although a concept may not be strongly associated with any colors, its mapping can be disambiguated in the context of other concepts in an encoding system. We articulate this view in semantic discriminability theory, a general framework for understanding conditions determining when people can infer meaning from perceptual features. Semantic discriminability is the degree to which observers can infer a unique mapping between visual features and concepts. Semantic discriminability theory posits that the capacity for semantic discriminability for a set of concepts is constrained by the difference between the feature-concept association distributions across the concepts in the set. We define formal properties of this theory and test its implications in two experiments. The results show that the capacity to produce semantically discriminable colors for sets of concepts was indeed constrained by the statistical distance between color-concept association distributions (Experiment 1). Moreover, people could interpret meanings of colors in bar graphs insofar as the colors were semantically discriminable, even for concepts previously considered "non-colorable" (Experiment 2). The results suggest that colors are more robust for visual communication than previously thought.
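As a toy illustration of the core idea in this abstract (that discriminability is constrained by the statistical distance between color-concept association distributions), the sketch below computes a total variation distance between two association distributions and brute-forces the one-to-one color-to-concept assignment with maximal summed association strength. The concept names, the ratings, and the choice of total variation as the distance are all illustrative assumptions, not the paper's actual data or model.

```python
from itertools import permutations

# Invented association distributions: one row per concept, one entry per
# color in a 4-color library; each row is normalized to sum to 1.
assoc = {
    "sunshine": [0.55, 0.25, 0.15, 0.05],
    "shade":    [0.05, 0.15, 0.30, 0.50],
}

def tv_distance(p, q):
    """Total variation distance: one simple measure of the statistical
    distance between two color-concept association distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def best_assignment(assoc):
    """Brute-force the one-to-one color-to-concept assignment that
    maximizes total association strength (a stand-in for 'merit')."""
    concepts = list(assoc)
    n_colors = len(next(iter(assoc.values())))
    best, best_score = None, float("-inf")
    for perm in permutations(range(n_colors), len(concepts)):
        score = sum(assoc[c][color] for c, color in zip(concepts, perm))
        if score > best_score:
            best, best_score = dict(zip(concepts, perm)), score
    return best, best_score

dist = tv_distance(assoc["sunshine"], assoc["shade"])
assignment, merit = best_assignment(assoc)
print(f"TV distance = {dist:.2f}, assignment = {assignment}, merit = {merit:.2f}")
```

When the two distributions are far apart (large distance), the optimal assignment is unambiguous, as here; as the distance shrinks toward zero, competing assignments score nearly the same merit and semantic discriminability collapses.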