1. Irwin D, Mandel DR. Communicating uncertainty in national security intelligence: Expert and nonexpert interpretations of and preferences for verbal and numeric formats. Risk Analysis 2023;43:943-957. PMID: 35994518. DOI: 10.1111/risa.14009.
Abstract
Organizations in several domains including national security intelligence communicate judgments under uncertainty using verbal probabilities (e.g., likely) instead of numeric probabilities (e.g., 75% chance), despite research indicating that the former have variable meanings across individuals. In the intelligence domain, uncertainty is also communicated using terms such as low, moderate, or high to describe the analyst's confidence level. However, little research has examined how intelligence professionals interpret these terms and whether they prefer them to numeric uncertainty quantifiers. In two experiments (N = 481 and 624, respectively), uncertainty communication preferences of expert (n = 41 intelligence analysts in Experiment 1) and nonexpert intelligence consumers were elicited. We examined which format participants judged to be more informative and simpler to process. We further tested whether participants treated verbal probability and confidence terms as independent constructs and whether participants provided coherent numeric probability translations of verbal probabilities. Results showed that although most nonexperts favored the numeric format, experts were about equally split, and most participants in both samples regarded the numeric format as more informative. Experts and nonexperts consistently conflated probability and confidence. For instance, confidence intervals inferred from verbal confidence terms had a greater effect on the location of the estimate than the width of the estimate, contrary to normative expectation. Approximately one-fourth of experts and over one-half of nonexperts provided incoherent numeric probability translations for the terms likely and unlikely when the elicitation of best estimates and lower and upper bounds were briefly spaced by intervening tasks.
Affiliation(s)
- David R Mandel
- Intelligence, Influence and Collaboration Section, Defence Research and Development Canada, Toronto, ON, Canada
2. Mandel DR, Irwin D, Dhami MK, Budescu DV. Meta-informational cue inconsistency and judgment of information accuracy: Spotlight on intelligence analysis. Journal of Behavioral Decision Making 2022. DOI: 10.1002/bdm.2307.
Affiliation(s)
- David R. Mandel
- Intelligence, Influence, and Collaboration Section, Defence Research and Development Canada, Toronto, Ontario, Canada
- David V. Budescu
- Department of Psychology, Fordham University, New York City, New York, USA
3. Cerón-Guzmán JA, Tetteroo D, Hu J, Markopoulos P. “Not Sure Sharing Does Anything Extra for Me”: Understanding How People with Cardiovascular Disease Conceptualize Sharing Personal Health Data with Peers. International Journal of Environmental Research and Public Health 2022;19:9508. PMID: 35954863. PMCID: PMC9368547. DOI: 10.3390/ijerph19159508.
Abstract
As people deal with cardiovascular disease (CVD), they are to self-monitor routinely and be aware of complications and the corresponding course of action. Engaging in these self-care behaviors is conducive to gaining knowledge of health status. Even so, knowledge of the self may be insufficient in making sense of chronic conditions. In constructing a new normal after health-related life disruptions, people often turn to peers (others facing similar health issues) and share personal health information with each other. Although health information-sharing behavior is well-documented, it remains underexplored what attitudes individuals with chronic conditions, such as CVD, have toward disclosing personal health data to peers and exploring those of others with similar conditions. We surveyed 39 people who reported being diagnosed with CVD to understand how they conceptualize sharing personal health data with their peers. By analyzing qualitative survey data thematically, we found that respondents expressed themselves as uncertain about the benefits of interacting with peers in such a manner. At the same time, they recognized an opportunity to learn new ideas to enhance CVD self-care in mutual data sharing. We also report participants’ analytical orientation toward this sort of data sharing herein and elaborate on what sharing a range of personal health data could mean. In light of the existing literature, this study unpacks the notion of sharing in a different population/pathology and with more nuance, particularly by distinguishing between disclosing one’s data and exploring others’.
4. Feliciani T, Morreau M, Luo J, Lucas P, Shankar K. Designing grant-review panels for better funding decisions: Lessons from an empirically calibrated simulation model. Research Policy 2022. DOI: 10.1016/j.respol.2021.104467.
5. Dhami MK, Mandel DR. Communicating uncertainty using words and numbers. Trends in Cognitive Sciences 2022;26:514-526. DOI: 10.1016/j.tics.2022.03.002.
6. Mandel DR, Irwin D. On measuring agreement with numerically bounded linguistic probability schemes: A re-analysis of data from Wintle, Fraser, Wills, Nicholson, and Fidler (2019). PLoS One 2021;16:e0248424. PMID: 33735197. PMCID: PMC7971511. DOI: 10.1371/journal.pone.0248424.
Abstract
Across a wide range of domains, experts make probabilistic judgments under conditions of uncertainty to support decision-making. These judgments are often conveyed using linguistic expressions (e.g., x is likely). Seeking to foster shared understanding of these expressions between senders and receivers, the US intelligence community implemented a communication standard that prescribes a set of probability terms and assigns each term an equivalent numerical probability range. In an earlier PLOS ONE article, Wintle et al. (2019) tested whether access to the standard improves shared understanding and also explored the efficacy of various enhanced presentation formats. Notably, they found that embedding numeric equivalents in text (e.g., x is likely [55–80%]) substantially outperformed the status-quo approach in terms of the percentage overlap between participants’ interpretations of linguistic probabilities (defined in terms of the numeric range equivalents they provided for each term) and the numeric ranges in the standard. These results have important prescriptive implications, yet Wintle et al.’s percentage overlap measure of agreement may be viewed as unfairly punitive because it penalizes individuals for being more precise than the stipulated guidelines even when the individuals’ interpretations fall perfectly within the stipulated ranges. Arguably, subjects’ within-range precision is a positive attribute and should not be penalized in scoring interpretive agreement. Accordingly, in the present article, we reanalyzed Wintle et al.’s data using an alternative measure of percentage overlap that does not penalize in-range precision. Using the alternative measure, we find that percentage overlap is substantially elevated across conditions. More importantly, however, the effects of presentation format and probability level are highly consistent with the original study.
By removing the ambiguity caused by Wintle et al.’s unduly punitive measure of agreement, these findings buttress Wintle et al.’s original claim that the methods currently used by intelligence organizations are ineffective at coordinating the meaning of uncertainty expressions between intelligence producers and intelligence consumers. Future studies examining agreement between senders and receivers are also encouraged to reflect carefully on the most appropriate measures of agreement to employ in their experiments and to explicate the bases for their methodological choices.
Affiliation(s)
- David R. Mandel
- Intelligence, Influence and Collaboration Section, Toronto Research Centre, Defence Research and Development Canada, Toronto, Ontario, Canada
- Daniel Irwin
- Department of National Defence, Toronto, Ontario, Canada
7. Facilitating sender-receiver agreement in communicated probabilities: Is it best to use words, numbers or both? Judgment and Decision Making 2021. DOI: 10.1017/s1930297500008603.
Abstract
Organizations tasked with communicating expert judgments couched in uncertainty often use numerically bounded linguistic probability schemes to standardize the meaning of verbal probabilities. An experiment (N = 1,202) was conducted to ascertain whether agreement with such a scheme was better when probabilities were presented verbally, numerically or in a combined “verbal + numeric” format. Across three agreement measures, the numeric and combined formats outperformed the verbal format and also yielded better discrimination between low and high probabilities and were less susceptible to the fifty-fifty blip phenomenon. The combined format did not confer any advantage over the purely numeric format. The findings indicate that numerically bounded linguistic probability schemes are an ineffective means of communicating information about probabilities to others and they call into question recommendations for use of the combined format for delivering such schemes.
8. Mandel DR, Dhami MK, Tran S, Irwin D. Arithmetic computation with probability words and numbers. Journal of Behavioral Decision Making 2021. DOI: 10.1002/bdm.2232.
Affiliation(s)
- David R. Mandel
- Intelligence, Influence and Collaboration Section, Toronto Research Centre, Defence Research and Development Canada, Toronto, Ontario, Canada
- Serena Tran
- Intelligence, Influence and Collaboration Section, Toronto Research Centre, Defence Research and Development Canada, Toronto, Ontario, Canada
- Daniel Irwin
- Department of National Defence, Ottawa, Ontario, Canada
9. Coherence of probability judgments from uncertain evidence: Does ACH help? Judgment and Decision Making 2020. DOI: 10.1017/s1930297500008159.
Abstract
Although the Analysis of Competing Hypotheses method (ACH) is a structured analytic technique promoted in several intelligence communities for improving the quality of probabilistic hypothesis testing, it has received little empirical testing. Whereas previous evaluations have used numerical evidence assumed to be perfectly accurate, in the present experiment we tested the effectiveness of ACH using a judgment task that presented participants with uncertain evidence varying in source reliability and information credibility. Participants (N = 227) assigned probabilities to two alternative hypotheses across six cases that systematically varied case features. Across multiple tests of coherence, the ACH group showed no advantage over a no-technique control group. Both groups showed evidence of subadditivity, unreliability, and overly conservative non-Bayesian judgments. The ACH group also showed pseudo-diagnostic weighting of evidence. The findings do not support the claim that ACH is effective at improving probabilistic judgment.
10. Mansour JK. The Confidence-Accuracy Relationship Using Scale Versus Other Methods of Assessing Confidence. Journal of Applied Research in Memory and Cognition 2020. DOI: 10.1016/j.jarmac.2020.01.003.
11.
Abstract
The science of judgment and decision making involves three interrelated forms of research: analysis of the decisions people face, description of their natural responses, and interventions meant to help them do better. After briefly introducing the field's intellectual foundations, we review recent basic research into the three core elements of decision making: judgment, or how people predict the outcomes that will follow possible choices; preference, or how people weigh those outcomes; and choice, or how people combine judgments and preferences to reach a decision. We then review research into two potential sources of behavioral heterogeneity: individual differences in decision-making competence and developmental changes across the life span. Next, we illustrate applications intended to improve individual and organizational decision making in health, public policy, intelligence analysis, and risk management. We emphasize the potential value of coupling analytical and behavioral research and having basic and applied research inform one another.
Affiliation(s)
- Baruch Fischhoff
- Department of Engineering and Public Policy, and Institute for Politics and Strategy, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
- Stephen B. Broomell
- Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
12. Cultivating credibility with probability words and numbers. Judgment and Decision Making 2019. DOI: 10.1017/s1930297500005404.
Abstract
Recent research suggests that communicating probabilities numerically rather than verbally benefits forecasters’ credibility. In two experiments, we tested the reproducibility of this communication-format effect. The effect was replicated under comparable conditions (low-probability, inaccurate forecasts), but it was reversed for low-probability accurate forecasts and eliminated for high-probability forecasts. Experiment 2 further showed that verbal probabilities convey implicit recommendations more clearly than probability information, whereas numeric probabilities do the opposite. Descriptively, the findings indicate that the effect of probability words versus numbers on credibility depends on how these formats convey directionality differently, how directionality implies recommendations even when none are explicitly given, and how such recommendations correspond with outcomes. Prescriptively, we propose that experts distinguish forecasts from advice, using numeric probabilities for the former and well-reasoned arguments for the latter.
13. Juanchich M, Sirota M. Do people really prefer verbal probabilities? Psychological Research 2019;84:2325-2338. PMID: 31250102. DOI: 10.1007/s00426-019-01207-0.
Abstract
When people communicate uncertainty, do they prefer to use words (e.g., "a chance", "possible") or numbers (e.g., "20%", "a 1 in 2 chance")? To answer this question, past research drew from a range of methodologies, yet failed to provide a clear-cut answer. Building on a review of existing methodologies, theoretical accounts and empirical findings, we tested the hypothesis that the preference for a particular format is driven by the variant of uncertainty that people experience. We expected that epistemic uncertainty would be more often communicated in words, whereas distributional uncertainty would be more often communicated in numbers; for the dispositional uncertainty, we expected that an individual's disposition would be more often communicated in words, whereas dispositions from the world would be more often communicated numerically. In three experiments (one oral, two written), participants communicated their uncertainty regarding two outcomes per variants of uncertainty: epistemic, dispositional and distributional. Overall, participants communicated their uncertainty more often in words, but this preference depended on the variants of uncertainty. Participants conveyed their epistemic and dispositional uncertainties more often in words and their distributional uncertainty in numbers (Experiments 1 and 2) but this effect was greatly reduced when the precision of uncertainty was held constant (Experiment 3), pointing out the key role of uncertainty vagueness. We have reviewed the implications of our findings for the existing accounts of format preferences.
Affiliation(s)
- Marie Juanchich
- Department of Psychology, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK
- Miroslav Sirota
- Department of Psychology, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK
14. Løhre E, Sobkow A, Hohle SM, Teigen KH. Framing experts' (dis)agreements about uncertain environmental events. Journal of Behavioral Decision Making 2019. DOI: 10.1002/bdm.2132.
Affiliation(s)
- Erik Løhre
- Software Engineering Department, Simula Research Laboratory, Oslo, Norway
- Department of Psychology, Inland Norway University of Applied Sciences, Lillehammer, Norway
- Agata Sobkow
- Wroclaw Faculty of Psychology, Center for Research on Improving Decision Making (CRIDM), SWPS University of Social Sciences and Humanities, Wroclaw, Poland
- Karl Halvor Teigen
- Software Engineering Department, Simula Research Laboratory, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
15. van der Bles AM, van der Linden S, Freeman ALJ, Mitchell J, Galvao AB, Zaval L, Spiegelhalter DJ. Communicating uncertainty about facts, numbers and science. Royal Society Open Science 2019;6:181870. PMID: 31218028. PMCID: PMC6549952. DOI: 10.1098/rsos.181870.
Abstract
Uncertainty is an inherent part of knowledge, and yet in an era of contested expertise, many shy away from openly communicating their uncertainty about what they know, fearful of their audience's reaction. But what effect does communication of such epistemic uncertainty have? Empirical research is widely scattered across many disciplines. This interdisciplinary review structures and summarizes current practice and research across domains, combining a statistical and psychological perspective. This informs a framework for uncertainty communication in which we identify three objects of uncertainty-facts, numbers and science-and two levels of uncertainty: direct and indirect. An examination of current practices provides a scale of nine expressions of direct uncertainty. We discuss attempts to codify indirect uncertainty in terms of quality of the underlying evidence. We review the limited literature about the effects of communicating epistemic uncertainty on cognition, affect, trust and decision-making. While there is some evidence that communicating epistemic uncertainty does not necessarily affect audiences negatively, impact can vary between individuals and communication formats. Case studies in economic statistics and climate change illustrate our framework in action. We conclude with advice to guide both communicators and future researchers in this important but so far rather neglected field.
Affiliation(s)
- Anne Marthe van der Bles
- Winton Centre for Risk and Evidence Communication, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge, UK
- Cambridge Social Decision-Making Lab, Department of Psychology, University of Cambridge, Cambridge, UK
- Sander van der Linden
- Winton Centre for Risk and Evidence Communication, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge, UK
- Cambridge Social Decision-Making Lab, Department of Psychology, University of Cambridge, Cambridge, UK
- Alexandra L. J. Freeman
- Winton Centre for Risk and Evidence Communication, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge, UK
- James Mitchell
- Warwick Business School, University of Warwick, Coventry, UK
- Ana B. Galvao
- Warwick Business School, University of Warwick, Coventry, UK
- Lisa Zaval
- Department of Psychology, Columbia University, New York, NY, USA
- David J. Spiegelhalter
- Winton Centre for Risk and Evidence Communication, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge, UK
16. Too soon to tell if the US intelligence community prediction market is more accurate than intelligence reports: Commentary on Stastny and Lehner (2018). Judgment and Decision Making 2019. DOI: 10.1017/s1930297500004320.
Abstract
Stastny and Lehner (2018) reported a study comparing the forecast accuracy of a US intelligence community prediction market (ICPM) to traditionally produced intelligence reports. Five analysts unaffiliated with the intelligence reports imputed forecasts from the reports after stating their personal forecasts on the same forecasting questions. The authors claimed that the accuracy of the ICPM was significantly greater than that of the intelligence reports and suggest this may have been due to methods that harness crowd wisdom. However, additional analyses conducted here show that the imputer’s personal forecasts, which were made individually, were as accurate as ICPM forecasts. In fact, their updated personal forecasts (made after reading the intelligence reports) were marginally more accurate than ICPM forecasts. Imputed forecasts are also strongly correlated with the imputers’ personal forecasts, casting doubt on the degree to which the imputation was in fact a reliably inter-subjective assessment of what intelligence reports implied about the forecasting questions. Alternative methods for comparing intelligence community forecasting methods are discussed.
17. Wintle BC, Fraser H, Wills BC, Nicholson AE, Fidler F. Verbal probabilities: Very likely to be somewhat more confusing than numbers. PLoS One 2019;14:e0213522. PMID: 30995242. PMCID: PMC6469752. DOI: 10.1371/journal.pone.0213522.
Abstract
People interpret verbal expressions of probabilities (e.g. 'very likely') in different ways, yet words are commonly preferred to numbers when communicating uncertainty. Simply providing numerical translations alongside reports or text containing verbal probabilities should encourage consistency, but these guidelines are often ignored. In an online experiment with 924 participants, we compared four different formats for presenting verbal probabilities with the numerical guidelines used in the US Intelligence Community Directive (ICD) 203 to see whether any could improve the correspondence between the intended meaning and participants' interpretation ('in-context'). This extends previous work in the domain of climate science. The four experimental conditions we tested were: 1. numerical guidelines bracketed in text, e.g. X is very unlikely (05-20%), 2. click to see the full guidelines table in a new window, 3. numerical guidelines appear in a mouse over tool tip, and 4. no guidelines provided (control). Results indicate that correspondence with the ICD 203 standard is substantially improved only when numerical guidelines are bracketed in text. For this condition, average correspondence was 66%, compared with 32% in the control. We also elicited 'context-free' numerical judgements from participants for each of the seven verbal probability expressions contained in ICD 203 (i.e., we asked participants what range of numbers they, personally, would assign to those expressions), and constructed 'evidence-based lexicons' based on two methods from similar research, 'membership functions' and 'peak values', that reflect our large sample's intuitive translations of the terms. Better aligning the intended and assumed meaning of fuzzy words like 'unlikely' can reduce communication problems between the reporter and receiver of probabilistic information. In turn, this can improve decision making under uncertainty.
Affiliation(s)
- Bonnie C. Wintle
- School of BioSciences, University of Melbourne, Melbourne, Australia
- Hannah Fraser
- School of BioSciences, University of Melbourne, Melbourne, Australia
- Ben C. Wills
- School of Historical and Philosophical Studies, University of Melbourne, Melbourne, Australia
- The Hastings Center, Garrison, NY, United States of America
- Ann E. Nicholson
- Faculty of Information Technology, Monash University, Melbourne, Australia
- Fiona Fidler
- School of BioSciences, University of Melbourne, Melbourne, Australia
- School of Historical and Philosophical Studies, University of Melbourne, Melbourne, Australia
18. Hart A, Maxim L, Siegrist M, Von Goetz N, da Cruz C, Merten C, Mosbach-Schulz O, Lahaniatis M, Smith A, Hardy A. Guidance on Communication of Uncertainty in Scientific Assessments. EFSA Journal 2019;17:e05520. PMID: 32626067. PMCID: PMC7292191. DOI: 10.2903/j.efsa.2019.5520.
Abstract
This document provides guidance for communicators on how to communicate the various expressions of uncertainty described in EFSA's document: 'Guidance on uncertainty analysis in scientific assessments'. It also contains specific guidance for assessors on how best to report the various expressions of uncertainty. The document provides a template for identifying expressions of uncertainty in scientific assessments and locating the specific guidance for each expression. The guidance is structured according to EFSA's three broadly defined categories of target audience: 'entry', 'informed' and 'technical' levels. Communicators should use the guidance for entry and informed audiences, while assessors should use the guidance for the technical level. The guidance was formulated using evidence from the scientific literature, grey literature and two EFSA research studies, or based on judgement and reasoning where evidence was incomplete or missing. The limitations of the evidence sources inform the recommendations for further research on uncertainty communication.
19. Etienne J, Chirico S, Gunabalasingham T, Jarvis A. Final report: Clear Communications and Uncertainty. EFSA Supporting Publications 2018. DOI: 10.2903/sp.efsa.2018.en-1412.
20. Taylor BJ, Stevenson M, McDowell M. Communicating risk in dementia care: Survey of health and social care professionals. Health & Social Care in the Community 2018;26:e291-e303. PMID: 29226458. DOI: 10.1111/hsc.12519.
Abstract
Supporting people to live at home in line with community care policies requires increasing attention to assessing, communicating and managing risks. There is a challenge in supporting client choices that include risk-taking while demonstrating professional accountability. Risk communication becomes increasingly important with the need to engage clients and families in meaningful shared decision-making. This presents particular challenges in dementia services. This survey of risk communication in dementia care was administered to all health and social care professionals in community dementia services in Northern Ireland (June-September 2016). Of 270 professionals, 70 questionnaires were fully completed, with 55 partial completions. Scores on the Berlin Numeracy Test plus Schwartz items were low to moderate (mean 2.79 out of 7). This study did not find a significant association between numeracy and accurate perceptions of risk likelihoods in practice-based scenarios. Although 86% reported using numeric information in practice (mostly from assessment tools), respondents themselves rarely communicated using numbers. As in other domains, participants' responses were widely variable on numeric estimates of verbal terms for likelihood. In relation to medication side effects, few participants provided responses that were concordant with those in the guidance of the European Union. The risks most commonly encountered in practice were (in rank order): falls, depression, poor personal hygiene, medicines mismanagement, leaving home unsupervised, financial mismanagement, malnutrition, swallowing difficulties, abuse from others, risks to others, home appliance accidents and refusing equipment.
Respondents generally overestimated the likelihood of serious harmful events by approximately 10-fold (having a missing person's report filed with the police; having a fall resulting in hospitalisation) and by approximately double (being involved in a car accident; causing a home fire), and with wide variation between respondents. There is potential in icon arrays for communicating risks. Risk literacy among dementia care practitioners needs to be developed.
Collapse
Affiliation(s)
- Brian J Taylor
- Institute for Social Sciences, Ulster University, Northern Ireland, UK
- Mabel Stevenson
- School of Sociology & Applied Social Studies, Ulster University, Northern Ireland, UK
- Michelle McDowell
- Harding Centre for Risk Literacy, Max Planck Institute, Berlin, Germany
21
Jenkins SC, Harris AJ, Lark R. Understanding ‘Unlikely (20% Likelihood)’ or ‘20% Likelihood (Unlikely)’ Outcomes: The Robustness of the Extremity Effect. JOURNAL OF BEHAVIORAL DECISION MAKING 2018. [DOI: 10.1002/bdm.2072] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Sarah C. Jenkins
- Department of Experimental Psychology, University College London, London, UK
- Adam J.L. Harris
- Department of Experimental Psychology, University College London, London, UK
- R.M. Lark
- Environmental Science Centre, British Geological Survey (BGS), Nottingham, UK
22
Benford D, Halldorsson T, Jeger MJ, Knutsen HK, More S, Naegeli H, Noteborn H, Ockleford C, Ricci A, Rychen G, Schlatter JR, Silano V, Solecki R, Turck D, Younes M, Craig P, Hart A, Von Goetz N, Koutsoumanis K, Mortensen A, Ossendorp B, Germini A, Martino L, Merten C, Mosbach-Schulz O, Smith A, Hardy A. The principles and methods behind EFSA's Guidance on Uncertainty Analysis in Scientific Assessment. EFSA J 2018; 16:e05122. [PMID: 32625670 PMCID: PMC7009645 DOI: 10.2903/j.efsa.2018.5122] [Citation(s) in RCA: 92] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
To meet the general requirement for transparency in EFSA's work, all its scientific assessments must consider uncertainty. Assessments must state clearly and unambiguously which sources of uncertainty have been identified and what their impact on the assessment conclusion is. This applies to all areas of EFSA's work, all types of scientific assessment and all types of uncertainty affecting assessments. This Opinion describes the principles and methods supporting a concise Guidance Document on Uncertainty in EFSA's Scientific Assessment, published separately. These documents do not prescribe specific methods for uncertainty analysis but rather provide a flexible framework within which different methods may be selected according to the needs of each assessment. Assessors should systematically identify sources of uncertainty, checking each part of their assessment to minimise the risk of overlooking important uncertainties. Uncertainty may be expressed qualitatively or quantitatively. It is neither necessary nor possible to quantify every source of uncertainty affecting an assessment separately. However, assessors should express in quantitative terms the combined effect of as many of the identified sources of uncertainty as possible. The guidance describes practical approaches for this. Uncertainty analysis should be conducted in a flexible, iterative manner, starting at a level appropriate to the assessment and refining the analysis as far as is needed or possible within the time available. The methods and results of the uncertainty analysis should be reported fully and transparently. Every EFSA Panel and Unit applied the draft Guidance to at least one assessment in its work area during a one-year trial period, and experience gained in this period resulted in improved guidance. The Scientific Committee considers that uncertainty analysis will be unconditional for EFSA Panels and staff and must be embedded into scientific assessment in all areas of EFSA's work.
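The abstract's point that assessors should express the combined effect of multiple uncertainty sources quantitatively can be sketched with a simple Monte Carlo simulation. This is an illustrative assumption-laden example, not EFSA's prescribed method: each source is modelled as a uniform multiplicative factor range, and the combined distribution yields a central estimate with a 95% probability interval.

```python
import random

def combined_uncertainty(sources, trials=10_000, seed=42):
    """Monte Carlo sketch: combine independent multiplicative uncertainty
    sources into an overall distribution, returning a median estimate and
    a 95% probability interval.

    `sources` is a list of (low, high) factor ranges, each sampled
    uniformly -- a modelling assumption for illustration only.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        factor = 1.0
        for low, high in sources:
            factor *= rng.uniform(low, high)
        outcomes.append(factor)
    outcomes.sort()
    return (outcomes[trials // 2],          # median
            outcomes[int(trials * 0.025)],  # 2.5th percentile
            outcomes[int(trials * 0.975)])  # 97.5th percentile

# Three hypothetical uncertainty sources expressed as factor ranges.
median, lo, hi = combined_uncertainty([(0.8, 1.2), (0.9, 1.5), (0.7, 1.1)])
```

The point the simulation makes is the one in the guidance: even when no single source can be quantified precisely, their combined effect can still be reported as a quantitative interval rather than left qualitative.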
23
Collins PJ, Hahn U. Communicating and reasoning with verbal probability expressions. PSYCHOLOGY OF LEARNING AND MOTIVATION 2018. [DOI: 10.1016/bs.plm.2018.10.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
24
Mandel DR, Barnes A. Geopolitical Forecasting Skill in Strategic Intelligence. JOURNAL OF BEHAVIORAL DECISION MAKING 2017. [DOI: 10.1002/bdm.2055] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]