1. Pfänder J, Altay S. Spotting false news and doubting true news: a systematic review and meta-analysis of news judgements. Nat Hum Behav 2025; 9:688-699. PMID: 39984640; PMCID: PMC12018262; DOI: 10.1038/s41562-024-02086-1.
Abstract
How good are people at judging the veracity of news? We conducted a systematic literature review and pre-registered meta-analysis of 303 effect sizes from 67 experimental articles evaluating accuracy ratings of true and fact-checked false news (N = 194,438 participants from 40 countries across 6 continents). We found that people rated true news as more accurate than false news (Cohen's d = 1.12 [1.01, 1.22]) and were better at rating false news as false than at rating true news as true (Cohen's d = 0.32 [0.24, 0.39]). In other words, participants were able to discern true from false news and erred on the side of skepticism rather than credulity. We found no evidence that the political concordance of the news had an effect on discernment, but participants were more skeptical of politically discordant news (Cohen's d = 0.78 [0.62, 0.94]). These findings lend support to crowdsourced fact-checking initiatives and suggest that, to improve discernment, there is more room to increase the acceptance of true news than to reduce the acceptance of fact-checked false news.
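As a rough illustration (not the authors' meta-analytic code), the sketch below computes standardized mean differences of the kind reported above from simulated accuracy ratings: one for discernment (true vs. false news ratings) and one, under a simplified midpoint-distance operationalization, for the skepticism bias. All data and parameter values are invented.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
# Hypothetical 1-7 accuracy ratings for true and false headlines (invented values).
true_ratings = rng.normal(4.6, 1.2, size=1000).clip(1, 7)
false_ratings = rng.normal(3.0, 1.2, size=1000).clip(1, 7)

# Discernment: true news rated as more accurate than false news.
print("discernment d:", round(cohens_d(true_ratings, false_ratings), 2))

# Skepticism bias, simplified here as distance from the scale midpoint:
# how strongly false news is (correctly) rated inaccurate vs. how strongly
# true news is (correctly) rated accurate.
midpoint = 4.0
false_correctness = midpoint - false_ratings
true_correctness = true_ratings - midpoint
print("skepticism d: ", round(cohens_d(false_correctness, true_correctness), 2))
```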
Affiliation(s)
- Jan Pfänder
  - Institut Jean Nicod, Département d'études cognitives, ENS, EHESS, PSL University, CNRS, Paris, France
- Sacha Altay
  - Department of Political Science, University of Zurich, Zürich, Switzerland
2. Kim J, Wang Z, Shi H, Ling HK, Evans J. Differential impact from individual versus collective misinformation tagging on the diversity of Twitter (X) information engagement and mobility. Nat Commun 2025; 16:973. PMID: 39856045; PMCID: PMC11760358; DOI: 10.1038/s41467-025-55868-0.
Abstract
Fears about the destabilizing impact of misinformation online have motivated individuals and platforms to respond. Individuals have increasingly challenged others' online claims with fact-checks in pursuit of a healthier information ecosystem and to break down echo chambers of self-reinforcing opinion. Using Twitter (now X) data, here we show the consequences of individual misinformation tagging: tagged posters had explored novel political information and expanded topical interests immediately prior, but being tagged caused posters to retreat into information bubbles. These unintended consequences were softened by a collective verification system for misinformation moderation. In Twitter's new feature, Community Notes, misinformation tagging was peer-reviewed by other fact-checkers before revelation to the poster. With collective misinformation tagging, posters were less likely to retreat from diverse information engagement. Detailed comparison demonstrated differences in toxicity, sentiment, readability, and delay in individual versus collective misinformation tagging messages. These findings provide evidence for differential impacts from individual versus collective moderation strategies on the diversity of information engagement and mobility across the information ecosystem.
Affiliation(s)
- Junsol Kim
  - Department of Sociology, University of Chicago, Chicago, IL, USA
- Zhao Wang
  - Computational Social Science, University of Chicago, Chicago, IL, USA
- Haohan Shi
  - School of Communication, Northwestern University, Evanston, IL, USA
- Hsin-Keng Ling
  - Department of Sociology, University of Michigan, Ann Arbor, MI, USA
- James Evans
  - Department of Sociology, University of Chicago, Chicago, IL, USA
  - Computational Social Science, University of Chicago, Chicago, IL, USA
  - Santa Fe Institute, Santa Fe, NM, USA
3. McLoughlin KL, Brady WJ, Goolsbee A, Kaiser B, Klonick K, Crockett MJ. Misinformation exploits outrage to spread online. Science 2024; 386:991-996. PMID: 39607912; DOI: 10.1126/science.adl2829.
Abstract
We tested a hypothesis that misinformation exploits outrage to spread online, examining generalizability across multiple platforms, time periods, and classifications of misinformation. Outrage is highly engaging and need not be accurate to achieve its communicative goals, making it an attractive signal to embed in misinformation. In eight studies that used US data from Facebook (1,063,298 links) and Twitter (44,529 tweets, 24,007 users) and two behavioral experiments (1475 participants), we show that (i) misinformation sources evoke more outrage than do trustworthy sources; (ii) outrage facilitates the sharing of misinformation at least as strongly as sharing of trustworthy news; and (iii) users are more willing to share outrage-evoking misinformation without reading it first. Consequently, outrage-evoking misinformation may be difficult to mitigate with interventions that assume users want to share accurate information.
Affiliation(s)
- Killian L McLoughlin
  - Department of Psychology, Princeton University, Princeton, NJ, USA
  - School of Public and International Affairs, Princeton University, Princeton, NJ, USA
- William J Brady
  - Kellogg School of Management, Northwestern University, Evanston, IL, USA
- Aden Goolsbee
  - Department of Psychology, Yale University, New Haven, CT, USA
- Ben Kaiser
  - Center for Information Technology Policy, Princeton University, Princeton, NJ, USA
- Kate Klonick
  - School of Law, St. John's University, Queens, NY, USA
  - Information Society Project, Yale University, New Haven, CT, USA
  - Brookings Institution, Washington, DC, USA
  - Berkman Klein Center, Harvard University, Cambridge, MA, USA
- M J Crockett
  - Department of Psychology, Princeton University, Princeton, NJ, USA
  - Center for Information Technology Policy, Princeton University, Princeton, NJ, USA
  - University Center for Human Values, Princeton University, Princeton, NJ, USA
4. Jia C, Lee AY, Moore RC, Decatur CHS, Liu SX, Hancock JT. Collaboration, crowdsourcing, and misinformation. PNAS Nexus 2024; 3:pgae434. PMID: 39430219; PMCID: PMC11488513; DOI: 10.1093/pnasnexus/pgae434.
Abstract
One of humanity's greatest strengths lies in our ability to collaborate to achieve more than we can alone. Just as collaboration can be an important strength, humankind's inability to detect deception is one of our greatest weaknesses. Recently, our struggles with deception detection have been the subject of scholarly and public attention with the rise and spread of misinformation online, which threatens public health and civic society. Fortunately, prior work indicates that going beyond the individual can ameliorate weaknesses in deception detection by promoting active discussion or by harnessing the "wisdom of crowds." Can group collaboration similarly enhance our ability to recognize online misinformation? We conducted a lab experiment where participants assessed the veracity of credible news and misinformation on social media either as an actively collaborating group or while working alone. Our results suggest that collaborative groups were more accurate than individuals at detecting false posts, but not more accurate than a majority-based simulated group, suggesting that "wisdom of crowds" is the more efficient method for identifying misinformation. Our findings reorient research and policy from focusing on the individual to approaches that rely on crowdsourcing or potentially on collaboration in addressing the problem of misinformation.
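The "majority-based simulated group" baseline mentioned above can be sketched as follows: pool independent individual judgments by majority vote and compare group accuracy with mean individual accuracy. The rater counts and accuracy levels below are hypothetical, not the study's materials.

```python
import numpy as np

rng = np.random.default_rng(1)
n_posts, n_raters, p_correct = 200, 9, 0.62    # hypothetical values

truth = rng.integers(0, 2, size=n_posts)        # 1 = credible post, 0 = misinformation

# Each rater labels each post independently, correct with probability p_correct.
correct = rng.random((n_raters, n_posts)) < p_correct
labels = np.where(correct, truth, 1 - truth)

individual_accuracy = correct.mean()
majority_labels = (labels.sum(axis=0) > n_raters / 2).astype(int)
majority_accuracy = (majority_labels == truth).mean()

print(f"mean individual accuracy:         {individual_accuracy:.3f}")
print(f"simulated majority-vote accuracy: {majority_accuracy:.3f}")
```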
Affiliation(s)
- Chenyan Jia
  - College of Arts, Media and Design and Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA
  - Department of Communication, Stanford University, Stanford, CA 94305, USA
- Angela Yuson Lee
  - Department of Communication, Stanford University, Stanford, CA 94305, USA
- Ryan C Moore
  - Department of Communication, Stanford University, Stanford, CA 94305, USA
- Sunny Xun Liu
  - Department of Communication, Stanford University, Stanford, CA 94305, USA
- Jeffrey T Hancock
  - Department of Communication, Stanford University, Stanford, CA 94305, USA
5. Globig LK, Sharot T. Considering information-sharing motives to reduce misinformation. Curr Opin Psychol 2024; 59:101852. PMID: 39163810; DOI: 10.1016/j.copsyc.2024.101852.
Abstract
Misinformation has risen in recent years, negatively affecting domains ranging from politics to health. To curb the spread of misinformation it is useful to consider why, how, and when people decide to share information. Here we suggest that information-sharing decisions are value-based choices, in which sharers strive to maximize rewards and minimize losses to themselves and/or others. These outcomes can be tangible, in the form of monetary rewards or losses, or intangible, in the form of social feedback. On social media platforms these rewards and losses are not clearly tied to the accuracy of information shared. Thus, sharers have little incentive to avoid disseminating misinformation. Based on this framework, we propose ways to nudge sharers to prioritize accuracy during information-sharing.
Affiliation(s)
- Laura K Globig
  - Affective Brain Lab, Department of Experimental Psychology, University College London, London, UK
  - Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Psychology, New York University, New York, NY, USA
- Tali Sharot
  - Affective Brain Lab, Department of Experimental Psychology, University College London, London, UK
  - Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
6. Tokita CK, Aslett K, Godel WP, Sanderson Z, Tucker JA, Nagler J, Persily N, Bonneau R. Measuring receptivity to misinformation at scale on a social media platform. PNAS Nexus 2024; 3:pgae396. PMID: 39381645; PMCID: PMC11460357; DOI: 10.1093/pnasnexus/pgae396.
Abstract
Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation's initial spread. Our paper provides a more precise estimate of misinformation's impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
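The core of the exposure-and-receptivity accounting can be illustrated with a simplified sketch: each user carries a probability of seeing a story and a probability of believing it if seen, and the two are multiplied and summed. The per-user probabilities below are invented, not the authors' survey- and trace-based estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users = 100_000

# Invented per-user probabilities standing in for a survey-based belief model
# and an observational exposure model.
p_exposed = rng.beta(1, 20, size=n_users)   # probability a user sees the story
p_believe = rng.beta(2, 5, size=n_users)    # probability the user believes it if seen

expected_exposed = p_exposed.sum()
expected_receptive = (p_exposed * p_believe).sum()   # exposed AND likely to believe

print(f"expected exposed users:   {expected_exposed:,.0f}")
print(f"expected receptive users: {expected_receptive:,.0f}")
print(f"receptive share of the exposed: {expected_receptive / expected_exposed:.1%}")
```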
Affiliation(s)
- Christopher K Tokita
  - Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ 08544, USA
- Kevin Aslett
  - Center for Social Media and Politics, New York University, New York, NY 10012, USA
  - School of Politics, Security, and International Affairs, University of Central Florida, Orlando, FL 32816, USA
- William P Godel
  - Center for Social Media and Politics, New York University, New York, NY 10012, USA
- Zeve Sanderson
  - Center for Social Media and Politics, New York University, New York, NY 10012, USA
- Joshua A Tucker
  - Center for Social Media and Politics, New York University, New York, NY 10012, USA
  - Department of Politics, New York University, New York, NY 10012, USA
- Jonathan Nagler
  - Center for Social Media and Politics, New York University, New York, NY 10012, USA
  - Department of Politics, New York University, New York, NY 10012, USA
- Nathaniel Persily
  - Stanford University Law School, Stanford University, Palo Alto, CA 94305, USA
- Richard Bonneau
  - Center for Social Media and Politics, New York University, New York, NY 10012, USA
  - Prescient Design, a Genentech accelerator, New York, NY 10010, USA
7. Mosleh M, Yang Q, Zaman T, Pennycook G, Rand DG. Differences in misinformation sharing can lead to politically asymmetric sanctions. Nature 2024; 634:609-616. PMID: 39358507; PMCID: PMC11485227; DOI: 10.1038/s41586-024-07942-8.
Abstract
In response to intense pressure, technology companies have enacted policies to combat misinformation [1-4]. The enforcement of these policies has, however, led to technology companies being regularly accused of political bias [5-7]. We argue that differential sharing of misinformation by people identifying with different political groups [8-15] could lead to political asymmetries in enforcement, even by unbiased policies. We first analysed 9,000 politically active Twitter users during the US 2020 presidential election. Although users estimated to be pro-Trump/conservative were indeed substantially more likely to be suspended than those estimated to be pro-Biden/liberal, users who were pro-Trump/conservative also shared far more links to various sets of low-quality news sites (even when news quality was determined by politically balanced groups of laypeople, or groups of only Republican laypeople) and had higher estimated likelihoods of being bots. We find similar associations between stated or inferred conservatism and low-quality news sharing (on the basis of both expert and politically balanced layperson ratings) in 7 other datasets of sharing from Twitter, Facebook and survey experiments, spanning 2016 to 2023 and including data from 16 different countries. Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected. Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies.
Affiliation(s)
- Mohsen Mosleh
  - Oxford Internet Institute, University of Oxford, Oxford, UK
  - Management Department, University of Exeter Business School, Exeter, UK
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Qi Yang
  - Initiative on the Digital Economy, Massachusetts Institute of Technology, Cambridge, MA, USA
- Tauhid Zaman
  - Yale School of Management, Yale University, New Haven, CT, USA
- David G Rand
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Initiative on the Digital Economy, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
8. Barrera-Lemarchand F, Balenzuela P, Bahrami B, Deroy O, Navajas J. Promoting Erroneous Divergent Opinions Increases the Wisdom of Crowds. Psychol Sci 2024; 35:872-886. PMID: 38865591; DOI: 10.1177/09567976241252138.
Abstract
The aggregation of many lay judgments generates surprisingly accurate estimates. This phenomenon, called the "wisdom of crowds," has been demonstrated in domains such as medical decision-making and financial forecasting. Previous research identified two factors driving this effect: the accuracy of individual assessments and the diversity of opinions. Most available strategies to enhance the wisdom of crowds have focused on improving individual accuracy while neglecting the potential of increasing opinion diversity. Here, we study a complementary approach to reduce collective error by promoting erroneous divergent opinions. This strategy proposes to anchor half of the crowd to a small value and the other half to a large value before eliciting and averaging all estimates. Consistent with our mathematical modeling, four experiments (N = 1,362 adults) demonstrated that this method is effective for estimation and forecasting tasks. Beyond the practical implications, these findings offer new theoretical insights into the epistemic value of collective decision-making.
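One standard way to formalize the accuracy-plus-diversity account invoked above is the diversity-prediction identity: the squared error of the crowd average equals the average individual squared error minus the variance (diversity) of the estimates, which is why raising opinion diversity can lower collective error even when individuals become noisier. A short numerical check of that identity with made-up estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
truth = 1_000.0
estimates = rng.lognormal(np.log(truth) - 0.3, 0.5, size=500)   # made-up crowd estimates

collective_error = (estimates.mean() - truth) ** 2
avg_individual_error = ((estimates - truth) ** 2).mean()
diversity = ((estimates - estimates.mean()) ** 2).mean()        # variance of the estimates

print(f"collective squared error:     {collective_error:,.0f}")
print(f"avg individual squared error: {avg_individual_error:,.0f}")
print(f"prediction diversity:         {diversity:,.0f}")
print("identity holds:", np.isclose(collective_error, avg_individual_error - diversity))
```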
Affiliation(s)
- Federico Barrera-Lemarchand
  - Laboratorio de Neurociencia, Universidad Torcuato Di Tella
  - National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
  - Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires
- Pablo Balenzuela
  - National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
  - Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires
- Bahador Bahrami
  - Crowd Cognition Group, Department of General Psychology and Education, Ludwig Maximilian University
  - Department of Psychology, Royal Holloway University of London
  - Centre for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Ophelia Deroy
  - Munich Centre for Neuroscience, Ludwig Maximilian University
  - Institute of Philosophy, School of Advanced Study, University of London
  - Faculty of Philosophy, Ludwig Maximilian University
- Joaquin Navajas
  - Laboratorio de Neurociencia, Universidad Torcuato Di Tella
  - National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
  - Escuela de Negocios, Universidad Torcuato Di Tella
9. Drolsbach CP, Solovev K, Pröllochs N. Community notes increase trust in fact-checking on social media. PNAS Nexus 2024; 3:pgae217. PMID: 38948016; PMCID: PMC11212665; DOI: 10.1093/pnasnexus/pgae217.
Abstract
Community-based fact-checking is a promising approach to fact-check social media content at scale. However, an understanding of whether users trust community fact-checks is missing. Here, we presented n = 1,810 Americans with 36 misleading and nonmisleading social media posts and assessed their trust in different types of fact-checking interventions. Participants were randomly assigned to treatments where misleading content was either accompanied by simple (i.e. context-free) misinformation flags in different formats (expert flags or community flags), or by textual "community notes" explaining why the fact-checked post was misleading. Across both sides of the political spectrum, community notes were perceived as significantly more trustworthy than simple misinformation flags. Our results further suggest that the higher trustworthiness primarily stemmed from the context provided in community notes (i.e. fact-checking explanations) rather than generally higher trust towards community fact-checkers. Community notes also improved the identification of misleading posts. In sum, our work implies that context matters in fact-checking and that community notes might be an effective approach to mitigate trust issues with simple misinformation flags.
10. Howe PDL, Perfors A, Ransom KJ, Walker B, Fay N, Kashima Y, Saletta M, Dong S. Self-certification: a novel method for increasing sharing discernment on social media. PLoS One 2024; 19:e0303025. PMID: 38861506; PMCID: PMC11166272; DOI: 10.1371/journal.pone.0303025.
Abstract
The proliferation of misinformation on social media platforms has given rise to growing demands for effective intervention strategies that increase sharing discernment (i.e., increase the difference between the probability of sharing true posts and the probability of sharing false posts). One suggested method is to encourage users to deliberate on the veracity of the information prior to sharing. However, this strategy is undermined by individuals' propensity to share posts they acknowledge as false. In our study, across three experiments in a simulated social media environment, participants were shown social media posts and asked whether they wished to share them and, sometimes, whether they believed the posts to be truthful. We observe that requiring users to verify their belief in a news post's truthfulness before sharing it markedly curtails the dissemination of false information. Thus, requiring self-certification increased sharing discernment. Importantly, requiring self-certification did not hinder users from sharing content they genuinely believed to be true, because participants were allowed to share any posts that they indicated were true. We propose self-certification as a method that substantially curbs the spread of misleading content on social media without infringing upon the principle of free speech.
Affiliation(s)
- Andrew Perfors
  - School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Keith J. Ransom
  - School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Bradley Walker
  - School of Psychological Science, University of Western Australia, Perth, WA, Australia
  - School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Perth, WA, Australia
- Nicolas Fay
  - School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Yoshi Kashima
  - School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Morgan Saletta
  - Hunt Laboratory, University of Melbourne, Melbourne, VIC, Australia
- Sihan Dong
  - School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
11. Allen J, Watts DJ, Rand DG. Quantifying the impact of misinformation and vaccine-skeptical content on Facebook. Science 2024; 384:eadk3451. PMID: 38815040; DOI: 10.1126/science.adk3451.
Abstract
Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (N ≈ 233 million). We estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although misinformation reduced predicted vaccination intentions significantly more than unflagged vaccine content when viewed, Facebook users' exposure to flagged content was limited. In contrast, unflagged stories highlighting rare deaths after vaccination were among Facebook's most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
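The headline comparison above is, at its core, an exposure-weighted accounting exercise: total impact is roughly the persuasive effect per view times the number of views, summed over content classes. The sketch below illustrates that bookkeeping with invented numbers, not the study's estimates.

```python
# Invented per-class figures: persuasive effect per view (change in vaccination
# intention) and total views; only the accounting logic mirrors the abstract.
content_classes = {
    "flagged misinformation": {"effect_per_view": -0.005, "views": 8_000_000},
    "unflagged vaccine-skeptical": {"effect_per_view": -0.001, "views": 2_000_000_000},
}

impacts = {name: c["effect_per_view"] * c["views"] for name, c in content_classes.items()}
for name, impact in impacts.items():
    print(f"{name:>28}: expected intention change ~ {impact:,.0f}")

ratio = impacts["unflagged vaccine-skeptical"] / impacts["flagged misinformation"]
print(f"unflagged / flagged impact ratio: {ratio:.0f}x")
```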
Affiliation(s)
- Jennifer Allen
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Duncan J Watts
  - Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
  - Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, USA
  - Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA, USA
- David G Rand
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
12. Hasan E, Duhaime E, Trueblood JS. Boosting wisdom of the crowd for medical image annotation using training performance and task features. Cogn Res Princ Implic 2024; 9:31. PMID: 38763994; PMCID: PMC11102897; DOI: 10.1186/s41235-024-00558-6.
Abstract
A crucial bottleneck in medical artificial intelligence (AI) is high-quality labeled medical datasets. In this paper, we test a large variety of wisdom of the crowd algorithms to label medical images that were initially classified by individuals recruited through an app-based platform. Individuals classified skin lesions from the International Skin Lesion Challenge 2018 into 7 different categories. There was a large dispersion in the geographical location, experience, training, and performance of the recruited individuals. We tested several wisdom of the crowd algorithms of varying complexity from a simple unweighted average to more complex Bayesian models that account for individual patterns of errors. Using a switchboard analysis, we observe that the best-performing algorithms rely on selecting top performers, weighting decisions by training accuracy, and take into account the task environment. These algorithms far exceed expert performance. We conclude by discussing the implications of these approaches for the development of medical AI.
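A hedged sketch of the simplest aggregation rules compared above (unweighted vote, weighting by training accuracy, and restricting to top performers) for a multi-class labeling task; the raters, weights, and thresholds are invented for illustration, and the paper's Bayesian models are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_raters, n_images, n_classes = 30, 100, 7

true_labels = rng.integers(0, n_classes, size=n_images)
skill = rng.uniform(0.3, 0.8, size=n_raters)                  # hypothetical per-rater accuracy
training_accuracy = skill + rng.normal(0, 0.05, n_raters)     # noisy estimate from training items

# Each rater is correct with probability `skill`, otherwise picks a random class.
correct = rng.random((n_raters, n_images)) < skill[:, None]
random_labels = rng.integers(0, n_classes, size=(n_raters, n_images))
labels = np.where(correct, true_labels[None, :], random_labels)

def weighted_vote(labels, weights):
    """Return, for each image, the class with the largest total weight."""
    votes = np.zeros((n_classes, labels.shape[1]))
    for r, w in enumerate(weights):
        votes[labels[r], np.arange(labels.shape[1])] += w
    return votes.argmax(axis=0)

uniform = weighted_vote(labels, np.ones(n_raters))
weighted = weighted_vote(labels, np.clip(training_accuracy, 0, None))
top = np.argsort(training_accuracy)[-10:]                     # keep the 10 best-trained raters
top_only = weighted_vote(labels[top], np.ones(len(top)))

for name, pred in [("unweighted vote", uniform),
                   ("accuracy-weighted", weighted),
                   ("top-10 raters only", top_only)]:
    print(f"{name:>20}: {(pred == true_labels).mean():.3f}")
```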
Affiliation(s)
- Eeshan Hasan
  - Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN 47405-7007, USA
  - Cognitive Science Program, Indiana University, Bloomington, USA
- Jennifer S Trueblood
  - Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN 47405-7007, USA
  - Cognitive Science Program, Indiana University, Bloomington, USA
13. Blanchard MD, Herzog SM, Kämmer JE, Zöller N, Kostopoulou O, Kurvers RHJM. Collective Intelligence Increases Diagnostic Accuracy in a General Practice Setting. Med Decis Making 2024; 44:451-462. PMID: 38606597; PMCID: PMC11102639; DOI: 10.1177/0272989x241241001.
Abstract
BACKGROUND: General practitioners (GPs) work in an ill-defined environment where diagnostic errors are prevalent. Previous research indicates that aggregating independent diagnoses can improve diagnostic accuracy in a range of settings. We examined whether aggregating independent diagnoses can also improve diagnostic accuracy for GP decision making. In addition, we investigated the potential benefit of such an approach in combination with a decision support system (DSS).
METHODS: We simulated virtual groups using data sets from 2 previously published studies. In study 1, 260 GPs independently diagnosed 9 patient cases in a vignette-based study. In study 2, 30 GPs independently diagnosed 12 patient actors in a patient-facing study. In both data sets, GPs provided diagnoses in a control condition and/or DSS condition(s). Each GP's diagnosis, confidence rating, and years of experience were entered into a computer simulation. Virtual groups of varying sizes (range: 3-9) were created, and different collective intelligence rules (plurality, confidence, and seniority) were applied to determine each group's final diagnosis. Diagnostic accuracy was used as the performance measure.
RESULTS: Aggregating independent diagnoses by weighing them equally (i.e., the plurality rule) substantially outperformed average individual accuracy, and this effect increased with increasing group size. Selecting diagnoses based on confidence only led to marginal improvements, while selecting based on seniority reduced accuracy. Combining the plurality rule with a DSS further boosted performance.
DISCUSSION: Combining independent diagnoses may substantially improve a GP's diagnostic accuracy and subsequent patient outcomes. This approach did, however, not improve accuracy in all patient cases. Therefore, future work should focus on uncovering the conditions under which collective intelligence is most beneficial in general practice.
HIGHLIGHTS:
- We examined whether aggregating independent diagnoses of GPs can improve diagnostic accuracy.
- Using data sets of 2 previously published studies, we composed virtual groups of GPs and combined their independent diagnoses using 3 collective intelligence rules (plurality, confidence, and seniority).
- Aggregating independent diagnoses by weighing them equally substantially outperformed average individual GP accuracy, and this effect increased with increasing group size.
- Combining independent diagnoses may substantially improve GPs' diagnostic accuracy and subsequent patient outcomes.
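A hedged sketch of the plurality rule described in the methods: sample a virtual group from the pool of independent diagnoses for a case and take the most frequent diagnosis, breaking ties at random. The case, diagnoses, and group sizes below are invented.

```python
import random
from collections import Counter

def plurality_diagnosis(diagnoses, group_size, rng=random):
    """Aggregate one virtual group's independent diagnoses with the plurality rule."""
    group = rng.sample(diagnoses, group_size)
    counts = Counter(group)
    top = max(counts.values())
    candidates = [dx for dx, c in counts.items() if c == top]
    return rng.choice(candidates)   # break ties at random

# Invented independent diagnoses from nine GPs for one case.
case_diagnoses = ["pulmonary embolism", "pneumonia", "pulmonary embolism",
                  "panic attack", "pulmonary embolism", "pneumonia",
                  "pulmonary embolism", "pneumonia", "pulmonary embolism"]

random.seed(5)
for k in (3, 5, 9):
    print(f"virtual group of {k}: {plurality_diagnosis(case_diagnoses, k)}")
```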
Affiliation(s)
- Juliane E. Kämmer
  - Department of Social and Communication Psychology, Institute for Psychology, University of Goettingen, Germany
  - Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Nikolas Zöller
  - Max Planck Institute for Human Development, Berlin, Germany
- Olga Kostopoulou
  - Institute for Global Health Innovation, Imperial College London, UK
14. Martel C, Rathje S, Clark CJ, Pennycook G, Van Bavel JJ, Rand DG, van der Linden S. On the Efficacy of Accuracy Prompts Across Partisan Lines: An Adversarial Collaboration. Psychol Sci 2024; 35:435-450. PMID: 38506937; DOI: 10.1177/09567976241232905.
Abstract
The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people's news-sharing decisions. However, researchers disagree on whether accuracy-prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this preregistered adversarial collaboration, we tested this question using a multiverse meta-analysis (k = 21; N = 27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline "evaluation" treatments (a critical test for one research team) such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, we observed significant partisan moderation in 50% of specifications (all of which were considered critical for the other team). We discuss the conditions under which moderation is observed and offer interpretations.
Affiliation(s)
- Cameron Martel
  - Sloan School of Management, Massachusetts Institute of Technology
- Cory J Clark
  - The Wharton School, University of Pennsylvania
  - School of Arts and Sciences, University of Pennsylvania
- David G Rand
  - Sloan School of Management, Massachusetts Institute of Technology
  - Institute for Data, Systems, and Society, Massachusetts Institute of Technology
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
15. Stewart AJ, Arechar AA, Rand DG, Plotkin JB. The distorting effects of producer strategies: Why engagement does not reveal consumer preferences for misinformation. Proc Natl Acad Sci U S A 2024; 121:e2315195121. PMID: 38412133; DOI: 10.1073/pnas.2315195121.
Abstract
A great deal of empirical research has examined who falls for misinformation and why. Here, we introduce a formal game-theoretic model of engagement with news stories that captures the strategic interplay between (mis)information consumers and producers. A key insight from the model is that observed patterns of engagement do not necessarily reflect the preferences of consumers. This is because producers seeking to promote misinformation can use strategies that lead moderately inattentive readers to engage more with false stories than true ones, even when readers prefer more accurate over less accurate information. We then empirically test people's preferences for accuracy in the news. In three studies, we find that people strongly prefer to click and share news they perceive as more accurate, both in a general population sample and in a sample of users recruited through Twitter who had actually shared links to misinformation sites online. Despite this preference for accurate news, and consistent with the predictions of our model, we find markedly different engagement patterns for articles from misinformation versus mainstream news sites. Using 1,000 headlines from 20 misinformation and 20 mainstream news sites, we compare Facebook engagement data with 20,000 accuracy ratings collected in a survey experiment. Engagement with a headline is negatively correlated with perceived accuracy for misinformation sites, but positively correlated with perceived accuracy for mainstream sites. Taken together, these theoretical and empirical results suggest that consumer preferences cannot be straightforwardly inferred from empirical patterns of engagement.
Affiliation(s)
- Alexander J Stewart
  - School of Mathematics and Statistics, University of St Andrews, St Andrews KY16 9SS, United Kingdom
- Antonio A Arechar
  - División de Economía, Center for Research and Teaching in Economics, Centro de Investigación y Docencia Económicas, Aguascalientes, MX 20314
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02139
- David G Rand
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02139
- Joshua B Plotkin
  - Department of Biology, University of Pennsylvania, Philadelphia, PA 19104
16. Martel C, Allen J, Pennycook G, Rand DG. Crowds Can Effectively Identify Misinformation at Scale. Perspect Psychol Sci 2024; 19:477-488. PMID: 37594056; DOI: 10.1177/17456916231190388.
Abstract
Identifying successful approaches for reducing the belief and spread of online misinformation is of great importance. Social media companies currently rely largely on professional fact-checking as their primary mechanism for identifying falsehoods. However, professional fact-checking has notable limitations regarding coverage and speed. In this article, we summarize research suggesting that the "wisdom of crowds" can be harnessed successfully to help identify misinformation at scale. Despite potential concerns about the abilities of laypeople to assess information quality, recent evidence demonstrates that aggregating judgments of groups of laypeople, or crowds, can effectively identify low-quality news sources and inaccurate news posts: Crowd ratings are strongly correlated with fact-checker ratings across a variety of studies using different designs, stimulus sets, and subject pools. We connect these experimental findings with recent attempts to deploy crowdsourced fact-checking in the field, and we close with recommendations and future directions for translating crowdsourced ratings into effective interventions.
Affiliation(s)
- Cameron Martel
  - Sloan School of Management, Massachusetts Institute of Technology
- Jennifer Allen
  - Sloan School of Management, Massachusetts Institute of Technology
- David G Rand
  - Sloan School of Management, Massachusetts Institute of Technology
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  - Institute for Data, Systems, and Society, Massachusetts Institute of Technology
17. Aslett K, Sanderson Z, Godel W, Persily N, Nagler J, Tucker JA. Online searches to evaluate misinformation can increase its perceived veracity. Nature 2024; 625:548-556. PMID: 38123685; PMCID: PMC10794132; DOI: 10.1038/s41586-023-06883-y.
Abstract
Considerable scholarly attention has been paid to understanding belief in online misinformation [1,2], with a particular focus on social networks. However, the dominant role of search engines in the information environment remains underexplored, even though the use of online search to evaluate the veracity of information is a central component of media literacy interventions [3-5]. Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it, there is little empirical evidence to evaluate this claim. Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. To shed light on this relationship, we combine survey data with digital trace data collected using a custom browser extension. We find that the search effect is concentrated among individuals for whom search engines return lower-quality information. Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources, but inconsistent evidence that it increases belief in true news from mainstream sources. Our findings highlight the need for media literacy programmes to ground their recommendations in empirically tested strategies and for search engines to invest in solutions to the challenges identified here.
Affiliation(s)
- Kevin Aslett
  - School of Politics, Security and International Affairs, University of Central Florida, Orlando, FL, USA
- Zeve Sanderson
  - Center for Social Media and Politics, New York University, New York, NY, USA
- William Godel
  - Center for Social Media and Politics, New York University, New York, NY, USA
- Nathaniel Persily
  - Stanford University Law School, Stanford University, Stanford, CA, USA
- Jonathan Nagler
  - Center for Social Media and Politics, New York University, New York, NY, USA
  - Wilf Family Department of Politics, New York University, New York, NY, USA
- Joshua A Tucker
  - Center for Social Media and Politics, New York University, New York, NY, USA
  - Wilf Family Department of Politics, New York University, New York, NY, USA
18. Lin H, Lasser J, Lewandowsky S, Cole R, Gully A, Rand DG, Pennycook G. High level of correspondence across different news domain quality rating sets. PNAS Nexus 2023; 2:pgad286. PMID: 37719749; PMCID: PMC10500312; DOI: 10.1093/pnasnexus/pgad286.
Abstract
One widely used approach for quantifying misinformation consumption and sharing is to evaluate the quality of the news domains that a user interacts with. However, different media organizations and fact-checkers have produced different sets of news domain quality ratings, raising questions about the reliability of these ratings. In this study, we compared six sets of expert ratings and found that they generally correlated highly with one another. We then created a comprehensive set of domain ratings for use by the research community (github.com/hauselin/domain-quality-ratings), leveraging an ensemble "wisdom of experts" approach. To do so, we performed imputation together with principal component analysis to generate a set of aggregate ratings. The resulting rating set comprises 11,520 domains, the most extensive coverage to date, and correlates well with other rating sets that have more limited coverage. Together, these results suggest that experts generally agree on the relative quality of news domains, and the aggregate ratings that we generate offer a powerful research tool for evaluating the quality of news consumed or shared and the efficacy of misinformation interventions.
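A rough sketch of the imputation-plus-PCA aggregation step described above, using mean imputation and the first principal component as the aggregate score; the toy rating matrix and the scikit-learn pipeline are assumptions for illustration, not the released code or the authors' exact imputation method.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)

# Toy matrix: rows = news domains, columns = expert rating sets (NaN = not rated).
n_domains, n_raters = 500, 6
quality = rng.uniform(0, 1, n_domains)
ratings = quality[:, None] + rng.normal(0, 0.15, (n_domains, n_raters))
ratings[rng.random((n_domains, n_raters)) < 0.4] = np.nan   # partial coverage per set

# Impute missing ratings, standardize, and take the first principal component.
filled = SimpleImputer(strategy="mean").fit_transform(ratings)
scaled = StandardScaler().fit_transform(filled)
aggregate = PCA(n_components=1).fit_transform(scaled).ravel()

# Orient the component so higher scores mean higher quality, then check agreement.
if np.corrcoef(aggregate, quality)[0, 1] < 0:
    aggregate = -aggregate
print("correlation with underlying quality:", round(np.corrcoef(aggregate, quality)[0, 1], 3))
```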
Affiliation(s)
- Hause Lin
  - Hill/Levene Schools of Business, University of Regina, 3737 Wascana Parkway, Regina, SK S4S 0A2, Canada
  - Sloan School of Management, Massachusetts Institute of Technology, 100 Main St, Cambridge, MA 02142, USA
  - Department of Psychology, Cornell University, Uris Hall, 211 Tower Rd, Ithaca, NY 14853, USA
- Jana Lasser
  - Institute for Interactive Systems and Data Science, Graz University of Technology, Inffeldgasse 16C, 8010 Graz, Austria
  - Complexity Science Hub Vienna, Josefstädterstraße 39, 1080 Vienna, Austria
- Stephan Lewandowsky
  - School of Psychological Science, University of Bristol, 12a Priory Road, Bristol BS8 1TU, UK
  - School of Psychology, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009, Australia
- Rocky Cole
  - Jigsaw (Google LLC), 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
- Andrew Gully
  - Jigsaw (Google LLC), 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
- David G Rand
  - Sloan School of Management, Massachusetts Institute of Technology, 100 Main St, Cambridge, MA 02142, USA
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar St, Cambridge, MA 02139, USA
- Gordon Pennycook
  - Hill/Levene Schools of Business, University of Regina, 3737 Wascana Parkway, Regina, SK S4S 0A2, Canada
  - Department of Psychology, Cornell University, Uris Hall, 211 Tower Rd, Ithaca, NY 14853, USA
  - Department of Psychology, University of Regina, 3737 Wascana Parkway, Regina, SK S4S 0A2, Canada
19. Arechar AA, Allen J, Berinsky AJ, Cole R, Epstein Z, Garimella K, Gully A, Lu JG, Ross RM, Stagnaro MN, Zhang Y, Pennycook G, Rand DG. Understanding and combatting misinformation across 16 countries on six continents. Nat Hum Behav 2023; 7:1502-1513. PMID: 37386111; DOI: 10.1038/s41562-023-01641-6.
Abstract
The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across 6 continents (N = 34,286; 676,605 observations) to investigate predictors of susceptibility to misinformation about COVID-19, and interventions to combat the spread of this misinformation. In every country, participants with a more analytic cognitive style and stronger accuracy-related motivations were better at discerning truth from falsehood; valuing democracy was also associated with greater truth discernment, whereas endorsement of individual responsibility over government support was negatively associated with truth discernment in most countries. Subtly prompting people to think about accuracy had a generally positive effect on the veracity of news that people were willing to share across countries, as did minimal digital literacy tips. Finally, aggregating the ratings of our non-expert participants was able to differentiate true from false headlines with high accuracy in all countries via the 'wisdom of crowds'. The consistent patterns we observe suggest that the psychological factors underlying the misinformation challenge are similar across different regional settings, and that similar solutions may be broadly effective.
Affiliation(s)
- Antonio A Arechar
  - Center for Research and Teaching in Economics (CIDE), Aguascalientes, Mexico
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Centre for Decision Research and Experimental Economics, University of Nottingham, Nottingham, UK
- Jennifer Allen
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Adam J Berinsky
  - Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ziv Epstein
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jackson G Lu
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Robert M Ross
  - Department of Philosophy, Macquarie University, Sydney, New South Wales, Australia
- Michael N Stagnaro
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Yunhao Zhang
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Gordon Pennycook
  - Hill/Levene Schools of Business, University of Regina, Regina, Saskatchewan, Canada
  - Department of Psychology, University of Regina, Regina, Saskatchewan, Canada
- David G Rand
  - Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
20. Luvembe AM, Li W, Li S, Liu F, Xu G. Dual emotion based fake news detection: A deep attention-weight update approach. Inf Process Manag 2023. DOI: 10.1016/j.ipm.2023.103354.
21. Globig LK, Holtz N, Sharot T. Changing the incentive structure of social media platforms to halt the spread of misinformation. eLife 2023; 12:e85767. PMID: 37278047; DOI: 10.7554/elife.85767.
Abstract
The powerful allure of social media platforms has been attributed to the human need for social rewards. Here, we demonstrate that the spread of misinformation on such platforms is facilitated by existing social 'carrots' (e.g., 'likes') and 'sticks' (e.g., 'dislikes') that are dissociated from the veracity of the information shared. Testing 951 participants over six experiments, we show that a slight change to the incentive structure of social media platforms, such that social rewards and punishments are contingent on information veracity, produces a considerable increase in the discernment of shared information, that is, an increase in the proportion of true information shared relative to the proportion of false information shared. Computational modeling (i.e., drift-diffusion models) revealed that the underlying mechanism of this effect is an increase in the weight participants assign to evidence consistent with discerning behavior. The results offer evidence for an intervention that could be adopted to reduce misinformation spread, which in turn could reduce violence, vaccine hesitancy and political polarization, without reducing engagement.
Affiliation(s)
- Laura K Globig
  - Affective Brain Lab, Department of Experimental Psychology, University College London, London, United Kingdom
  - The Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Nora Holtz
  - Affective Brain Lab, Department of Experimental Psychology, University College London, London, United Kingdom
  - The Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Tali Sharot
  - Affective Brain Lab, Department of Experimental Psychology, University College London, London, United Kingdom
  - The Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
22. Modell SM, Ponte AH, Director HR, Pettersen SK, Kardia SLR, Goltz HH. Breast Cancer Prevention Misinformation on Pinterest: One Side of a Thick Coin. Am J Public Health 2023; 113:e1-e2. PMID: 36791357; PMCID: PMC9932377; DOI: 10.2105/ajph.2022.307203.
Affiliation(s)
- Stephen M Modell
  - Department of Epidemiology, University of Michigan School of Public Health, Ann Arbor
- Amy H Ponte
  - Genedu Health Solutions, Beaufort, SC
- Haley R Director
  - Departments of Health Policy and Management and Human Genetics, University of Pittsburgh School of Public Health, Pittsburgh, PA
- Samantha K Pettersen
  - Association for Molecular Pathology, Rockville, MD
- Sharon L R Kardia
  - Department of Epidemiology, University of Michigan School of Public Health, Ann Arbor
- Heather Honoré Goltz
  - Department of Criminal Justice and Social Work, College of Public Service, University of Houston-Downtown, Houston, TX
23. Realtime user ratings as a strategy for combatting misinformation: an experimental study. Sci Rep 2023; 13:1626. PMID: 36709398; PMCID: PMC9884269; DOI: 10.1038/s41598-023-28597-x.
Abstract
Because fact-checking takes time, verdicts are usually reached after a message has gone viral, and interventions can have only limited effect. A new approach recently proposed in scholarship and piloted on online platforms is to harness the wisdom of the crowd by enabling recipients of an online message to attach veracity assessments to it. The intention is to allow poor initial crowd reception to temper belief in and further spread of misinformation. We study this approach by letting 4000 subjects in 80 experimental bipartisan communities sequentially rate the veracity of informational messages. We find that in well-mixed communities, the public display of earlier veracity ratings indeed enhances the correct classification of true and false messages by subsequent users. However, crowd intelligence backfires when false information is sequentially rated in ideologically segregated communities. This happens because early raters' ideological bias, which is aligned with a message, influences later raters' assessments away from the truth. These results suggest that network segregation poses an important problem that must be accounted for in the design of community misinformation detection systems.
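As a reading aid, the backfire mechanism described in this abstract (early raters' shared bias propagating through publicly displayed ratings) can be illustrated with a minimal simulation. The sketch below is not the authors' model; the weighting of private versus social signals, the bias strength, and the community structure are assumptions chosen only to show how sequential raters who see earlier ratings can drift away from a false message's true veracity in a segregated community.

    # Minimal sketch (assumed parameters, not the paper's model): sequential raters
    # combine a noisy private signal with the mean of earlier public ratings.
    # Ratings above 0 lean toward "accurate" even though the message is false (truth = 0).
    import random

    def simulate(n_raters=50, truth=0.0, bias=0.6, segregated=True, seed=1):
        random.seed(seed)
        ratings = []
        for i in range(n_raters):
            # Segregated community: every rater shares a bias aligned with the message.
            # Well-mixed community: the bias alternates across raters and roughly cancels.
            ideology = bias if (segregated or i % 2 == 0) else -bias
            private_signal = truth + random.gauss(0, 0.5) + ideology
            social_signal = sum(ratings) / len(ratings) if ratings else private_signal
            ratings.append(0.5 * private_signal + 0.5 * social_signal)
        return sum(ratings) / len(ratings)

    print("segregated :", round(simulate(segregated=True), 2))   # drifts well above the truth (0.0)
    print("well-mixed :", round(simulate(segregated=False), 2))  # stays closer to the truth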
Collapse
|
24
|
Chen S, Xiao L, Kumar A. Spread of misinformation on social media: What contributes to it and how to combat it. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107643] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
|
25
|
Chen YP, Chen YY, Yang KC, Lai F, Huang CH, Chen YN, Tu YC. The Prevalence and Impact of Fake News on COVID-19 Vaccination in Taiwan: Retrospective Study of Digital Media. J Med Internet Res 2022; 24:e36830. [PMID: 35380546 PMCID: PMC9045486 DOI: 10.2196/36830] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 03/02/2022] [Accepted: 04/04/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Vaccination is an important intervention to prevent the incidence and spread of serious diseases. Many factors, including information obtained from the internet, influence individuals' decisions to vaccinate. Misinformation is a critical issue and can be hard to detect, although it can change people's minds, opinions, and decisions. The impact of misinformation on public health and vaccination hesitancy is well documented, but little research has been conducted on the relationship between the size of the population reached by misinformation and the vaccination decisions made by that population. A number of fact-checking services are available on the web, including the Islander news analysis system, a free web service that provides individuals with real-time judgment on web news. In this study, we used such services to estimate the amount of fake news available and used Google Trends levels to model the spread of fake news. We quantified this relationship using official public data on COVID-19 vaccination in Taiwan. OBJECTIVE In this study, we aimed to quantify the impact of the magnitude of the propagation of fake news on vaccination decisions. METHODS We collected public data about COVID-19 infections and vaccination from Taiwan's official website and estimated the popularity of searches using Google Trends. We indirectly collected news from 26 digital media sources, using the news database of the Islander system. This system crawls the internet in real time, analyzes the news, and stores it. The incitement and suspicion scores of the Islander system were used to objectively judge news, and a fake news percentage variable was produced. We used multivariable linear regression, chi-square tests, and the Johnson-Neyman procedure to analyze this relationship, using weekly data. RESULTS A total of 791,183 news items were obtained over 43 weeks in 2021. There was a significant increase in the proportion of fake news in 11 of the 26 media sources during the public vaccination stage. The regression model revealed a positive adjusted coefficient (β=0.98, P=.002) of vaccine availability on the following week's vaccination doses, and a negative adjusted coefficient (β=-3.21, P=.04) of the interaction term between the fake news percentage and the Google Trends level. The Johnson-Neyman plot of the adjusted effect for the interaction term showed that the Google Trends level had a significant negative adjustment effect on vaccination doses for the following week when the proportion of fake news exceeded 39.3%. CONCLUSIONS There was a significant relationship between the amount of fake news to which the population was exposed and the number of vaccination doses administered. Reducing the amount of fake news and increasing public immunity to misinformation will be critical to maintain public health in the internet age.
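For readers unfamiliar with the analysis named in this abstract, the sketch below shows the general shape of a moderated regression probed with a Johnson-Neyman style scan: fit a model with an interaction term, then compute the conditional effect of one predictor across values of the moderator. The variable names, synthetic data, and scan grid are assumptions for illustration only and do not reproduce the study's dataset or exact specification.

    # Hedged sketch of an interaction model with a Johnson-Neyman style scan
    # (synthetic weekly data; illustrative variable names, not the study's data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 43  # weeks
    df = pd.DataFrame({
        "fake_pct": rng.uniform(20, 60, n),     # share of items judged fake (%)
        "trends": rng.uniform(0, 100, n),       # Google Trends level
        "vaccine_avail": rng.uniform(0, 1, n),  # vaccine availability
    })
    df["doses_next_week"] = (100 + 80 * df["vaccine_avail"]
                             - 0.03 * df["fake_pct"] * df["trends"]
                             + rng.normal(0, 50, n))

    model = smf.ols("doses_next_week ~ vaccine_avail + fake_pct * trends", data=df).fit()

    # Conditional effect of the Trends level at different fake-news percentages.
    cov = model.cov_params()
    for fp in range(20, 61, 10):
        eff = model.params["trends"] + fp * model.params["fake_pct:trends"]
        se = np.sqrt(cov.loc["trends", "trends"]
                     + fp ** 2 * cov.loc["fake_pct:trends", "fake_pct:trends"]
                     + 2 * fp * cov.loc["trends", "fake_pct:trends"])
        print(f"fake_pct={fp}%  effect of trends={eff:.2f}  t={eff / se:.2f}")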
Collapse
Affiliation(s)
- Yen-Pin Chen
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan.,Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Yi-Ying Chen
- Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | | | - Feipei Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan.,Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Chien-Hua Huang
- Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Yun-Nung Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | | |
Collapse
|
26
|
Panizza F, Ronzani P, Martini C, Mattavelli S, Morisseau T, Motterlini M. Lateral reading and monetary incentives to spot disinformation about science. Sci Rep 2022; 12:5678. [PMID: 35383208 PMCID: PMC8981191 DOI: 10.1038/s41598-022-09168-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 03/15/2022] [Indexed: 11/21/2022] Open
Abstract
Disinformation about science can impose enormous economic and public health burdens. A recently proposed strategy to help online users recognise false content is to follow the techniques of professional fact checkers, such as looking for information on other websites (lateral reading) and looking beyond the first results suggested by search engines (click restraint). In two preregistered online experiments (N = 5387), we simulated a social media environment and tested two interventions, one in the form of a pop-up meant to advise participants to follow such techniques, the other based on monetary incentives. We measured participants’ ability to identify whether information was scientifically valid or invalid. Analysis of participants’ search style reveals that both monetary incentives and the pop-up increased the use of fact-checking strategies. Monetary incentives were overall effective in increasing accuracy, whereas the pop-up worked when the source of information was unknown. The pop-up and incentives, when used together, produced a cumulative effect on accuracy. We suggest that monetary incentives enhance content relevance, and could be combined with fact-checking techniques to counteract disinformation.
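To make the design concrete, the sketch below casts the two interventions as a 2 x 2 factorial and estimates their (possibly cumulative) effects on response accuracy with a logistic regression. The effect sizes, sample size, and variable names are invented for illustration; this is not the authors' analysis pipeline.

    # Illustrative 2 x 2 factorial (pop-up x monetary incentive) with synthetic data.
    # Roughly additive coefficients correspond to the cumulative effect described above.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 5000
    popup = rng.integers(0, 2, n)
    incentive = rng.integers(0, 2, n)
    log_odds = -0.2 + 0.3 * popup + 0.4 * incentive      # assumed effect sizes
    correct = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)
    df = pd.DataFrame({"popup": popup, "incentive": incentive, "correct": correct})

    fit = smf.logit("correct ~ popup * incentive", data=df).fit(disp=False)
    print(fit.summary().tables[1])  # near-zero interaction -> the two effects simply add up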
Collapse
Affiliation(s)
- Folco Panizza
- Molecular Mind Laboratory, IMT School for Advanced Studies Lucca, Lucca, Italy.,Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy.
| | - Piero Ronzani
- Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy
| | - Carlo Martini
- Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy.,TINT - Centre for Philosophy of Social Science, Department of Political and Economic Studies, University of Helsinki, Helsinki, Finland
| | | | - Tiffany Morisseau
- Université de Paris and Université Gustave Eiffel, LaPEA, Boulogne-Billancourt, France.,Strane Innovation, Gif-sur-Yvette, France
| | - Matteo Motterlini
- Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy
| |
Collapse
|
27
|
Pennycook G, Rand DG. Nudging Social Media toward Accuracy. THE ANNALS OF THE AMERICAN ACADEMY OF POLITICAL AND SOCIAL SCIENCE 2022; 700:152-164. [PMID: 35558818 PMCID: PMC9082967 DOI: 10.1177/00027162221092342] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
A meaningful portion of online misinformation sharing is likely attributable to Internet users failing to consider accuracy when deciding what to share. As a result, simply redirecting attention to the concept of accuracy can increase sharing discernment. Here we discuss the importance of accuracy and describe a limited-attention utility model that is based on a theory about inattention to accuracy on social media. We review research that shows how a simple nudge or prompt that shifts attention to accuracy increases the quality of news that people share (typically by decreasing the sharing of false content), and then discuss outstanding questions relating to accuracy nudges, including the need for more work relating to persistence and habituation as well as the dearth of cross-cultural research on these topics. We also make several recommendations for policy-makers and social media companies for how to implement accuracy nudges.
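The limited-attention utility model mentioned in this abstract can be caricatured in a few lines of code: sharing utility is a weighted sum over content dimensions, but only the dimensions a user currently attends to enter the sum, so a nudge that brings accuracy into the attended set can flip a sharing decision without changing underlying preferences. The dimension names, weights, and values below are assumptions, not the authors' formal specification.

    # Loose sketch of a limited-attention sharing rule: only attended dimensions
    # contribute to utility; share when utility is positive. All values are illustrative.
    def share_utility(item, preferences, attended):
        return sum(preferences[d] * item[d] for d in attended)

    item = {"accuracy": -1.0, "partisan_appeal": 0.8, "humour": 0.5}   # false but appealing post
    prefs = {"accuracy": 1.0, "partisan_appeal": 0.6, "humour": 0.4}

    baseline = share_utility(item, prefs, attended={"partisan_appeal", "humour"})
    nudged = share_utility(item, prefs, attended={"accuracy", "partisan_appeal", "humour"})
    print("share without nudge:", baseline > 0)      # True: the appealing post gets shared
    print("share with accuracy nudge:", nudged > 0)  # False: attending to accuracy flips the decision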
Collapse
|
28
|
Groh M, Epstein Z, Firestone C, Picard R. Deepfake detection by human crowds, machines, and machine-informed crowds. Proc Natl Acad Sci U S A 2022; 119:e2110013119. [PMID: 34969837 PMCID: PMC8740705 DOI: 10.1073/pnas.2110013119] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/25/2021] [Indexed: 12/18/2022] Open
Abstract
The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model's prediction are more accurate than either alone, but inaccurate model predictions often decrease participants' accuracy. To probe the relative strengths and weaknesses of humans and machines as detectors of deepfakes, we examine human and machine performance across video-level features, and we evaluate the impact of preregistered randomized interventions on deepfake detection. We find that manipulations designed to disrupt visual processing of faces hinder human participants' performance while mostly not affecting the model's performance, suggesting a role for specialized cognitive capacities in explaining human deepfake detection performance.
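One way to picture the machine-informed crowd condition is as a simple fusion of a human probability judgment with a model score: shifting toward the model helps when the model is right and drags an initially correct human toward the error when it is wrong. The weight and numbers below are made up for illustration and are not the study's procedure.

    # Illustrative fusion of a human judgment with a model prediction, both expressed
    # as the probability that a video is a deepfake. Weight and inputs are assumptions.
    def machine_informed(human_p, model_p, weight_on_model=0.5):
        return (1 - weight_on_model) * human_p + weight_on_model * model_p

    # In both examples the video is actually a deepfake.
    print(machine_informed(human_p=0.45, model_p=0.90))  # 0.675: an accurate model corrects the human
    print(machine_informed(human_p=0.70, model_p=0.10))  # 0.40: an inaccurate model misleads the human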
Collapse
Affiliation(s)
- Matthew Groh
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA 02139;
| | - Ziv Epstein
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218
| | - Rosalind Picard
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA 02139
| |
Collapse
|
29
|
Van Lange PAM, Rand DG. Human Cooperation and the Crises of Climate Change, COVID-19, and Misinformation. Annu Rev Psychol 2021; 73:379-402. [PMID: 34339612 DOI: 10.1146/annurev-psych-020821-110044] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Contemporary society is facing many social dilemmas, including climate change, COVID-19, and misinformation, characterized by a conflict between short-term self-interest and longer-term collective interest. The climate crisis requires paying costs today to benefit distant others (and oneself) in the future. The COVID-19 crisis requires the less vulnerable to pay costs to benefit the more vulnerable in the face of great uncertainty. The misinformation crisis requires investing effort to assess truth and abstain from spreading attractive falsehoods. Addressing these crises requires an understanding of human cooperation. To that end, we present (a) an overview of mechanisms for the evolution of cooperation, including mechanisms based on similarity and interaction; (b) a discussion of how reputation can incentivize cooperation via conditional cooperation and signaling; and (c) a review of social preferences that undergird the proximate psychology of cooperation, including positive regard for others, parochialism, and egalitarianism. We discuss the three focal crises facing our society through the lens of cooperation, emphasizing how cooperation research can inform our efforts to address them.
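The conflict between short-term self-interest and longer-term collective interest that defines these dilemmas is easiest to see in a minimal public goods game, sketched below. The endowment, multiplier, and group size are illustrative assumptions, not figures from the review.

    # Minimal public goods game: contributing benefits the group, but free-riding
    # pays more individually. Parameters are illustrative assumptions.
    def payoff(my_contribution, others_contributions, endowment=10, multiplier=1.6):
        group = [my_contribution] + list(others_contributions)
        public_share = multiplier * sum(group) / len(group)
        return endowment - my_contribution + public_share

    print(payoff(10, [10, 10, 10]))  # 16.0: everyone contributes (collective interest)
    print(payoff(0, [10, 10, 10]))   # 22.0: free-riding on cooperators (self-interest)
    print(payoff(0, [0, 0, 0]))      # 10.0: universal free-riding leaves everyone worse off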
Collapse
Affiliation(s)
- Paul A M Van Lange
- Department of Experimental and Applied Psychology, and Institute for Brain and Behavior Amsterdam (iBBA), Vrije Universiteit Amsterdam, 1081 BT Amsterdam, The Netherlands;
| | - David G Rand
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02138, USA;
| |
Collapse
|