1. Jones MI, Pauls SD, Fu F. Containing misinformation: Modeling spatial games of fake news. PNAS Nexus 2024; 3:pgae090. PMID: 38463039; PMCID: PMC10924450; DOI: 10.1093/pnasnexus/pgae090.
Abstract
The spread of fake news on social media is a pressing issue. Here, we develop a mathematical model on social networks in which news sharing is modeled as a coordination game. We use this model to study the effect of adding designated individuals who sanction fake news sharers (representing, for example, correction of false claims or public shaming of those who share such claims). By simulating our model on synthetic square lattices and small-world networks, we demonstrate that social network structure allows fake news spreaders to form echo chambers and more than doubles fake news' resistance to distributed sanctioning efforts. We confirm our results are robust to a wide range of coordination and sanctioning payoff parameters as well as initial conditions. Using a Twitter network dataset, we show that sanctioners can help contain fake news when placed strategically. Furthermore, we analytically determine the conditions required for peer sanctioning to be effective, including prevalence and enforcement levels. Our findings have implications for developing mitigation strategies to control misinformation and preserve the integrity of public discourse.
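The coordination-game dynamic on a lattice can be illustrated with a minimal simulation. Everything below is a sketch, not the paper's exact specification: the payoff values, the 5% sanctioner fraction, and the synchronous best-response update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30                      # 30 x 30 square lattice with periodic boundaries
a, b = 1.0, 1.0             # hypothetical coordination payoffs (real, fake)
delta = 2.0                 # hypothetical sanction cost per sanctioning neighbour

# states: 0 = shares real news, 1 = shares fake news
state = (rng.random((N, N)) < 0.5).astype(int)
# a small fraction of fixed "sanctioners" who punish fake-news sharers
sanction = rng.random((N, N)) < 0.05

def neighbour_sums(grid):
    """Sum over the four von Neumann neighbours, periodic boundaries."""
    return (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
            np.roll(grid, 1, 1) + np.roll(grid, -1, 1))

for _ in range(50):
    fake_nbrs = neighbour_sums(state)            # neighbours sharing fake news
    real_nbrs = 4 - fake_nbrs                    # neighbours sharing real news
    sanc_nbrs = neighbour_sums(sanction.astype(int))
    payoff_real = a * real_nbrs
    payoff_fake = b * fake_nbrs - delta * sanc_nbrs
    # synchronous best response: pick the strategy with the higher payoff
    state = (payoff_fake > payoff_real).astype(int)
    state[sanction] = 0                          # sanctioners never share fake news

print("fake-news share after 50 rounds:", float(state.mean()))
```

Echo chambers appear here as surviving clusters of fake-news sharers whose interiors are shielded from sanctioning neighbours.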
Affiliation(s)
- Matthew I Jones: Sociology Department, Yale University, New Haven, CT 06511, USA; Mathematics Department, Dartmouth College, Hanover, NH 03755, USA
- Scott D Pauls: Mathematics Department, Dartmouth College, Hanover, NH 03755, USA
- Feng Fu: Mathematics Department, Dartmouth College, Hanover, NH 03755, USA; Department of Biomedical Data Science, Dartmouth College, Hanover, NH 03756, USA
2. Martel C, Allen J, Pennycook G, Rand DG. Crowds Can Effectively Identify Misinformation at Scale. Perspect Psychol Sci 2024; 19:477-488. PMID: 37594056; DOI: 10.1177/17456916231190388.
Abstract
Identifying successful approaches for reducing the belief and spread of online misinformation is of great importance. Social media companies currently rely largely on professional fact-checking as their primary mechanism for identifying falsehoods. However, professional fact-checking has notable limitations regarding coverage and speed. In this article, we summarize research suggesting that the "wisdom of crowds" can be harnessed successfully to help identify misinformation at scale. Despite potential concerns about the abilities of laypeople to assess information quality, recent evidence demonstrates that aggregating judgments of groups of laypeople, or crowds, can effectively identify low-quality news sources and inaccurate news posts: Crowd ratings are strongly correlated with fact-checker ratings across a variety of studies using different designs, stimulus sets, and subject pools. We connect these experimental findings with recent attempts to deploy crowdsourced fact-checking in the field, and we close with recommendations and future directions for translating crowdsourced ratings into effective interventions.
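The "wisdom of crowds" effect the article reviews can be shown numerically: averaging many noisy lay ratings tracks expert ratings far better than any single rater does. The rating scale and noise levels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_raters = 40, 25

# hypothetical ground-truth quality of 40 news posts on a 0-1 scale
truth = rng.uniform(0, 1, n_items)
# expert fact-checkers see quality with little noise
expert = truth + rng.normal(0, 0.05, n_items)
# individual laypeople are noisy ...
lay = truth[None, :] + rng.normal(0, 0.5, (n_raters, n_items))
# ... but averaging across the crowd cancels most of that noise
crowd = lay.mean(axis=0)

single_r = float(np.corrcoef(lay[0], expert)[0, 1])
crowd_r = float(np.corrcoef(crowd, expert)[0, 1])
print(f"single layperson vs expert: r = {single_r:.2f}")
print(f"crowd of {n_raters} vs expert: r = {crowd_r:.2f}")
```

Averaging shrinks rater noise by roughly the square root of the crowd size, which is why the aggregate correlates so strongly with expert judgments.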
Affiliation(s)
- Cameron Martel: Sloan School of Management, Massachusetts Institute of Technology
- Jennifer Allen: Sloan School of Management, Massachusetts Institute of Technology
- David G Rand: Sloan School of Management, Massachusetts Institute of Technology; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; Institute for Data, Systems, and Society, Massachusetts Institute of Technology
3. Quelle D, Bovet A. The perils and promises of fact-checking with large language models. Front Artif Intell 2024; 7:1341697. PMID: 38384276; PMCID: PMC10879553; DOI: 10.3389/frai.2024.1341697.
Abstract
Automated fact-checking, using machine learning to verify claims, has grown vital as misinformation spreads beyond human fact-checking capacity. Large language models (LLMs) like GPT-4 are increasingly trusted to write academic papers, lawsuits, and news articles and to verify information, emphasizing their role in discerning truth from falsehood and the importance of being able to verify their outputs. Understanding the capacities and limitations of LLMs in fact-checking tasks is therefore essential for ensuring the health of our information ecosystem. Here, we evaluate the use of LLM agents in fact-checking by having them phrase queries, retrieve contextual data, and make decisions. Importantly, in our framework, agents explain their reasoning and cite the relevant sources from the retrieved context. Our results show the enhanced prowess of LLMs when equipped with contextual information. GPT-4 outperforms GPT-3, but accuracy varies based on query language and claim veracity. While LLMs show promise in fact-checking, caution is essential due to inconsistent accuracy. Our investigation calls for further research, fostering a deeper comprehension of when agents succeed and when they fail.
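The three-step agent pipeline (phrase a query, retrieve context, decide and cite) can be sketched with stand-in components. Everything here is illustrative: the two-document corpus and the keyword heuristics merely stand in for the live retrieval and LLM calls used in the paper.

```python
# Toy corpus standing in for a retrieval backend.
CORPUS = [
    "The Eiffel Tower is 330 metres tall and stands in Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
]

def phrase_query(claim: str) -> str:
    """Step 1: turn the claim into a retrieval query (an LLM call in the paper)."""
    return claim.rstrip(".").lower()

def retrieve(query: str, k: int = 1) -> list:
    """Step 2: return the k documents with the largest word overlap."""
    words = set(query.split())
    return sorted(CORPUS,
                  key=lambda doc: -len(words & set(doc.lower().rstrip(".").split())))[:k]

def decide(claim: str, evidence: list) -> dict:
    """Step 3: verdict plus cited sources (an LLM call in the paper)."""
    claim_words = set(claim.lower().rstrip(".").split())
    supported = any(claim_words <= set(doc.lower().rstrip(".").split())
                    for doc in evidence)
    return {"claim": claim,
            "verdict": "supported" if supported else "unverified",
            "sources": evidence}

def fact_check(claim: str) -> dict:
    return decide(claim, retrieve(phrase_query(claim)))

print(fact_check("Water boils at 100 degrees Celsius."))
print(fact_check("Water boils at 50 degrees Celsius."))
```

The key design point carried over from the paper is that the verdict is always returned together with the retrieved sources, so the agent's reasoning can be audited.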
Affiliation(s)
- Dorian Quelle: Department of Mathematical Modeling and Machine Learning, University of Zurich, Zurich, Switzerland; Digital Society Initiative, University of Zurich, Zurich, Switzerland
- Alexandre Bovet: Department of Mathematical Modeling and Machine Learning, University of Zurich, Zurich, Switzerland; Digital Society Initiative, University of Zurich, Zurich, Switzerland
4. Al-Rawi A, Blackwell B, Zemenchik K, Lee K. Twitter Misinformation Discourses About Vaping: Systematic Content Analysis. J Med Internet Res 2023; 25:e49416. PMID: 37948118; PMCID: PMC10674139; DOI: 10.2196/49416.
Abstract
BACKGROUND While there has been substantial analysis of social media content deemed to spread misinformation about electronic nicotine delivery systems use, the strategic use of misinformation accusations to undermine opposing views has received limited attention. OBJECTIVE This study aims to fill this gap by analyzing how social media users discuss the topic of misinformation related to electronic nicotine delivery systems, notably vaping products. Additionally, this study identifies and analyzes the actors commonly blamed for spreading such misinformation and how these claims support both the provaping and antivaping narratives. METHODS Using Twitter's (subsequently rebranded as X) academic application programming interface, we collected tweets referencing #vape and #vaping and keywords associated with fake news and misinformation. This study uses systematic content analysis to analyze the tweets and identify common themes and actors who discuss or possibly spread misinformation. RESULTS This study found that provape users dominate discussions about vaping misinformation on the platform, with provaping tweets being more frequent and receiving higher overall user engagement. The most common provaping narrative is that vaping is safe; the most common antivaping narrative is that vaping is indeed harmful. This study also points to a general distrust of authority figures: both camps regularly accuse news outlets, public health authorities, and political actors of spreading misinformation, although the specific actors blamed differ with users' positionalities. The sheer volume of accusations from provaping advocates shapes what is considered misinformation and works to silence other narratives. Additionally, allegations against reliable, well-established sources such as public health authorities work to discredit health-impact assessments, which is detrimental to public health for provaping and antivaping advocates alike. CONCLUSIONS We conclude that the spread of misinformation and the accusations of misinformation dissemination using terms such as "fact check," "misinformation," "fake news," and "disinformation" have become weaponized and co-opted by provaping actors to delegitimize criticisms about vaping and to increase confusion about the potential health risks. The study discusses vaping's mixed impacts on public health for both smokers and nonsmokers. Additionally, we discuss the implications for effective health education and communication about vaping and how misinformation claims can affect evidence-based discourse on Twitter as well as informed vaping decisions.
Affiliation(s)
- Kelley Lee: Simon Fraser University, Burnaby, BC, Canada
5. Gottlieb E, Baker M, Détienne F. Iranian scientists and French showers: collaborative fact-checking of identity-salient online information. Front Psychol 2023; 14:1295130. PMID: 38022959; PMCID: PMC10665876; DOI: 10.3389/fpsyg.2023.1295130.
Abstract
In this study, we investigate what leads people to fact-check online information, how they fact-check such information in practice, how fact-checking affects their judgments about the information's credibility, and how each of the above processes is affected by the salience of the information to readers' cultural identities. Eight pairs of adult participants were recruited from diverse cultural backgrounds to participate online in joint fact-checking of suspect Tweets. To examine their collaborative deliberations we developed a novel experimental design and analytical model. Our analyses indicate that the salience of online information to people's cultural identities influences their decision to fact-check it, that fact-checking deliberations are often non-linear and iterative, that collaborative fact-checking leads people to revise their initial judgments about the credibility of online information, and that when online information is highly salient to people's cultural identities, they apply different standards of credibility when fact-checking it. In conclusion, we propose that cultural identity is an important factor in the fact-checking of online information, and that joint fact-checking of online information by people from diverse cultural backgrounds may have significant potential as an educational tool to reduce people's susceptibility to misinformation.
Affiliation(s)
- Eli Gottlieb: Institut Interdisciplinaire de l'Innovation, Centre National de la Recherche Scientifique, and Télécom Paris, Palaiseau, France; Graduate School of Education and Human Development, The George Washington University, Washington, DC, United States
- Michael Baker: Institut Interdisciplinaire de l'Innovation, Centre National de la Recherche Scientifique, and Télécom Paris, Palaiseau, France
- Françoise Détienne: Institut Interdisciplinaire de l'Innovation, Centre National de la Recherche Scientifique, and Télécom Paris, Palaiseau, France
6. Kožuh I, Čakš P. Social Media Fact-Checking: The Effects of News Literacy and News Trust on the Intent to Verify Health-Related Information. Healthcare (Basel) 2023; 11:2796. PMID: 37893870; PMCID: PMC10606871; DOI: 10.3390/healthcare11202796.
Abstract
The recent health crisis and the rapid development of Artificial Intelligence have caused misinformation on social media to flourish, becoming more sophisticated and harder to detect. This calls for fact-checking and raises questions about users' competencies and attitudes when assessing social media news. Our study provides a model of how fact-checking intent is explained by news literacy and news trust, to examine how users behave in the misinformation-prone social media environment. Structural equation modeling was used to examine survey data gathered from social media users. The findings revealed that users' intent to fact-check information in social media news is explained by (1) news literacy, such as awareness of the various techniques creators use to depict situations about COVID-19; (2) news trust, in terms of the conviction that the news contains all the essential facts; and (3) intent, such as an aim to check information in multiple pieces of news. The presented findings may aid policymakers and practitioners in developing efficient communication strategies for reaching users less prone to fact-checking. Our contribution offers a new understanding of news literacy as a sufficient tool for combating misinformation, one that actively equips users with knowledge and an attitude for fact-checking social media news.
Affiliation(s)
- Ines Kožuh: Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia
7. Lin H, Lasser J, Lewandowsky S, Cole R, Gully A, Rand DG, Pennycook G. High level of correspondence across different news domain quality rating sets. PNAS Nexus 2023; 2:pgad286. PMID: 37719749; PMCID: PMC10500312; DOI: 10.1093/pnasnexus/pgad286.
Abstract
One widely used approach for quantifying misinformation consumption and sharing is to evaluate the quality of the news domains that a user interacts with. However, different media organizations and fact-checkers have produced different sets of news domain quality ratings, raising questions about the reliability of these ratings. In this study, we compared six sets of expert ratings and found that they generally correlated highly with one another. We then created a comprehensive set of domain ratings for use by the research community (github.com/hauselin/domain-quality-ratings), leveraging an ensemble "wisdom of experts" approach. To do so, we performed imputation together with principal component analysis to generate a set of aggregate ratings. The resulting rating set comprises 11,520 domains-the most extensive coverage to date-and correlates well with other rating sets that have more limited coverage. Together, these results suggest that experts generally agree on the relative quality of news domains, and the aggregate ratings that we generate offer a powerful research tool for evaluating the quality of news consumed or shared and the efficacy of misinformation interventions.
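The imputation-plus-PCA aggregation can be sketched on synthetic data. The ratings, noise level, and 40% missingness below are hypothetical, and simple mean imputation stands in for the paper's imputation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_domains, n_sets = 200, 6

# hypothetical setup: six expert rating sets for 200 news domains, each a
# noisy view of the same underlying quality, each covering only some domains
quality = rng.uniform(0, 1, n_domains)
ratings = quality[:, None] + rng.normal(0.0, 0.1, (n_domains, n_sets))
ratings[rng.random(ratings.shape) < 0.4] = np.nan   # simulate missing coverage

# step 1: impute each rating set's missing entries with that set's mean
col_mean = np.nanmean(ratings, axis=0)
filled = np.where(np.isnan(ratings), col_mean, ratings)

# step 2: first principal component of the standardised ratings (via SVD)
# serves as the aggregate "wisdom of experts" score
z = (filled - filled.mean(axis=0)) / filled.std(axis=0)
u, s, _ = np.linalg.svd(z, full_matrices=False)
pc1 = u[:, 0] * s[0]

# PCA is sign-indeterminate: orient PC1 to agree with the simple mean rating
if np.corrcoef(pc1, filled.mean(axis=1))[0, 1] < 0:
    pc1 = -pc1

print("corr(aggregate score, underlying quality):",
      round(float(np.corrcoef(pc1, quality)[0, 1]), 2))
```

Because all six sets load on the same underlying factor, PC1 recovers the shared quality signal even though each individual set has gaps.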
Affiliation(s)
- Hause Lin: Hill/Levene Schools of Business, University of Regina, 3737 Wascana Parkway, Regina, SK S4S 0A2, Canada; Sloan School of Management, Massachusetts Institute of Technology, 100 Main St, Cambridge, MA 02142, USA; Department of Psychology, Cornell University, Uris Hall, 211 Tower Rd, Ithaca, NY 14853, USA
- Jana Lasser: Institute for Interactive Systems and Data Science, Graz University of Technology, Inffeldgasse 16C, 8010 Graz, Austria; Complexity Science Hub Vienna, Josefstädterstraße 39, 1080 Vienna, Austria
- Stephan Lewandowsky: School of Psychological Science, University of Bristol, 12a Priory Road, Bristol BS8 1TU, UK; School of Psychology, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009, Australia
- Rocky Cole: Jigsaw (Google LLC), 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
- Andrew Gully: Jigsaw (Google LLC), 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
- David G Rand: Sloan School of Management, Massachusetts Institute of Technology, 100 Main St, Cambridge, MA 02142, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar St, Cambridge, MA 02139, USA
- Gordon Pennycook: Hill/Levene Schools of Business, University of Regina, 3737 Wascana Parkway, Regina, SK S4S 0A2, Canada; Department of Psychology, Cornell University, Uris Hall, 211 Tower Rd, Ithaca, NY 14853, USA; Department of Psychology, University of Regina, 3737 Wascana Parkway, Regina, SK S4S 0A2, Canada
8. Martínez-García L, Ferrer I. Fact-Checking Journalism: A Palliative Against the COVID-19 Infodemic in Ibero-America. Journal Mass Commun Q 2023; 100:264-285. PMID: 38602932; PMCID: PMC10125874; DOI: 10.1177/10776990231164168.
Abstract
This study explores how fact-checkers understand information disorder in Ibero-America, in particular COVID-19 disinformation. We conducted a quantitative content analysis of the LatamChequea Coronavirus alliance database and in-depth interviews with journalists from the network. We found that one of the most prevalent disinformation topics was governments' restrictive measures, which threatened to jeopardize the effectiveness of public health campaigns. This, together with disinformation that eroded trust in institutions and the press, and the opacity of governments, amounted to a political crisis in Ibero-America. In this scenario, fact-checkers created relevant journalistic collaborations and strategies to fight disinformation in the region.
9. Singer JB. Closing the Barn Door? Fact-Checkers as Retroactive Gatekeepers of the COVID-19 "Infodemic". Journal Mass Commun Q 2023; 100:332-353. PMID: 38602946; PMCID: PMC10119658; DOI: 10.1177/10776990231168599.
Abstract
Based on a study of U.S.-tagged items in a global database of fact-checked statements about the novel coronavirus throughout the first year of the pandemic, this article explores the nature of fact-checkers' "retroactive gatekeeping." This term is introduced here to describe the process of assessing the veracity of information after it has entered the public domain rather than before. Although an overwhelming majority of statements across 16 thematic categories were deemed false and debunked, often repeatedly, misinformation continued to circulate freely and widely.
10. Rivas-de-Roca R, Pérez-Curiel C. Global political leaders during the COVID-19 vaccination: Between propaganda and fact-checking. Politics Life Sci 2023; 42:104-119. PMID: 37140226; DOI: 10.1017/pls.2023.4.
Abstract
The advent of COVID-19 vaccination meant a moment of hope after months of crisis communication. However, the context of disinformation on social media threatened the success of this public health campaign. This study examines how heads of government and fact-checking organizations in four countries managed communications on Twitter about the vaccination. Specifically, we conduct a content analysis of their discourses through the observation of propaganda mechanisms. The research draws on a corpus of words related to the pandemic and vaccines in France, Spain, the United Kingdom, and the United States (n = 2,800). The data were captured for a five-month period (January-May 2021), during which COVID-19 vaccines became available for elderly people. The results show a trend of clearly fallacious communication among the political leaders, based on the tools of emphasis and appeal to emotion. We argue that the political messages about the vaccination mainly used propaganda strategies. These tweets also set, to a certain extent, the agendas of the most relevant fact-checking initiatives in each country.
11. Prike T, Reason R, Ecker UKH, Swire-Thompson B, Lewandowsky S. Would I lie to you? Party affiliation is more important than Brexit in processing political misinformation. R Soc Open Sci 2023; 10:220508. PMID: 36756068; PMCID: PMC9890089; DOI: 10.1098/rsos.220508.
Abstract
In recent years, the UK has become divided along two key dimensions: party affiliation and Brexit position. We explored how division along these two dimensions interacts with the correction of political misinformation. Participants saw accurate and inaccurate statements (either balanced or mostly inaccurate) from two politicians from opposing parties but the same Brexit position (Experiment 1), or the same party but opposing Brexit positions (Experiment 2). Replicating previous work, fact-checking statements led participants to update their beliefs, increasing belief after fact affirmations and decreasing belief for corrected misinformation, even for politically aligned material. After receiving fact-checks participants had reduced voting intentions and more negative feelings towards party-aligned politicians (likely due to low baseline support for opposing party politicians). For Brexit alignment, the opposite was found: participants reduced their voting intentions and feelings for opposing (but not aligned) politicians following the fact-checks. These changes occurred regardless of the proportion of inaccurate statements, potentially indicating participants expect politicians to be accurate more than half the time. Finally, although we found division based on both party and Brexit alignment, effects were much stronger for party alignment, highlighting that even though new divisions have emerged in UK politics, the old divides remain dominant.
Affiliation(s)
- Toby Prike: School of Psychological Science, University of Western Australia, Perth, Australia
- Robert Reason: School of Psychological Science, University of Bristol, Bristol, UK
- Ullrich K. H. Ecker: School of Psychological Science, University of Western Australia, Perth, Australia; Public Policy Institute, University of Western Australia, Perth, Australia
- Briony Swire-Thompson: Network Science Institute, Northeastern University, Boston, MA, USA; Institute of Quantitative Social Science, Harvard University, Cambridge, MA, USA
- Stephan Lewandowsky: School of Psychological Science, University of Western Australia, Perth, Australia; School of Psychological Science, University of Bristol, Bristol, UK; Department of Psychology, University of Potsdam, Potsdam, Germany
12. van Swol LM, Polman E, Paik JE, Chang CT. Effects of Gain/Loss Frames on Telling Lies of Omission and Commission. Cogn Emot 2022; 36:1287-1298. PMID: 35881056; DOI: 10.1080/02699931.2022.2105307.
Abstract
An increased focus on fake news and misinformation is currently emerging. But what does it mean when information is designated as "fake?" Research on deception has focused on lies of commission, in which people disclose something false as true. However, people can also lie by omission, by withholding important yet true information. In this research, we investigate when people are more likely to tell a lie of omission. In three studies, with tests among undergraduates, online sample respondents, and candidates for U.S. Senate, we found that people in a gain frame were more likely to lie by omission (vs. commission), and vice versa for a loss frame. Moreover, participants rated lies of commission in a gain frame as the least acceptable type of deception, suggesting why people may avoid telling this kind of lie. Overall, our results emphasize that from frame-to-frame, lying is not only different in degree but different in kind.
Affiliation(s)
- Lyn M van Swol: Department of Communication Arts, University of Wisconsin-Madison, Madison, WI, USA
- Evan Polman: Department of Marketing, University of Wisconsin-Madison, Madison, WI, USA
- Jihyun Esther Paik: Department of Communication Studies, Texas Christian University, Fort Worth, TX, USA
- Chen-Ting Chang: Department of Communication Arts, University of Wisconsin-Madison, Madison, WI, USA
13. Kolluri N, Liu Y, Murthy D. COVID-19 Misinformation Detection: Machine-Learned Solutions to the Infodemic. JMIR Infodemiology 2022; 2:e38756. PMID: 37113446; PMCID: PMC9987189; DOI: 10.2196/38756.
Abstract
Background The volume of COVID-19-related misinformation has long exceeded the resources available to fact checkers to effectively mitigate its ill effects. Automated and web-based approaches can provide effective deterrents to online misinformation. Machine learning-based methods have achieved robust performance on text classification tasks, including potentially low-quality-news credibility assessment. Despite the progress of initial, rapid interventions, the enormity of COVID-19-related misinformation continues to overwhelm fact checkers. Therefore, improvement in automated and machine-learned methods for an infodemic response is urgently needed. Objective The aim of this study was to achieve improvement in automated and machine-learned methods for an infodemic response. Methods We evaluated three strategies for training a machine-learning model to determine the highest model performance: (1) COVID-19-related fact-checked data only, (2) general fact-checked data only, and (3) combined COVID-19 and general fact-checked data. We created two COVID-19-related misinformation data sets from fact-checked "false" content combined with programmatically retrieved "true" content. The first set contained ~7000 entries from July to August 2020, and the second contained ~31,000 entries from January 2020 to June 2022. We crowdsourced 31,441 votes to human label the first data set. Results The models achieved an accuracy of 96.55% and 94.56% on the first and second external validation data set, respectively. Our best-performing model was developed using COVID-19-specific content. We were able to successfully develop combined models that outperformed human votes of misinformation. Specifically, when we blended our model predictions with human votes, the highest accuracy we achieved on the first external validation data set was 99.1%. 
When we considered outputs where the machine-learning model agreed with human votes, we achieved accuracies up to 98.59% on the first validation data set. This outperformed human votes alone with an accuracy of only 73%. Conclusions External validation accuracies of 96.55% and 94.56% are evidence that machine learning can produce superior results for the difficult task of classifying the veracity of COVID-19 content. Pretrained language models performed best when fine-tuned on a topic-specific data set, while other models achieved their best accuracy when fine-tuned on a combination of topic-specific and general-topic data sets. Crucially, our study found that blended models, trained/fine-tuned on general-topic content with crowdsourced data, improved our models' accuracies up to 99.7%. The successful use of crowdsourced data can increase the accuracy of models in situations when expert-labeled data are scarce. The 98.59% accuracy on a "high-confidence" subsection comprised of machine-learned and human labels suggests that crowdsourced votes can optimize machine-learned labels to improve accuracy above human-only levels. These results support the utility of supervised machine learning to deter and combat future health-related disinformation.
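The blending of model predictions with crowd votes can be illustrated on synthetic scores. The noise levels and the simple averaging rule below are assumptions for illustration, not the paper's ensembling method or its reported accuracies.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
truth = rng.random(n) < 0.5                      # True = misinformation

# hypothetical scores in [0, 1]: a model probability and a crowd vote
# fraction, each a noisy signal of the true label
model_p = np.clip(truth + rng.normal(0, 0.35, n), 0, 1)
votes = np.clip(truth + rng.normal(0, 0.45, n), 0, 1)

def accuracy(scores):
    """Classify at a 0.5 threshold and compare with the true labels."""
    return float(((scores > 0.5) == truth).mean())

blend = 0.5 * model_p + 0.5 * votes              # simple average blend

# "high-confidence" subset: items where model and crowd agree
agree = (model_p > 0.5) == (votes > 0.5)
agree_acc = float(((blend[agree] > 0.5) == truth[agree]).mean())

print("model:", accuracy(model_p), "votes:", accuracy(votes),
      "blend:", accuracy(blend), "agreement subset:", agree_acc)
```

Because the two error sources are independent, averaging them cancels noise, and restricting to items where both signals agree screens out most of the remaining errors, mirroring the paper's high-confidence subsection.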
Affiliation(s)
- Nikhil Kolluri: Computational Media Lab, Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, United States
- Yunong Liu: School of Engineering, College of Science and Engineering, University of Edinburgh, Edinburgh, United Kingdom
- Dhiraj Murthy: Computational Media Lab, School of Journalism and Media, Moody College of Communication, The University of Texas at Austin, Austin, TX, United States
14. Chen Q, Zhang Y, Evans R, Min C. Why Do Citizens Share COVID-19 Fact-Checks Posted by Chinese Government Social Media Accounts? The Elaboration Likelihood Model. Int J Environ Res Public Health 2021; 18(19):10058. PMID: 34639361; PMCID: PMC8508168; DOI: 10.3390/ijerph181910058.
Abstract
Widespread misinformation about COVID-19 poses a significant threat to citizens' long-term health and to combating the disease. To fight the spread of misinformation, Chinese governments have used official social media accounts to participate in fact-checking activities. This study aims to investigate why citizens share fact-checks about COVID-19 and how to promote this activity. Based on the elaboration likelihood model, we explore the effects of peripheral cues (social media capital, social media strategy, media richness, and source credibility) and central cues (content theme and content importance) on the number of shares of fact-checks posted by official Chinese Government social media accounts. In total, 820 COVID-19 fact-checks from 413 Chinese Government Sina Weibo accounts were obtained and evaluated. Results show that both peripheral and central cues play important roles in the sharing of fact-checks. For peripheral cues, social media capital and media richness significantly promote the number of shares. Compared with the push strategy, both the pull strategy and the networking strategy facilitate greater fact-check sharing. Fact-checks posted by Central Government accounts receive more shares than those posted by local government accounts. For central cues, content importance positively predicts the number of shares. Compared with fact-checks about the latest COVID-19 news, those about government actions received fewer shares, while those about social conditions received more.
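Regressing share counts on peripheral and central cues can be sketched on simulated data. The predictors, coefficients, and data-generating process below are made up, and an ordinary least-squares fit on log counts stands in for the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400

# hypothetical cue measurements for n fact-check posts
followers = rng.normal(0, 1, n)      # social media capital (standardised)
richness = rng.integers(0, 2, n)     # 1 = post includes an image or video
importance = rng.normal(0, 1, n)     # coded content importance (standardised)

# hypothetical data-generating process for share counts
log_shares = (0.8 * followers + 0.5 * richness + 0.3 * importance
              + rng.normal(0, 0.5, n))
shares = np.round(np.exp(log_shares + 3)).astype(int)

# regress log(1 + shares) on the cues by least squares
X = np.column_stack([np.ones(n), followers, richness, importance])
beta, *_ = np.linalg.lstsq(X, np.log1p(shares), rcond=None)
print("intercept, capital, richness, importance:", np.round(beta, 2))
```

With this setup all three cue coefficients come back positive, mirroring the qualitative pattern the study reports for social media capital, media richness, and content importance.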
Affiliation(s)
- Qiang Chen: School of Journalism and New Media, Xi'an Jiaotong University, Xi'an 710049, China
- Yangyi Zhang: School of Journalism and New Media, Xi'an Jiaotong University, Xi'an 710049, China
- Richard Evans: Faculty of Computer Science, Dalhousie University, Halifax, NS B3H 4R2, Canada
- Chen Min: College of Public Administration, Huazhong University of Science and Technology, Wuhan 430074, China; Department of Media and Communication, City University of Hong Kong, Hong Kong
15
Porter E, Wood TJ. The global effectiveness of fact-checking: Evidence from simultaneous experiments in Argentina, Nigeria, South Africa, and the United Kingdom. Proc Natl Acad Sci U S A 2021; 118:e2104235118. [PMID: 34507996] [DOI: 10.1073/pnas.2104235118]
Abstract
The spread of misinformation is a global phenomenon, with implications for elections, state-sanctioned violence, and health outcomes. Yet, even though scholars have investigated the capacity of fact-checking to reduce belief in misinformation, little evidence exists on the global effectiveness of this approach. We describe fact-checking experiments conducted simultaneously in Argentina, Nigeria, South Africa, and the United Kingdom, in which we studied whether fact-checking can durably reduce belief in misinformation. In total, we evaluated 22 fact-checks, including two that were tested in all four countries. Fact-checking reduced belief in misinformation, with most effects still apparent more than 2 wk later. A meta-analytic procedure indicates that fact-checks reduced belief in misinformation by at least 0.59 points on a 5-point scale. Exposure to misinformation, however, only increased false beliefs by less than 0.07 points on the same scale. Across continents, fact-checks reduce belief in misinformation, often durably so.
16
Abstract
Countering misinformation can reduce belief in the moment, but corrective messages quickly fade from memory. We tested whether the longer-term impact of fact-checks depends on when people receive them. In two experiments (total N = 2,683), participants read true and false headlines taken from social media. In the treatment conditions, “true” and “false” tags appeared before, during, or after participants read each headline. Participants in a control condition received no information about veracity. One week later, participants in all conditions rated the same headlines’ accuracy. Providing fact-checks after headlines (debunking) improved subsequent truth discernment more than providing the same information during (labeling) or before (prebunking) exposure. This finding informs the cognitive science of belief revision and has practical implications for social media platform designers.
17
Song Y, Ko L, Jang SH. The South Korean Government's Response to Combat COVID-19 Misinformation: Analysis of "Fact and Issue Check" on the Korea Centers for Disease Control and Prevention Website. Asia Pac J Public Health 2021; 33:620-622. [PMID: 33955279] [DOI: 10.1177/10105395211014705]
Abstract
This study aimed to examine the types of misinformation spreading in South Korea during the coronavirus disease 2019 (COVID-19) pandemic by exploring the fact-checking posts uploaded to the Korea Centers for Disease Control and Prevention (KCDC) website. We conducted a content analysis of the posts on the KCDC website titled "COVID-19: Fact and Issue Check" from February to August 2020 (n = 81). Two coders individually coded the posts using a codebook, and discrepancies in coding were discussed until reconciled. Fifteen different Korean government agencies used the KCDC platform to refute various topics of COVID-19 misinformation, including policy (42.0%), how to prevent the spread (16.0%), health care professionals (12.3%), testing (11.1%), prevention (self-care) (9.9%), masks (8.6%), confirmed cases (8.6%), statistics (3.7%), self-quarantine (2.5%), and treatment (1.2%). We found that nonmedical COVID-19 misinformation was disseminated and corrected more often than medical misinformation in Korea. Future studies should examine to what extent the corrected COVID-19 misinformation has been disseminated on other social media platforms beyond the KCDC website.
Affiliation(s)
- Yaena Song: Teachers College, Columbia University, New York, NY, USA
- Linda Ko: University of Washington, Seattle, WA, USA
18
López-García X, Costa-Sánchez C, Vizoso Á. Journalistic Fact-Checking of Information in Pandemic: Stakeholders, Hoaxes, and Strategies to Fight Disinformation during the COVID-19 Crisis in Spain. Int J Environ Res Public Health 2021; 18:1227. [PMID: 33573013] [PMCID: PMC7908612] [DOI: 10.3390/ijerph18031227]
Abstract
The public health crisis created by COVID-19 represents a challenge for journalists and the media. Specialised health and science reporting has become a necessity, both to deal with the current situation and to meet society's demand for information. In this context of heightened uncertainty, the circulation of fake news on social networks and messaging applications has proliferated, producing what has become known as an 'infodemic'. This paper focuses on journalistic fact-checking using a combined methodology: a content analysis of claims debunked by the main Spanish fact-checking platforms (Maldita and Newtral) and an in-depth questionnaire administered to these stakeholders. The results confirm the quantitative and qualitative evolution of disinformation. Quantitatively, more fact-checking is performed during the state of alarm. Qualitatively, hoaxes increase in complexity as the pandemic evolves, to the point that genuine disinformation engineering takes place, and this is expected to continue until the development of a vaccine.
Affiliation(s)
- Xosé López-García: Novos Medios Research Group, Universidade de Santiago de Compostela, 15782 Santiago, Spain
- Carmen Costa-Sánchez: Culture and Interactive Communication Research Group, Universidade da Coruña, 15008 A Coruña, Spain
- Ángel Vizoso: Novos Medios Research Group, Universidade de Santiago de Compostela, 15782 Santiago, Spain
19
Liu PL, Huang LV. Digital Disinformation About COVID-19 and the Third-Person Effect: Examining the Channel Differences and Negative Emotional Outcomes. Cyberpsychol Behav Soc Netw 2020; 23:789-793. [PMID: 32757953] [DOI: 10.1089/cyber.2020.0363]
Abstract
Expanding third-person effect (TPE) research to digital disinformation, this article investigates the impact of COVID-19 digital fake news exposure on individuals' perceived susceptibility to influence for themselves, their close others, and their distant others. Findings from a survey of 511 Chinese respondents suggest that, overall, individuals perceive themselves to be less vulnerable than close others and distant others to the impact of COVID-19 digital disinformation. The highest self-other perceptual discrepancy is found when individuals receive disinformation on mobile social networking apps. Also, individuals who practice more active fact-checking perceive themselves to be less susceptible. Both the perceived effect of disinformation on the self and the self-other perceptual discrepancy are positively related to emotional responses (anxiety, fear, and worry) to the COVID-19 pandemic. This study contributes to existing research by linking exposure to disinformation in different digital channels, the TPE, and emotional outcomes in the context of a public health crisis. It also highlights the importance of educating and enabling fact-checking behaviors on digital media, which could help reduce the negative emotional impact of disinformation.
Affiliation(s)
- Piper Liping Liu: Department of Communications and New Media, National University of Singapore, Singapore
- Lei Vincent Huang: Department of Communication Studies, Hong Kong Baptist University, Hong Kong
20
Abstract
Medical students increasingly utilize social media platforms to supplement their preclinical learning; however, the prevalence of social media use for physiology learning in medical education remains unclear. The aim of the present study was to determine how first-year medical students from both direct entry medicine and graduate entry medicine interacted with social media as a learning tool by assessing its prevalence, perceived benefits, favored platforms, and reason(s) for its use. Seventy-one percent of surveyed students (out of 139 participants) stated that they interacted with social media in general more than 12 times per week. However, 98% had previously used internet platforms to source physiology information, with 89.2% doing so at least once per week during term. YouTube was the primary source of learning for 76% of students. Significantly, 94% of students indicated that they would first search for answers online if they did not understand something in physiology rather than contacting their instructor in person or by e-mail. However, only 31% of students "fact-checked" physiology information obtained from online sources, by using textbooks, papers, and/or instructors. Our study has revealed that most preclinical medical students utilize social media extensively to study physiology. However, the absence of academic and ethical oversight, paired with students' lack of critical appraisal of possibly inaccurate information, does raise concerns about the overall utility of social media as part of physiology education.
Affiliation(s)
- Dervla O'Malley: Department of Physiology, University College Cork, Cork, Ireland
- Denis S Barry: Department of Anatomy, Trinity College Dublin, Dublin 2, Ireland
- Mark G Rae: Department of Physiology, University College Cork, Cork, Ireland
21
Abstract
Fact-checking has become an important feature of the modern media landscape. However, it is unclear what the most effective format of fact-checks is. Some have argued that simple retractions that repeat a false claim and tag it as false may backfire because they boost the claim's familiarity. More detailed refutations may provide a more promising approach, but may not be feasible under the severe space constraints associated with social-media communication. In two experiments, we tested whether (1) simple 'false-tag' retractions can indeed be ineffective or harmful; and (2) short-format (140-character) refutations are more effective than simple retractions. Regarding (1), simple retractions reduced belief in false claims, and we found no evidence for a familiarity-driven backfire effect. Regarding (2), short-format refutations were found to be more effective than simple retractions after a one-week delay but not a one-day delay. At both delays, however, they were associated with reduced misinformation-congruent reasoning.
Affiliation(s)
- Ullrich K H Ecker: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Ziggy O'Reilly: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Jesse S Reid: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Ee Pin Chang: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
22
Aird MJ, Ecker UKH, Swire B, Berinsky AJ, Lewandowsky S. Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. R Soc Open Sci 2018; 5:180593. [PMID: 30662715] [PMCID: PMC6304148] [DOI: 10.1098/rsos.180593]
Abstract
In the 'post-truth era', political fact-checking has become an issue of considerable significance. A recent study in the context of the 2016 US election found that fact-checks of statements by Donald Trump changed participants' beliefs about those statements (regardless of whether participants supported Trump) but not their feelings towards Trump or their voting intentions. However, the study balanced corrections of inaccurate statements with an equal number of affirmations of accurate statements. Therefore, the null effect of fact-checks on participants' voting intentions and feelings may have arisen because of this artificially created balance. Moreover, Trump's statements were not contrasted with statements from an opposing politician, and Trump's perceived veracity was not measured. The present study (N = 370) examined the issue further, manipulating the ratio of corrections to affirmations, and using Australian politicians (and Australian participants) from both sides of the political spectrum. We hypothesized that fact-checks would correct beliefs and that fact-checks would affect voters' support (i.e. voting intentions, feelings, and perceptions of veracity), but only when corrections outnumbered affirmations. Both hypotheses were supported, suggesting that a politician's veracity does sometimes matter to voters. The effects of fact-checking were similar on both sides of the political spectrum, suggesting little motivated reasoning in the processing of fact-checks.
Affiliation(s)
- Michael J. Aird: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Ullrich K. H. Ecker: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Briony Swire: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia; Department of Political Science, Northeastern University, Boston, MA, USA; Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Adam J. Berinsky: Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Stephan Lewandowsky: School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia; School of Experimental Psychology and Cabot Institute, University of Bristol, Bristol, UK
23
Abstract
Today's media landscape affords people access to richer information than ever before, with many individuals opting to consume content through social channels rather than traditional news sources. Although people frequent social platforms for a variety of reasons, we understand little about the consequences of encountering new information in these contexts, particularly with respect to how content is scrutinized. This research tests how perceiving the presence of others (as on social media platforms) affects the way individuals evaluate information, in particular the extent to which they verify ambiguous claims. Eight experiments using incentivized real-effort tasks found that people are less likely to fact-check statements when they feel they are evaluating them in the presence of others than when evaluating them alone. Inducing vigilance immediately before evaluation increased fact-checking in social settings.