1. Bang C, Carroll K, Mistry N, Presseau J, Hudek N, Yanikomeroglu S, Brehaut JC. Use of Implementation Science Concepts in the Study of Misinformation: A Scoping Review. Health Education & Behavior 2025; 52:340-353. PMID: 39691052; DOI: 10.1177/10901981241303871.
Abstract
Misinformation hinders the impact of public health initiatives. Efforts to counter misinformation likely do not consider the full range of factors known to affect how individuals make decisions and act on them. Implementation science tools and concepts can facilitate the development of more effective interventions against health misinformation by leveraging advances in behavior specification, uptake of evidence, and theory-guided development and evaluation of complex interventions. We conducted a scoping review of misinformation literature reviews to document whether and how important concepts from implementation science have already informed the study of misinformation. Of 90 included reviews, the most frequently identified implementation science concepts were consideration of mechanisms driving misinformation (78%) and ways to intervene on, reduce, avoid, or circumvent it (71%). Other implementation science concepts were discussed much less frequently, such as tailoring strategies to the relevant context (9%) or public involvement in intervention development (9%). Less than half of reviews (47%) were guided by any theory, model, or framework. Among the 26 reviews that cited existing theories, most used theory narratively (62%) or only mentioned/cited the theory (19%), rather than using theory explicitly to interpret results (15%) or to inform data extraction (12%). Despite considerable research and many summaries of how to intervene against health misinformation, there has been relatively little consideration of many important advances in the science of health care implementation. This review identifies key areas from implementation science that might be useful to support future research into designing effective misinformation interventions.
Affiliation(s)
- Carla Bang: McMaster University, Hamilton, Ontario, Canada; The Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Kelly Carroll: The Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Niyati Mistry: The Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Justin Presseau: The Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; University of Ottawa, Ottawa, Ontario, Canada
- Natasha Hudek: The Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Jamie C Brehaut: The Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; University of Ottawa, Ottawa, Ontario, Canada
2. Piksa M, Zaniewska M, Cieslik-Starkiewicz A, Kunst J, Morzy M, Piasecki J, Rygula R. The link between tobacco smoking and susceptibility to misinformation. Psychopharmacology (Berl) 2025. PMID: 40369169; DOI: 10.1007/s00213-025-06802-1.
Abstract
INTRODUCTION This study investigates the relationship between tobacco smoking and susceptibility to misinformation, an area that has been underexplored despite its potential implications for public health and media literacy. Smoking behavior, along with the pharmacological components present in tobacco, is often associated with habitual and cognitive patterns that may influence an individual's ability to critically evaluate and discern false information. By examining this potential link, the present study aims to shed light on the broader implications of smoking for societal challenges, such as the spread of misinformation. METHODS A quantitative online survey was conducted to collect data from a sample of 1,575 adult participants (Mage = 41.37, SD = 13.58; females: 54%, males: 46%) from the United Kingdom. Participants were categorized into three groups based on their smoking status: individuals who had smoked tobacco less than an hour before the study (n = 550), individuals who had smoked more than an hour before the study (n = 472), and non-smokers (n = 553). The survey assessed susceptibility to misinformation by asking participants to judge claims as true or false, and included additional instruments to control for impulsivity, stress level, physiological arousal, and education level. RESULTS Smokers exhibited a lower ability to correctly recognize false claims than non-smokers. There was no difference between these groups in true news recognition. DISCUSSION Controlling for confounding factors such as education and perceived stress, the study reveals that tobacco smoking may be associated with misinformation susceptibility. Further laboratory-based research should be conducted to explore the mechanisms underlying the observed relationship.
Affiliation(s)
- Michal Piksa: Maj Institute of Pharmacology, Polish Academy of Sciences, Affective Cognitive Neuroscience Laboratory, 12 Smetna Street, Krakow, 31-343, Poland
- Magdalena Zaniewska: Maj Institute of Pharmacology, Polish Academy of Sciences, Affective Cognitive Neuroscience Laboratory, 12 Smetna Street, Krakow, 31-343, Poland
- Agata Cieslik-Starkiewicz: Maj Institute of Pharmacology, Polish Academy of Sciences, Affective Cognitive Neuroscience Laboratory, 12 Smetna Street, Krakow, 31-343, Poland
- Jonas Kunst: Department of Communication and Culture, BI Norwegian Business School, Nydalsveien 37, Oslo, N-0484, Norway
- Mikolaj Morzy: Faculty of Computing and Telecommunications, Poznan University of Technology, Piotrowo 2, Poznan, 60-965, Poland
- Jan Piasecki: Faculty of Health Sciences, Department of Bioethics, Jagiellonian University Medical College, Kopernika 40, Krakow, 31-501, Poland
- Rafal Rygula: Maj Institute of Pharmacology, Polish Academy of Sciences, Affective Cognitive Neuroscience Laboratory, 12 Smetna Street, Krakow, 31-343, Poland
3. Zhang SQ, Li MH, Li YC, Rao LL. Effects of childhood environments on the discernment of health misinformation. Soc Sci Med 2025; 380:118179. PMID: 40393219; DOI: 10.1016/j.socscimed.2025.118179.
Abstract
The wide dissemination of COVID-19 and other health misinformation poses a significant threat to individuals' well-being. We investigated how two key features of childhood environments, uncertainty and harshness, influence individuals' ability to distinguish COVID-19 and other health-related truths from misinformation (i.e., accuracy discernment and sharing discernment). Across four studies (including two preregistered studies, total N = 4874), we found that greater childhood uncertainty was associated with worse accuracy discernment and sharing discernment, whereas greater childhood harshness was associated with better accuracy discernment. We also found that the associations between childhood environments and discernment were mediated by analytic thinking (Studies 1-3). Furthermore, recalling or imagining uncertain childhood events led to a decrease in sharing discernment (Study 4). These findings offer insight into how childhood environments influence the ability to discern truth from falsehood on social media later in life, which may contribute to the establishment of corresponding interventions to combat the negative impact of misinformation on public health.
Affiliation(s)
- Si-Qi Zhang: State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Ming-Hui Li: School of Criminology, People's Public Security University of China, Beijing, 100038, China
- Yu-Chu Li: State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Li-Lin Rao: State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
4. Meshi D, Molina MD. Problematic social media use is associated with believing in and engaging with fake news. PLoS One 2025; 20:e0321361. PMID: 40333705; PMCID: PMC12057946; DOI: 10.1371/journal.pone.0321361.
Abstract
Social media use is ubiquitous in our modern society, and some individuals display excessive, maladaptive use of these online platforms. This problematic social media use (PSMU) has been associated with greater impulsivity and risk-taking. Importantly, studies in healthy individuals have demonstrated that greater cognitive impulsivity is associated with a greater susceptibility to online "fake news." Therefore, we hypothesized that PSMU would be associated with believing in and engaging with fake news. To address this, we conducted an online, within-subject experiment in which participants (N = 189; female = 102, male = 86, prefer not to disclose = 1; mean age = 19.8 years) completed a fake news task. This task presented participants with 20 news stories (10 real and 10 false, in random order) formatted as social media posts. We assessed participants' credibility judgments of these news posts, as well as their intentions to click, like, comment, and share the posts. We also assessed participants' degree of PSMU and related this measure to their performance in our task. A repeated-measures analysis of variance (ANOVA) with a mixed-model approach revealed that the greater one's PSMU, the more credible one finds false news specifically. We also found that the greater one's PSMU, the greater one's engagement with news posts, regardless of content type (real or false). Finally, we found that the greater one's PSMU, the greater one's intent to click on false news specifically. Our research demonstrates that individuals who experience the most distress and impairment in daily functioning from social media use are also the most susceptible to false information posted on social media. We discuss the clinical implications of our findings.
Affiliation(s)
- Dar Meshi: Department of Advertising and Public Relations, Michigan State University, East Lansing, Michigan, United States of America
- Maria D. Molina: Department of Advertising and Public Relations, Michigan State University, East Lansing, Michigan, United States of America
5. Ching D, Twomey J, Aylett MP, Quayle M, Linehan C, Murphy G. Can deepfakes manipulate us? Assessing the evidence via a critical scoping review. PLoS One 2025; 20:e0320124. PMID: 40315197; PMCID: PMC12047760; DOI: 10.1371/journal.pone.0320124.
Abstract
Deepfakes are one of the most recent developments in misinformation technology and are capable of superimposing one person's face onto another in video format. The potential of this technology to defame and cause harm is clear. However, despite the grave concerns expressed about deepfakes, these concerns are rarely accompanied by empirical evidence. We present a scoping review of the existing empirical studies that investigate the effects of viewing deepfakes on people's beliefs, memories, and behaviour. Five databases were searched, producing an initial sample of 2004 papers, from which 22 relevant papers were identified, varying in methodology and research methods used. Overall, we found that the early studies on this topic have often produced inconclusive findings regarding the existence of uniquely persuasive or convincing effects of deepfake exposure. Moreover, many experiments demonstrated poor methodology and did not include a non-deepfake comparator (e.g., text-based misinformation). We conclude that speculation and scaremongering about dystopian uses of deepfake technologies has far outpaced the experimental research that assesses these harms. We close by offering insights on how to conduct improved empirical work in this area.
Affiliation(s)
- Didier Ching: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
- John Twomey: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
- Matthew P. Aylett: CereProc Ltd, Edinburgh, United Kingdom; Heriot-Watt University, Edinburgh, United Kingdom
- Michael Quayle: Lero, the Research Ireland Centre for Software, Limerick, Ireland; Centre for Social Issues Research and Department of Psychology, University of Limerick, Limerick, Ireland; Department of Psychology, School of Applied Human Sciences, University of KwaZulu-Natal, Pietermaritzburg, South Africa
- Conor Linehan: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
- Gillian Murphy: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
6. Schulz L, Streicher Y, Schulz E, Bhui R, Dayan P. Mechanisms of mistrust: A Bayesian account of misinformation learning. PLoS Comput Biol 2025; 21:e1012814. PMID: 40367148; PMCID: PMC12077715; DOI: 10.1371/journal.pcbi.1012814.
Abstract
From the intimate realm of personal interactions to the sprawling arena of political discourse, discerning the trustworthy from the dubious is crucial. Here, we present a novel behavioral task and accompanying Bayesian models that allow us to study key aspects of this learning process in a tightly controlled setting. In our task, participants are confronted with several different types of (mis-)information sources, ranging from ones that lie to ones with biased reporting, and have to learn these attributes under varying degrees of feedback. We formalize inference in this setting as a doubly Bayesian learning process where agents simultaneously learn about the ground truth as well as the qualities of an information source reporting on this ground truth. Our model and detailed analyses reveal how participants can generally follow Bayesian learning dynamics, highlighting a basic human ability to learn about diverse information sources. This learning is also reflected in explicit trust reports about the sources. We additionally show how participants approached the inference problem with priors that held sources to be helpful. Finally, when outside feedback was noisier, participants still learned along Bayesian lines but struggled to pick up on biases in information. Our work pins down computationally the generally impressive human ability to learn the trustworthiness of information sources while revealing minor fault lines when it comes to noisier environments and news sources with a slant.
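The "doubly Bayesian" learning process described above can be illustrated with a toy model, a minimal sketch in Python in which an agent jointly infers a binary ground truth and a source's reliability; the two-state setup, the discretized reliability grid, and all variable names are illustrative assumptions, not the authors' implementation:

    import numpy as np

    # Reliability r = P(report matches truth); r < 0.5 models a "liar" source.
    truth_vals = np.array([0, 1])              # possible ground truths
    rel_grid = np.linspace(0.01, 0.99, 99)     # discretized reliabilities

    # Uniform joint prior P(truth, reliability)
    joint = np.ones((2, rel_grid.size))
    joint /= joint.sum()

    def update(joint, report):
        """Bayes update of the joint posterior after one source report."""
        # Likelihood of the report under each (truth, reliability) pair
        like = np.where(truth_vals[:, None] == report,
                        rel_grid[None, :], 1.0 - rel_grid[None, :])
        post = joint * like
        return post / post.sum()

    for report in [1, 1, 0, 1]:                # a stream of reports
        joint = update(joint, report)

    p_truth = joint.sum(axis=1)                    # marginal belief about truth
    mean_rel = (joint.sum(axis=0) * rel_grid).sum()  # expected source reliability
    print(f"P(truth=1) = {p_truth[1]:.3f}, E[reliability] = {mean_rel:.3f}")

Consistent reports raise both the belief in the reported state and the inferred reliability of the source; noisier feedback flattens the reliability posterior, which mirrors the paper's finding that biases are harder to detect in noisy environments.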
Affiliation(s)
- Lion Schulz: Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Eric Schulz: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Helmholtz Institute for Human-Centered AI, Helmholtz Munich, Munich, Germany
- Rahul Bhui: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Peter Dayan: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; University of Tübingen, Tübingen, Germany
7. Balumbi M, Stang S, Suriah S, Syarif S, Putro G, Marwang S, Wijaya E. The importance of reproductive health education for elementary school children: Long-term benefits and challenges in implementation - A literature review. Journal of Education and Health Promotion 2025; 14:170. PMID: 40400596; PMCID: PMC12094455; DOI: 10.4103/jehp.jehp_861_24.
Abstract
Reproductive health education at the primary school level is a controversial topic. Although some recognize its importance in providing accurate information about the body and development, others raise concerns about cultural sensitivities and age-appropriateness. This review explores various aspects of the importance of reproductive health education among primary school children, as well as the associated challenges and benefits. This article presents a literature review of previous studies on the implementation of reproductive health education in elementary school children. An extensive search was conducted to identify relevant papers in databases such as ScienceDirect, PubMed, and Google Scholar. Articles were included if they were published between 2013 and 2023 in English and had undergone a rigorous peer-review process. Our review identified substantial benefits of reproductive health education in primary schools. Studies showed a positive impact on reducing misconceptions about reproduction, promoting healthy attitudes towards the body, and potentially lowering risks of teenage pregnancy and sexually transmitted diseases. However, the review also revealed significant challenges. Cultural and religious sensitivities often lead to resistance from some communities. Additionally, ensuring age-appropriate language, content, and delivery methods remains a concern. The findings highlight the need for a balanced approach to reproductive health education in primary schools. While acknowledging cultural sensitivities, strategies such as involving communities and using inclusive language can promote acceptance. Open communication within families and well-trained teachers are crucial for effective implementation. By addressing these challenges through inclusive and age-appropriate methods, reproductive health education programs can equip children with the knowledge and skills necessary for a healthy future.
Affiliation(s)
- Musthamin Balumbi: Doctoral Program in Public Health Sciences, Faculty of Public Health, Hasanuddin University, Makassar, Indonesia
- Stang Stang: Department of Biostatistics and Reproductive Health, Faculty of Public Health, Hasanuddin University, Makassar, Indonesia
- Suriah Suriah: Department of Health Promotion and Behavioral Science, Faculty of Public Health, Hasanuddin University, Makassar, Indonesia
- Syarifuddin Syarif: Department of Electrical Engineering, Faculty of Engineering, Hasanuddin University, Makassar, Indonesia
- Gurendro Putro: Center for Public Health and Nutrition Research, Health Research Organization, National Research and Innovation Agency, Jakarta, Indonesia
- Sumarni Marwang: Department of Midwifery, Faculty of Nursing and Midwifery, Megarezky University, Makassar, Indonesia
- Eri Wijaya: Doctoral Program in Public Health Sciences, Faculty of Public Health, Hasanuddin University, Makassar, Indonesia; Department of Epidemiology, Faculty of Public Health, Hasanuddin University, Makassar, Indonesia
8. Horne BD, Nevo D. People adhere to content warning labels even when they are wrong due to ecologically rational adaptations. Sci Rep 2025; 15:13896. PMID: 40263336; PMCID: PMC12015313; DOI: 10.1038/s41598-025-98221-7.
Abstract
In this paper, we build on the theory of ecologically rational heuristics to demonstrate the effect of erroneously placed warning labels on news headlines. Through three between-subjects experiments (n = 1313), we show that people rely on warning labels when choosing to trust news, even when those labels are wrong. We argue that this over-reliance on content labels is due to ecological-rationality adaptations to current media environments, where warning labels are human-generated and mostly correct. Specifically, news consumers form heuristics based on past interactions with warning labels, and those heuristics can spill over into new media environments where warning-label errors are more frequent. The most important implication of these results is that it is more important to thoughtfully consider what information needs to be labeled than to attempt to label all false information. We discuss how this implication affects our ability to scale warning-label systems.
Affiliation(s)
- Benjamin D Horne: School of Information Sciences, University of Tennessee Knoxville, Knoxville, Tennessee, USA
- Dorit Nevo: Lally School of Management, Rensselaer Polytechnic Institute, Troy, New York, USA
9. Bayrak F, Kayatepe E, Özman N, Yilmaz O, Isler O, Saribay SA. Can reflection mitigate COVID-19 vaccine conspiracy beliefs and hesitancy? Psychol Health 2025:1-32. PMID: 40254737; DOI: 10.1080/08870446.2025.2491598.
Abstract
OBJECTIVE/DESIGN Periods of social turmoil, such as the COVID-19 pandemic, tend to amplify conspiracy beliefs, evidenced by increased vaccine hesitancy. Despite this trend, effective interventions targeting vaccine-related conspiracy beliefs remain scarce, partly due to underexplored cognitive processes. Three competing theoretical accounts offer differing predictions about the role of reflective thinking in supporting conspiracy beliefs: the Motivated Reasoning Account suggests reflection strengthens commitment to pre-existing attitudes; the Reflective Reasoning Account posits that reflection enhances belief accuracy; and the Reflective Doubt Account proposes reflection fosters general scepticism. MAIN OUTCOME MEASURES Utilising open science practices and a validated technique to activate reflection, we conducted an experimental investigation with a diverse sample (N = 1483) segmented by vaccine attitudes. We investigated the impact of reflection on specific and generic COVID-19 conspiracy beliefs and vaccine-support behaviours across pro-vaccine, neutral, and vaccine-hesitant groups, while examining the moderating effects of scientific literacy, intellectual humility, and actively open-minded thinking. RESULTS The confirmatory analysis provided no direct support for the theoretical predictions. However, findings indicated that intellectual humility significantly moderated the effect of reflection, enhancing vaccine-support behaviour among participants with high intellectual humility, highlighting the complex interplay of cognitive style and prior attitudes in shaping responses to conspiracy beliefs and vaccine-support actions. CONCLUSION The study highlights that while reflective thinking alone did not directly influence vaccine-support behaviour, its positive effect emerged among individuals with higher intellectual humility, emphasising the importance of individual differences in shaping belief-related outcomes.
Affiliation(s)
- Fatih Bayrak: Department of Psychology, Baskent University, Ankara, Turkey
- Emre Kayatepe: Department of Psychology, Kadir Has University, Istanbul, Turkey
- Nagihan Özman: Department of Psychology, Kadir Has University, Istanbul, Turkey
- Onurcan Yilmaz: Department of Psychology, Kadir Has University, Istanbul, Turkey
- Ozan Isler: School of Economics, University of Queensland, St Lucia, Australia
- S Adil Saribay: Department of Psychology, Kadir Has University, Istanbul, Turkey
10. Hwang Y, Jeong SH. Generative Artificial Intelligence and Misinformation Acceptance: An Experimental Test of the Effect of Forewarning About Artificial Intelligence Hallucination. Cyberpsychology, Behavior and Social Networking 2025; 28:284-289. PMID: 39992238; DOI: 10.1089/cyber.2024.0407.
Abstract
Generative artificial intelligence (AI) tools can generate statements that are seemingly plausible but factually incorrect. This is referred to as AI hallucination, which can contribute to the generation and dissemination of misinformation. The present study therefore examines whether forewarning about AI hallucination could reduce individuals' acceptance of AI-generated misinformation. An online experiment with 208 Korean adults demonstrated that AI hallucination forewarning reduced misinformation acceptance (p = 0.001, Cohen's d = 0.45), whereas forewarning did not reduce acceptance of true information (p = 0.91). In addition, the effect of AI hallucination forewarning on misinformation acceptance was moderated by preference for effortful thinking (p < 0.01), such that forewarning decreased misinformation acceptance when preference for effortful thinking was high (vs. low).
Affiliation(s)
- Yoori Hwang: Department of Digital Media, Myongji University, Seodaemun-gu, Seoul, Korea
- Se-Hoon Jeong: College of Media and Communication, Korea University, Seongbuk-gu, Seoul, Korea
11. Hubeny TJ, Nahon LS, Ng NL, Gawronski B. Who Falls for Misinformation and Why? Personality and Social Psychology Bulletin 2025. PMID: 40165403; DOI: 10.1177/01461672251328800.
Abstract
Misinformation is widespread, but only some people accept the false information they encounter. This raises two questions: Who falls for misinformation, and why do they fall for misinformation? To address these questions, two studies investigated associations between 15 individual-difference dimensions and judgments of misinformation as true. Using Signal Detection Theory, the studies further investigated whether the obtained associations are driven by individual differences in truth sensitivity, acceptance threshold, or myside bias. For both political misinformation (Study 1) and misinformation about COVID-19 vaccines (Study 2), truth sensitivity was positively associated with cognitive reflection and actively open-minded thinking, and negatively associated with bullshit receptivity and conspiracy mentality. Although acceptance threshold and myside bias explained considerable variance in judgments of misinformation as true, neither showed robust associations with the measured individual-difference dimensions. The findings provide deeper insights into individual differences in misinformation susceptibility and uncover critical gaps in their scientific understanding.
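The signal-detection decomposition used here separates how well people distinguish true from false headlines (truth sensitivity) from how willing they are to call anything true (acceptance threshold). A minimal sketch of the standard computation, where the hit and false-alarm rates are invented example values rather than the study's data:

    from scipy.stats import norm

    # "Hit" = judging a true headline as true;
    # "false alarm" = judging a false headline as true.
    hit_rate, fa_rate = 0.80, 0.35   # made-up example rates

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)            # truth sensitivity
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate)) # acceptance threshold

    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
    # Myside bias can be probed by computing c separately for belief-congruent
    # vs. belief-incongruent items and taking the difference.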
Affiliation(s)
- Lea S Nahon: University of Texas at Austin, Austin, TX, USA
- Nyx L Ng: University of Texas at Austin, Austin, TX, USA
12. Martel C, Mosleh M, Eckles D, Rand DG. Promoting engagement with social fact-checks online: Investigating the roles of social connection and shared partisanship. PLoS One 2025; 20:e0319336. PMID: 40163438; PMCID: PMC11957346; DOI: 10.1371/journal.pone.0319336.
Abstract
Social corrections - where users correct each other - can help rectify inaccurate beliefs. However, social corrections are often ignored. Here we ask under what conditions social corrections promote engagement from corrected users, allowing for greater insight into how users respond to debunking messages (even if such responses are negative). Prior work suggests two key factors may help promote engagement with corrections - partisan alignment between users, and social connections between users. We investigate these factors here. First, we conducted a field experiment on Twitter (X) using human-looking bots to examine how shared partisanship and prior social connection affect correction engagement. We randomized whether our accounts identified as Democrat or Republican, and whether they followed Twitter users and liked three of their tweets before correcting them (creating a minimal social connection). We found that shared partisanship had no significant effect in the baseline (no social connection) condition. Interestingly, social connection increased engagement with corrections from co-partisans. Effects in the social counter-partisan condition were ambiguous. Follow-up survey experiments largely replicated these results and found evidence for a generalized norm of responding, wherein people feel more obligated to respond to people who follow them - even outside the context of misinformation correction. Our findings have important implications for increasing engagement with social corrections online.
Affiliation(s)
- Cameron Martel: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Mohsen Mosleh: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Dean Eckles: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- David G Rand: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
13. Chen W, Shi J, He Q. China's social fake news database release with brain structural, functional, and behavioural measures. Sci Data 2025; 12:538. PMID: 40164637; PMCID: PMC11958673; DOI: 10.1038/s41597-025-04901-4.
Abstract
Fake news poses significant societal risks by spreading rapidly on social media. While existing research predominantly examines its propagation patterns and psychological drivers, the neural underpinnings remain insufficiently understood. Moreover, current studies often focus on Western political contexts, overlooking cultural variations where social-lifestyle fake news may be more prevalent, such as in China. In this paper, we introduce a multimodal dataset that combines neuroimaging, behavioral data, and standardized Chinese social-lifestyle fake and true news materials. The dataset includes T1 structural, resting-state, and task-based fMRI data from 43 college students, capturing brain activity during tasks involving sharing news and assessing its accuracy. Additionally, participants' trait and rating data were collected to explore individual differences in brain structure, intrinsic functional states, and responses to fake and true news. This dataset could inform future studies on misinformation, offering deeper insights into the neural and psychological aspects of fake news. An overview of the data acquisition, cleaning, and sharing procedures is presented.
Affiliation(s)
- Wanting Chen: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China; Faculty of Psychology, Southwest University, Chongqing, China
- Jia Shi: Faculty of Psychology, Southwest University, Chongqing, China
- Qinghua He: Faculty of Psychology, Southwest University, Chongqing, China; Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China; Southwest University Branch, Collaborative Innovation Center of Assessment toward Basic Education Quality at Beijing Normal University, Chongqing, China
14. Winters M, Christie S, Melchinger H, Iddrisu I, Al Hassan H, Ewart E, Mosley L, Alhassan R, Shani N, Nyamuame D, Lepage C, Thomson A, Atif AN, Omer SB. Debunking COVID-19 vaccine misinformation with an audio drama in Ghana, a randomized control trial. Sci Rep 2025; 15:8955. PMID: 40089600; PMCID: PMC11910525; DOI: 10.1038/s41598-025-92731-0.
Abstract
Misinformation about COVID-19 vaccines has hampered their uptake worldwide. In Ghana, a belief that COVID-19 vaccines affect fertility is prevalent and difficult to counter. UNICEF Ghana co-produced a context-driven, behavioral science-based audio drama ('A shot of love') that aimed to debunk this misinformation narrative. In a randomized controlled trial, 13,000 young adults who had previously interacted with UNICEF's Agoo platform were randomized to either control (audio about nutrition) or intervention (audio drama debunking the COVID-19 misinformation). We found that the intervention had a strong protective effect against belief in misinformation, both directly after listening to the audio drama (adjusted Odds Ratio (aOR) 0.45, 95% Confidence Interval (CI) 0.34-0.59) as well as at the one-month follow-up (aOR 0.66, 95% CI 0.49-0.91). Similarly, the intervention had a strong effect on perceived safety of the COVID-19 vaccines directly after listening to the audio drama (aOR 1.56, 95% CI 1.22-2.00) and at one-month follow-up (aOR 1.53, 95% CI 1.13-2.07). Overall, our behavioral science-based, context-driven audio drama was effective in reducing the strength of belief in COVID-19 vaccine misinformation and increasing the perceived safety of the vaccines in Ghana.
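Adjusted odds ratios (aORs) like those reported here come from logistic regression with covariates. A minimal sketch of the computation in Python; the simulated data frame, the covariates, and the effect sizes are assumptions for illustration, not the trial's data or analysis code:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "intervention": rng.integers(0, 2, n),   # 1 = audio drama arm (assumed)
        "age": rng.normal(25, 4, n),
        "female": rng.integers(0, 2, n),
    })
    # Simulate a protective intervention effect on misinformation belief
    logit_p = -0.5 - 0.8 * df["intervention"] + 0.01 * (df["age"] - 25)
    df["misbelief"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    model = smf.logit("misbelief ~ intervention + age + female", data=df).fit(disp=0)
    aor = np.exp(model.params["intervention"])          # adjusted odds ratio
    ci_low, ci_high = np.exp(model.conf_int().loc["intervention"])
    print(f"aOR = {aor:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")

An aOR below 1 (as in the trial's misinformation-belief outcome) indicates lower odds of the outcome in the intervention arm after adjusting for the covariates.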
Affiliation(s)
- Maike Winters: Yale Institute for Global Health, Yale University, New Haven, CT, USA
- Sarah Christie: Yale School of Public Health, Yale University, New Haven, CT, USA
- Hannah Melchinger: Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Saad B Omer: Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
15. Albtoush ES, Gan KH, Alrababa SAA. Fake news detection: state-of-the-art review and advances with attention to Arabic language aspects. PeerJ Comput Sci 2025; 11:e2693. PMID: 40134874; PMCID: PMC11935763; DOI: 10.7717/peerj-cs.2693.
Abstract
The proliferation of fake news has become a significant threat, influencing individuals, institutions, and societies at large. This issue has been exacerbated by the pervasive integration of social media into daily life, directly shaping opinions, trends, and even the economies of nations. Social media platforms have struggled to mitigate the effects of fake news, relying primarily on traditional methods based on human expertise and knowledge. Consequently, machine learning (ML) and deep learning (DL) techniques now play a critical role in distinguishing fake news, necessitating their extensive deployment to counter the rapid spread of misinformation across all languages, particularly Arabic. Detecting fake news in Arabic presents unique challenges, including complex grammar, diverse dialects, and the scarcity of annotated datasets, along with comparatively little research on fake news detection relative to English. This study provides a comprehensive review of fake news, examining its types, domains, characteristics, life cycle, and detection approaches. It further explores recent advancements in research leveraging ML, DL, and transformer-based techniques for fake news detection, with special attention to Arabic. The review delves into Arabic-specific pre-processing techniques, methodologies tailored for fake news detection in the language, and the datasets employed in these studies. Additionally, it outlines future research directions aimed at developing more effective and robust strategies to address the challenge of fake news detection in Arabic content.
Affiliation(s)
- Keng Hoon Gan: School of Computer Sciences, Universiti Sains Malaysia, Gelugor, Malaysia
16. Gander P, Szita K, Falck A, Lowe R. Memory of Fictional Information: A Theoretical Framework. Perspectives on Psychological Science 2025; 20:308-324. PMID: 37916977; PMCID: PMC11881525; DOI: 10.1177/17456916231202500.
Abstract
Much of the information people encounter in everyday life is not factual; it originates from fictional sources, such as movies, novels, and video games, and from direct experience such as pretense, role-playing, and everyday conversation. Despite the recent increase in research on fiction, there is no theoretical account of how memory of fictional information is related to other types of memory or of which mechanisms allow people to separate fact and fiction in memory. We present a theoretical framework that places memory of fiction in relation to other cognitive phenomena as a distinct construct and argue that it is an essential component for any general theory of human memory. We show how fictionality can be integrated in an existing memory model by extending Rubin's dimensional conceptual memory model. By this means, our model can account for explicit and implicit memory of fictional information of events, places, characters, and objects. Further, we propose a set of mechanisms involving various degrees of complexity and levels of conscious processing that mostly keep fact and fiction separated but also allow information from fiction to influence real-world attitudes and beliefs: content-based reasoning, source monitoring, and an associative link from the memory to the concept of fiction.
Affiliation(s)
- Pierre Gander: Department of Applied Information Technology, University of Gothenburg
- Kata Szita: Trinity Long Room Hub Arts & Humanities Research Institute, Trinity College Dublin; ADAPT Centre of Excellence for AI-Driven Digital Content Technology, Trinity College Dublin
- Andreas Falck: Department of Special Needs Education, University of Oslo
- Robert Lowe: Department of Applied Information Technology, University of Gothenburg
17. Lemaire M, Ye S, Le Stanc L, Borst G, Cassotti M. The development of media truth discernment and fake news detection is related to the development of reasoning during adolescence. Sci Rep 2025; 15:6854. PMID: 40011547; PMCID: PMC11865587; DOI: 10.1038/s41598-025-90427-z.
Abstract
The spread of online fake news is emerging as a major threat to human society and democracy. Previous studies have investigated media truth discernment among adults but not among adolescents. Adolescents might face a greater risk of believing fake news, particularly fake news shared via social media, because of their vulnerabilities in terms of reasoning. In the present study, we investigated (1) the development of media truth discernment and the illusory truth effect from adolescence to adulthood and (2) whether the development of media truth discernment and of the illusory truth effect is related to the development of reasoning ability. To this end, we recruited 432 adolescents aged 11 to 14 years as well as 132 adults. Participants were asked to rate the perceived accuracy of both real and fake news headlines, and were exposed to half of the news items before entering the rating phase. Finally, participants completed the Cognitive Reflection Test (CRT). Media truth discernment (i.e., the difference between participants' ratings of fake and real news) developed linearly with increasing age, and participants rated familiarized headlines as more accurate than novel headlines at all ages (i.e., the illusory truth effect). Finally, the development of media truth discernment (but not of the illusory truth effect) was related to the development of reasoning ability with increasing age. Our findings highlight the urgent need to improve logical thinking among adolescents to help them detect fake news online.
Affiliation(s)
- Marine Lemaire: Université Paris Cité, LaPsyDÉ, CNRS, Paris, F-75005, France
- Steeven Ye: Université Paris Cité, LaPsyDÉ, CNRS, Paris, F-75005, France; IPSOS France, Global Science Organisation, Paris, France
- Lorna Le Stanc: Université Paris Cité, LaPsyDÉ, CNRS, Paris, F-75005, France
- Grégoire Borst: Université Paris Cité, LaPsyDÉ, CNRS, Paris, F-75005, France; Institut Universitaire de France, Paris, F-75005, France
18. Shirasuna M, Kagawa R, Honda H. Pause before action: Waiting short time as a simple and resource-rational boost. Sci Rep 2025; 15:4362. PMID: 39910115; PMCID: PMC11799143; DOI: 10.1038/s41598-025-87119-z.
Abstract
Many workers today engage in straightforward judgment tasks, increasing the need for interventions to improve accuracy. We propose a resource-rational and psychohygienic intervention, "wait short time", which introduces a brief pause before displaying alternatives. This pause works as a harmonious triad: it clears the mind of prior judgment bias, restores present attention, and prepares the mind for future judgments, all without additional instructions. Within a resource-rationality framework, prolonged thinking carries cognitive costs (e.g., irritation, cognitive conflict) because humans' cognitive resources are limited. Therefore, there should be an appropriately short thinking time that achieves higher accuracy with minimal workload. We investigated the effectiveness of the proposed intervention both theoretically and empirically. Computer simulations demonstrated that, under assumptions of limited cognitive resources, there was an optimal time at the early stages for maximizing total benefits. The results of the behavioral experiment were consistent with the theoretical findings: providing a waiting time (1 s or 2.5 s) improved judgment accuracy, but cognitive conflict increased over time, and an unnecessarily long wait (2.5 s) induced more subjective irritation. Consequently, an appropriate time (1 s) could enhance judgment accuracy with less workload. We discuss the implications and limitations of the proposed intervention.
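The trade-off that such simulations formalize, saturating accuracy gains against time-growing cognitive costs, can be sketched in a few lines; the functional forms and parameters below are illustrative assumptions, not the authors' model:

    import numpy as np

    t = np.linspace(0, 5, 501)          # waiting time in seconds

    benefit = 1 - np.exp(-1.5 * t)      # accuracy gain saturates with time
    cost = 0.25 * t                     # workload/irritation grows with time
    net = benefit - cost                # total benefit under limited resources

    t_opt = t[np.argmax(net)]
    print(f"optimal pause = {t_opt:.2f} s, net benefit = {net.max():.3f}")

Because the benefit curve flattens while the cost keeps growing, the optimum falls at an early, short waiting time, matching the empirical pattern that 1 s helped while 2.5 s added irritation without further gains.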
Affiliation(s)
- Masaru Shirasuna: Faculty of Psychology, Otemon Gakuin University, 2-1-15, Nishiai, Ibaraki-Shi, Osaka, 567-8502, Japan; Faculty of Informatics, Shizuoka University, 3-5-1, Johoku, Chuo-Ku, Hamamatsu-Shi, Shizuoka, 432-8011, Japan
- Rina Kagawa: Institute of Medicine, University of Tsukuba, 1-1-1 Tennoudai, Tsukuba-Shi, Ibaraki, 305-8575, Japan
- Hidehito Honda: Faculty of Psychology, Otemon Gakuin University, 2-1-15, Nishiai, Ibaraki-Shi, Osaka, 567-8502, Japan
19. Pecile G, Di Marco N, Cinelli M, Quattrociocchi W. Mapping the global election landscape on social media in 2024. PLoS One 2025; 20:e0316271. PMID: 39908218; PMCID: PMC11798462; DOI: 10.1371/journal.pone.0316271.
Abstract
In 2024, a significant portion of the global population will participate in elections, creating an opportunity to analyze how information spreads and how users behave on social media. This study examines the media landscape on Facebook by analyzing posts from political parties and major news outlets in Europe, Mexico, and India. By identifying key topics and measuring public engagement, we uncover patterns in political discourse. Using Principal Component Analysis, we explore how these topics are related and distinguish trends in audience interaction. Our findings show how certain topics resonate differently across political groups, providing insights into the relationship between media content, political ideology, and user engagement during elections.
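A minimal sketch of the PCA step described here, applied to a random stand-in engagement matrix; treating rows as pages (parties or outlets) and columns as per-topic engagement shares is an assumption for illustration, not the study's actual data layout:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)
    engagement = rng.dirichlet(np.ones(8), size=60)  # 60 pages x 8 topic shares

    pca = PCA(n_components=2)
    coords = pca.fit_transform(engagement)           # pages in component space

    print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
    # Loadings show which topics co-vary along each component:
    print("component 1 loadings:", pca.components_[0].round(2))

Pages that cluster in the component space share similar topic-engagement profiles, which is how relationships between topics and audience interaction patterns can be read off the projection.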
Affiliation(s)
- Giulio Pecile: Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Niccolò Di Marco: Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Matteo Cinelli: Department of Computer Science, Sapienza University of Rome, Rome, Italy
20. Wang J, Zhai Y, Shahzad F. Mapping the terrain of social media misinformation: A scientometric exploration of global research. Acta Psychol (Amst) 2025; 252:104691. PMID: 39765143; DOI: 10.1016/j.actpsy.2025.104691.
Abstract
The rise of social media has enabled unrestricted information sharing, regardless of its accuracy. Unfortunately, this has also resulted in the widespread dissemination of misinformation. This study provides a comprehensive scientometric analysis under the PRISMA paradigm to chart the research trajectory of misinformation on social media in the current digital age. In this study, 3724 publications on social media misinformation from the Web of Science between January 2010 and February 2024 were analyzed scientometrically using CiteSpace software. The findings reveal a sharp increase in annual publication output starting from 2015. The United States of America and China have made more significant contributions in publication volume and global collaborations than other nations. The top five keywords by frequency are social media, fake news, information, misinformation, and news. Going beyond a brief review of existing articles, this study provides an exhaustive review of annual scientific research output, journals, countries, institutions, contributors, highly cited papers, and keywords in social media misinformation research. The developmental stages of social media misinformation research are charted, current hot topics are discussed, and avenues for future research are suggested.
Affiliation(s)
- Jian Wang: College of Economics and Management, Zhengzhou University of Light Industry, Zhengzhou, China
- Yujia Zhai: College of Economics and Management, Zhengzhou University of Light Industry, Zhengzhou, China
- Fakhar Shahzad: Research Institute of Business Analytics and Supply Chain Management, College of Management, Shenzhen University, Shenzhen, China
21. Battista F, Lanciano T, Curci A. A Survey on the Criteria Used to Judge (Fake) News in Italian Population. Brain Behav 2025; 15:e70315. PMID: 39935047; PMCID: PMC11813807; DOI: 10.1002/brb3.70315.
Abstract
INTRODUCTION Fake news detection falls within the field of deception detection and, consequently, can be problematic because there is no consensus on which cues increase detection accuracy and because people's ability to detect deception is poor. METHODS The present study investigated the criteria used by the general population to establish whether a given news item is true or fake by surveying a sample of the Italian population. We recruited 329 participants, who answered questions about the criteria they use to conclude that a given news item is true. The same questions were also asked to investigate the criteria used for fake news judgments. RESULTS AND CONCLUSION Our results showed that, overall, people use similar criteria (e.g., reliability of the source and presence of scientific references) to conclude that news is true versus fake, although their use rates differ for true and fake news.
Affiliation(s)
- Fabiana Battista: Department of Education, Psychology, Communication, University of Bari "Aldo Moro", Bari, Italy
- Tiziana Lanciano: Department of Education, Psychology, Communication, University of Bari "Aldo Moro", Bari, Italy
- Antonietta Curci: Department of Education, Psychology, Communication, University of Bari "Aldo Moro", Bari, Italy
22. Li J, Yang X. Does exposure necessarily lead to misbelief? A meta-analysis of susceptibility to health misinformation. Public Understanding of Science 2025; 34:222-242. PMID: 39104361; DOI: 10.1177/09636625241266150.
Abstract
A meta-analysis was conducted to quantify the overall effect of health misinformation exposure on shaping misbelief. Aggregation of results from 28 individual randomized controlled trial studies (n = 8752) reveals a positive but small average effect, d = 0.28. Moderation analyses suggest that adults who are younger and female tend to develop higher misbelief if exposed to health misinformation. Furthermore, media platform, message falsity, and misbelief measurements also contribute to the exposure effect. These findings offer nuanced but crucial insights into existing misinformation literature, and development of more effective strategies to mitigate the adverse impacts of health misinformation.
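An overall effect like d = 0.28 is obtained by pooling per-study effect sizes weighted by their precision. A minimal DerSimonian-Laird random-effects sketch; the effect sizes and variances below are invented, not the 28 studies' data:

    import numpy as np

    d = np.array([0.10, 0.35, 0.22, 0.41, 0.18])  # per-study Cohen's d (invented)
    v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])  # per-study sampling variances

    w = 1 / v                                      # fixed-effect weights
    d_fe = (w * d).sum() / w.sum()
    Q = (w * (d - d_fe) ** 2).sum()                # heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)        # between-study variance

    w_re = 1 / (v + tau2)                          # random-effects weights
    d_re = (w_re * d).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    print(f"pooled d = {d_re:.2f} "
          f"(95% CI {d_re - 1.96 * se:.2f} to {d_re + 1.96 * se:.2f})")

Moderation analyses of the kind reported here then test whether the pooled effect differs across study-level subgroups (e.g., participant age or media platform).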
23. Remiro M, Jorge OS, Lotto M, Oliveira TM, Machado MAAM, Cruvinel T. Facebook users' engagement with dental caries misinformation in Brazilian Portuguese. Ciência & Saúde Coletiva 2025; 30:e06202023. PMID: 39936671; DOI: 10.1590/1413-81232025302.06202023.
Abstract
This study analyzed dental caries-related Facebook posts in Brazilian Portuguese to identify misinformation and predict user interaction factors. A sample of 500 posts published between August 2016 and August 2021 was obtained via CrowdTangle. Two independent, calibrated investigators (intraclass correlation coefficients from 0.80 to 0.98) characterized the posts by time of publication, author's profile, sentiment, aim of content, motivation, and facticity. Most posts (90.2%) originated from Brazil, and they were predominantly shared by business profiles (94.2%). Approximately 67.2% of these posts focused on preventive dental issues, driven by noncommercial interests in 88.8% of cases. Misinformation was present in 39.6% of the posts, particularly those with a positive sentiment and commercial motivation. Business profiles and positive sentiment were identified as predictive factors for higher post engagement. These findings highlight a significant proportion of dental caries-related posts containing misinformation, especially when associated with positive emotions and commercial motivation.
Affiliation(s)
- All authors: Departamento de Odontopediatria, Ortodontia e Saúde Coletiva, Faculdade de Odontologia de Bauru, Universidade de São Paulo, Alameda Dr. Octávio Pinheiro Brisolla, 9-75, Vila Universitária, 17012-901, Bauru, SP, Brasil
24. Gawronski B, Nahon LS, Ng NL. Debunking Three Myths About Misinformation. Current Directions in Psychological Science 2025; 34:36-42. PMID: 39950192; PMCID: PMC11813693; DOI: 10.1177/09637214241280907.
Abstract
Recent years have seen a surge in research on why people fall for misinformation and what can be done about it. Drawing on a framework that conceptualizes truth judgments of true and false information as a signal-detection problem, the current article identifies three inaccurate assumptions in the public and scientific discourse about misinformation: (1) People are bad at discerning true from false information, (2) partisan bias is not a driving force in judgments of misinformation, and (3) gullibility to false information is the main factor underlying inaccurate beliefs. Counter to these assumptions, we argue that (1) people are quite good at discerning true from false information, (2) partisan bias in responses to true and false information is pervasive and strong, and (3) skepticism against belief-incongruent true information is much more pronounced than gullibility to belief-congruent false information. These conclusions have significant implications for person-centered misinformation interventions to tackle inaccurate beliefs.
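For readers unfamiliar with the signal-detection framing this article builds on, the standard equal-variance formulation can be stated compactly. The equations below are the textbook version rather than anything taken from the article itself; z denotes the inverse of the standard normal cumulative distribution function, and HR/FAR are assumed labels for the two response rates.

```latex
% Standard signal-detection indices for truth judgments (textbook notation):
%   HR  = P(respond "true" | true item)    (hit rate)
%   FAR = P(respond "true" | false item)   (false-alarm rate)
\begin{align*}
  d' &= z(\mathrm{HR}) - z(\mathrm{FAR})
     && \text{discernment: how well true and false items are separated} \\
  c  &= -\tfrac{1}{2}\bigl[z(\mathrm{HR}) + z(\mathrm{FAR})\bigr]
     && \text{response bias: } c > 0 \text{ skepticism, } c < 0 \text{ gullibility}
\end{align*}
```

On this reading, the article's three arguments translate roughly into claims that d' is substantial, that partisan identity shifts both indices, and that for belief-incongruent content the criterion c sits on the skeptical rather than the gullible side.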
Collapse
Affiliation(s)
| | - Lea S. Nahon
- Department of Psychology, University of Texas at Austin
| | - Nyx L. Ng
- Department of Psychology, University of Texas at Austin
| |
Collapse
|
25
|
Smith F, Simchon A, Holford D, Lewandowsky S. Inoculation reduces social media engagement with affectively polarized content in the UK and US. COMMUNICATIONS PSYCHOLOGY 2025; 3:11. [PMID: 39865178 PMCID: PMC11769841 DOI: 10.1038/s44271-025-00189-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/09/2024] [Accepted: 01/10/2025] [Indexed: 01/28/2025]
Abstract
Hyper-partisan content generated and distributed on social media has garnered millions of exposures across platforms, often allowing malevolent actors to influence and disrupt democracies. The spread of this content is facilitated by real users engaging with it on platforms. The current study tests the efficacy of an 'inoculation' intervention via six online survey-based experiments in the UK and US. Experiments 1-3 (total N = 3276) found that the inoculation significantly reduced self-reported engagement with polarising stimuli. However, Experiments 4-6 (total N = 1878) found no effects on participants' self-produced written text discussing the topic. The implications of these findings are discussed in the context of the literature on polarisation and previous interventions to reduce engagement with disinformation.
Collapse
Affiliation(s)
- Fintan Smith
- School of Psychological Science, University of Bristol, Bristol, UK
- YouGov PLC, London, UK
| | - Almog Simchon
- School of Psychological Science, University of Bristol, Bristol, UK
- Department of Psychology, Ben-Gurion University of the Negev, Beer Sheva, Israel
| | - Dawn Holford
- School of Psychological Science, University of Bristol, Bristol, UK
| | - Stephan Lewandowsky
- School of Psychological Science, University of Bristol, Bristol, UK.
- Department of Psychology, University of Potsdam, Potsdam, Germany.
- School of Psychological Science, University of Western Australia, Crawley, WA, Australia.
| |
Collapse
|
26
|
Wang J, Wang X, Yu A. Tackling misinformation in mobile social networks: a BERT-LSTM approach for enhancing digital literacy. Sci Rep 2025; 15:1118. [PMID: 39774143 PMCID: PMC11707353 DOI: 10.1038/s41598-025-85308-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2024] [Accepted: 01/01/2025] [Indexed: 01/11/2025] Open
Abstract
The rapid proliferation of mobile social networks has significantly accelerated the dissemination of misinformation, posing serious risks to social stability, public health, and democratic processes. Early detection of misinformation is essential yet challenging, particularly in contexts where initial content propagation lacks user feedback and engagement data. This study presents a novel hybrid model that combines Bidirectional Encoder Representations from Transformers (BERT) with Long Short-Term Memory (LSTM) networks to enhance the detection of misinformation using only textual content. Extensive evaluations revealed that the BERT-LSTM model achieved an accuracy of 93.51%, a recall of 91.96%, and an F1 score of 92.73% in identifying misinformation. A controlled user study with 100 participants demonstrated the model's effectiveness as an educational tool, with the experimental group achieving 89.4% accuracy in misinformation detection compared to 74.2% in the control group, while showing increased confidence levels and reduced decision-making time. Beyond its technical efficacy, the model exhibits significant potential in fostering critical thinking skills necessary for digital literacy. The findings underscore the transformative potential of advanced AI techniques in addressing the challenges of misinformation in the digital age.
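The abstract names the architecture but does not reproduce the paper's code; the sketch below shows one common way such a BERT-LSTM text classifier is assembled in PyTorch with Hugging Face Transformers. The backbone name, LSTM width, and binary output head are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a BERT-LSTM misinformation classifier (assumed:
# 'bert-base-uncased' backbone, 128-unit BiLSTM, binary output).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertLstmClassifier(nn.Module):
    def __init__(self, backbone="bert-base-uncased", lstm_hidden=128, n_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(backbone)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT, re-encoded sequentially by the LSTM.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.lstm(hidden)
        # Concatenate the final forward/backward LSTM states, then classify.
        pooled = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Example headline to score"], return_tensors="pt",
                  padding=True, truncation=True)
logits = BertLstmClassifier()(batch["input_ids"], batch["attention_mask"])
```

The design matches the abstract's emphasis on text-only early detection: no engagement or propagation features enter the model, only the headline/post text.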
Collapse
Affiliation(s)
- Jun Wang
- School of Management Science and Engineering, Nanjing University of Information Science and Technology, Nanjing, China
- School of Artificial Intelligence, Nanjing Vocational College of Information Technology, Nanjing, China
| | - Xiulai Wang
- School of Management Science and Engineering, Nanjing University of Information Science and Technology, Nanjing, China
| | - Airong Yu
- The Army Engineering University of PLA, Nanjing, 211117, Jiangsu, China.
| |
Collapse
|
27
|
Kteily NS, Brandt MJ. Ideology: Psychological Similarities and Differences Across the Ideological Spectrum Reexamined. Annu Rev Psychol 2025; 76:501-529. [PMID: 39481018 DOI: 10.1146/annurev-psych-020124-115253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2024]
Abstract
A key debate in the psychology of ideology is whether leftists and rightists are psychologically similar or different. A long-standing view holds that left-wing and right-wing people are meaningfully different from one another across a whole host of basic personality and cognitive features. Scholars have recently pushed back, suggesting that left-wing and right-wing people are more psychologically similar than distinct. We review evidence regarding the psychological profiles of left-wing and right-wing people across a wide variety of domains, including their dispositions (values, personality, cognitive rigidity, threat sensitivity, and authoritarianism), information processing (motivated reasoning and susceptibility to misinformation), and their interpersonal perceptions and behaviors (empathy, prejudice, stereotyping, and violence). Our review paints a nuanced picture: people across the ideological divide are much more similar than scholars sometimes appreciate. And yet, they differ, to varying degrees, in their personality, values, and (perhaps most importantly) in the groups and causes they prioritize, with important implications for downstream attitudes and behavior in the world.
Collapse
Affiliation(s)
- Nour S Kteily
- Kellogg School of Management, Northwestern University, Evanston, Illinois, USA;
| | - Mark J Brandt
- Department of Psychology, Michigan State University, East Lansing, Michigan, USA
| |
Collapse
|
28
|
Ditto PH, Celniker JB, Siddiqi SS, Güngör M, Relihan DP. Partisan Bias in Political Judgment. Annu Rev Psychol 2025; 76:717-740. [PMID: 39237099 DOI: 10.1146/annurev-psych-030424-122723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/07/2024]
Abstract
This article reviews empirical data demonstrating robust ingroup favoritism in political judgment. Partisans display systematic tendencies to seek out, believe, and remember information that supports their political beliefs and affinities. However, the psychological drivers of partisan favoritism have been vigorously debated, as has its consistency with rational inference. We characterize decades-long debates over whether such tendencies violate normative standards of rationality, focusing on the phenomenon of motivated reasoning. In light of evidence that both motivational and cognitive factors contribute to partisan bias, we advocate for a descriptive approach to partisan bias research. Rather than adjudicating the (ir)rationality of partisan favoritism, future research should prioritize the identification and measurement of its predictors and clarify the cognitive mechanisms underlying motivated political reasoning. Ultimately, we argue that political judgment is best evaluated by a standard of ecological rationality based on its practical implications for individual well-being and functional democratic governance.
Collapse
Affiliation(s)
- Peter H Ditto
- Department of Psychological Science, University of California, Irvine, California, USA;
| | - Jared B Celniker
- Philosophy, Cognition, and Culture Lab, Arizona State University, Tempe, Arizona, USA
| | - Shiri Spitz Siddiqi
- Department of Psychological Science, University of California, Irvine, California, USA;
| | - Mertcan Güngör
- Department of Psychological Science, University of California, Irvine, California, USA;
| | - Daniel P Relihan
- Department of Psychological Science, University of California, Irvine, California, USA;
| |
Collapse
|
29
|
Lühring J, Shetty A, Koschmieder C, Garcia D, Waldherr A, Metzler H. Emotions in misinformation studies: distinguishing affective state from emotional response and misinformation recognition from acceptance. Cogn Res Princ Implic 2024; 9:82. [PMID: 39692779 DOI: 10.1186/s41235-024-00607-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2024] [Accepted: 12/03/2024] [Indexed: 12/19/2024] Open
Abstract
Prior studies indicate that emotions, particularly high-arousal emotions, may elicit rapid intuitive thinking, thereby decreasing the ability to recognize misinformation. Yet, few studies have distinguished prior affective states from emotional reactions to false news, which could influence belief in falsehoods in different ways. Extending a study by Martel et al. (Cognit Res: Principles Implic 5: 1-20, 2020), we conducted a pre-registered online survey experiment in Austria (N = 422), investigating associations of emotions and discernment of false and real news related to COVID-19. We found no associations of prior affective state with discernment, but observed higher anger and less joy in response to false compared to real news. Exploratory analyses, including automated analyses of open-ended text responses, suggested that anger arose for different reasons in different people depending on their prior beliefs. In our educated and left-leaning sample, higher anger was often related to recognizing the misinformation as such, rather than accepting the false claims. We conclude that studies need to distinguish between prior affective state and emotional response to misinformation and consider individuals' prior beliefs as determinants of emotions.
Collapse
Affiliation(s)
- Jula Lühring
- Department of Communication, University of Vienna, Vienna, Austria.
- Complexity Science Hub, Metternichgasse 8, 1030, Vienna, Austria.
| | - Apeksha Shetty
- Department of Communication, University of Vienna, Vienna, Austria.
- Complexity Science Hub, Metternichgasse 8, 1030, Vienna, Austria.
| | - Corinna Koschmieder
- Institute of Psychology, University of Graz, Graz, Austria
- Center for Research Support, University College for Teacher Education, Graz, Austria
| | - David Garcia
- Complexity Science Hub, Metternichgasse 8, 1030, Vienna, Austria
- Department of Politics and Public Administration, University of Konstanz, Konstanz, Germany
- Institute of Interactive Systems and Data Science, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
| | - Annie Waldherr
- Department of Communication, University of Vienna, Vienna, Austria
| | - Hannah Metzler
- Complexity Science Hub, Metternichgasse 8, 1030, Vienna, Austria
- Center for Medical Data Science, Medical University of Vienna, Vienna, Austria
- Institute for Globally Distributed Open Research and Education, Vienna, Austria
| |
Collapse
|
30
|
DeVerna MR, Yan HY, Yang KC, Menczer F. Fact-checking information from large language models can decrease headline discernment. Proc Natl Acad Sci U S A 2024; 121:e2322823121. [PMID: 39630865 DOI: 10.1073/pnas.2322823121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2024] [Accepted: 11/04/2024] [Indexed: 12/07/2024] Open
Abstract
Fact checking can be an effective strategy against misinformation, but its implementation at scale is impeded by the overwhelming volume of information online. Recent AI language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Here, we investigate the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and sharing intent of, political news headlines in a preregistered randomized controlled experiment. Although the LLM accurately identifies most false headlines (90%), we find that this information does not significantly improve participants' ability to discern headline accuracy or share accurate news. In contrast, viewing human-generated fact checks enhances discernment in both cases. Subsequent analysis reveals that the AI fact-checker is harmful in specific cases: it decreases beliefs in true headlines that it mislabels as false and increases beliefs in false headlines that it is unsure about. On the positive side, AI fact-checking information increases the sharing intent for correctly labeled true headlines. When participants are given the option to view LLM fact checks and choose to do so, they are significantly more likely to share both true and false news but only more likely to believe false headlines. Our findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences.
Collapse
Affiliation(s)
- Matthew R DeVerna
- Observatory on Social Media, Indiana University, Bloomington, IN 47408
| | - Harry Yaojun Yan
- Observatory on Social Media, Indiana University, Bloomington, IN 47408
- Stanford Social Media Lab, Stanford University, Stanford, CA 94305
| | - Kai-Cheng Yang
- Observatory on Social Media, Indiana University, Bloomington, IN 47408
- Network Science Institute, Northeastern University, Boston, MA 02115
| | - Filippo Menczer
- Observatory on Social Media, Indiana University, Bloomington, IN 47408
| |
Collapse
|
31
|
Tanzer M, Campbell C, Saunders R, Booker T, Luyten P, Fonagy P. The role of epistemic trust and epistemic disruption in vaccine hesitancy, conspiracy thinking and the capacity to identify fake news. PLOS GLOBAL PUBLIC HEALTH 2024; 4:e0003941. [PMID: 39630644 PMCID: PMC11616851 DOI: 10.1371/journal.pgph.0003941] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 10/17/2024] [Indexed: 12/07/2024]
Abstract
Epistemic trust, defined as the readiness to regard knowledge communicated by another agent as significant, relevant to the self, and generalizable to other contexts, has recently been applied to the field of developmental psychopathology as a potential risk factor for psychopathology. The work described here sought to investigate how the vulnerability engendered by disruptions in epistemic trust may impact not only psychological resilience and interpersonal processes but also aspects of more general social functioning. We undertook two studies to examine the role of epistemic trust in determining the capacity to recognise fake/real news and susceptibility to conspiracy thinking, both in general and in relation to COVID-19. Measuring three epistemic dispositions (trusting, mistrusting, and credulous) in two studies (study 1, n = 705; study 2, n = 502), we found that Credulity was associated with an inability to discriminate between fake/real news. We also found that both Mistrust and Credulity mediated the relationship between exposure to childhood adversity and difficulty in distinguishing between fake/real news, although the effect sizes were small. Finally, Mistrust and Credulity were associated with general and COVID-19-related conspiracy beliefs and vaccine hesitancy. We discuss the implications of these findings for our understanding of fake news and conspiracy thinking.
Collapse
Affiliation(s)
- Michal Tanzer
- Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom
| | - Chloe Campbell
- Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom
| | - Rob Saunders
- Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom
| | - Thomas Booker
- Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom
| | - Patrick Luyten
- Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom
- Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
| | - Peter Fonagy
- Research Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom
| |
Collapse
|
32
|
Béchard B, Gramaccia JA, Gagnon D, Laouan-Sidi EA, Dubé È, Ouimet M, de Hemptinne D, Tremblay S. The Resilience of Attitude Toward Vaccination: Web-Based Randomized Controlled Trial on the Processing of Misinformation. JMIR Form Res 2024; 8:e52871. [PMID: 39413215 DOI: 10.2196/52871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 09/18/2024] [Accepted: 09/23/2024] [Indexed: 10/18/2024] Open
Abstract
BACKGROUND Before the COVID-19 pandemic, it was already recognized that internet-based misinformation and disinformation could influence individuals to refuse or delay vaccination for themselves, their families, or their children. Reinformation, which refers to hyperpartisan and ideologically biased content, can propagate polarizing messages on vaccines, thereby contributing to vaccine hesitancy even if it is not outright disinformation. OBJECTIVE This study aimed to evaluate the impact of reinformation on vaccine hesitancy. Specifically, the goal was to investigate how misinformation presented in the style and layout of a news article could influence the perceived tentativeness (credibility) of COVID-19 vaccine information and confidence in COVID-19 vaccination. METHODS We conducted a web-based randomized controlled trial by recruiting English-speaking Canadians aged 18 years and older from across Canada through the Qualtrics (Silver Lake) paid opt-in panel system. Participants were randomly assigned to 1 of 4 distinct versions of a news article on COVID-19 vaccines, each featuring variations in writing style and presentation layout. After reading the news article, participants self-assessed the tentativeness of the information provided, their confidence in COVID-19 vaccines, and their attitude toward vaccination in general. RESULTS The survey included 537 participants, with 12 excluded for not meeting the task completion time. The final sample comprised 525 participants distributed about equally across the 4 news article versions. Chi-square analyses revealed a statistically significant association between general attitude toward vaccination and the perceived tentativeness of the information about COVID-19 vaccines included in the news article (χ²(1)=37.8, P<.001). The effect size was small to moderate, with Cramér's V=0.27. An interaction was found between vaccine attitude and writing style (χ²(1)=6.2, P=.01), with a small effect size, Cramér's V=0.11. In addition, a Pearson correlation revealed a significant moderate to strong correlation between perceived tentativeness and confidence in COVID-19 vaccination, r(523)=0.48, P<.001. The coefficient of determination (r²) was 0.23, indicating that 23% of the variance in perceived tentativeness was explained by confidence in COVID-19 vaccines. In comparing participants exposed to a journalistic-style news article with those exposed to an ideologically biased article, Cohen's d was calculated to be 0.38, indicating a small to medium effect size for the difference in perceived tentativeness between these groups. CONCLUSIONS Exposure to a news article conveying misinformation may not be sufficient to change an individual's level of vaccine hesitancy. The study reveals that the predominant factor in shaping individuals' perceptions of COVID-19 vaccines is their attitude toward vaccination in general. This attitude also moderates the influence of writing style on perceived tentativeness; the stronger one's opposition to vaccines, the less pronounced the impact of writing style on perceived tentativeness. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR2-10.2196/41012.
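As a reference for the effect sizes reported above, the following sketch computes a chi-square test with Cramér's V, a Pearson correlation with r², and Cohen's d in Python. The counts and scores are invented stand-ins, not the trial's data.

```python
# Toy reproduction of the reported effect-size computations (invented data).
import numpy as np
from scipy import stats

# Chi-square test of association, with Cramer's V as the effect size.
table = np.array([[120, 80],    # hypothetical attitude x tentativeness counts
                  [60, 140]])
chi2, p, dof, expected = stats.chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Pearson correlation and coefficient of determination.
rng = np.random.default_rng(0)
x = rng.normal(size=525)
y = 0.5 * x + rng.normal(size=525)
r, p_r = stats.pearsonr(x, y)
r_squared = r ** 2

# Cohen's d for two independent, equal-sized groups (pooled SD).
a, b = rng.normal(0.0, 1.0, 130), rng.normal(0.4, 1.0, 130)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"V = {cramers_v:.2f}, r^2 = {r_squared:.2f}, d = {cohens_d:.2f}")
```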
Collapse
Affiliation(s)
- Benoît Béchard
- School of Psychology, Université Laval, Québec, QC, Canada
| | - Julie A Gramaccia
- Department of Communication, University of Ottawa, Ottawa, ON, Canada
| | | | | | - Ève Dubé
- Department of Anthropology, Université Laval, Québec, QC, Canada
| | - Mathieu Ouimet
- Department of Political Science, Université Laval, Québec, QC, Canada
| | | | | |
Collapse
|
33
|
Crum J, Spencer C, Doherty E, Richardson E, Sherman S, Hays AW, Saxena N, Niemeyer RE, Anderson AP, Čeko M, Hirshfield L. Misinformation research needs ecological validity. Nat Hum Behav 2024; 8:2268-2271. [PMID: 39375544 DOI: 10.1038/s41562-024-02015-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/09/2024]
Affiliation(s)
- James Crum
- Institute of Cognitive Science, University of Colorado, Boulder, CO, USA.
| | - Cara Spencer
- Institute of Cognitive Science, University of Colorado, Boulder, CO, USA
- Department of Computer Science, University of Colorado, Boulder, CO, USA
| | - Emily Doherty
- Institute of Cognitive Science, University of Colorado, Boulder, CO, USA
- Department of Computer Science, University of Colorado, Boulder, CO, USA
| | - Erin Richardson
- Smead Aerospace Engineering Sciences, University of Colorado, Boulder, CO, USA
| | - Sage Sherman
- Smead Aerospace Engineering Sciences, University of Colorado, Boulder, CO, USA
| | - Amy W Hays
- Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA
| | - Nitesh Saxena
- Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA
| | | | - Allison P Anderson
- Smead Aerospace Engineering Sciences, University of Colorado, Boulder, CO, USA
| | - Marta Čeko
- Institute of Cognitive Science, University of Colorado, Boulder, CO, USA
| | - Leanne Hirshfield
- Institute of Cognitive Science, University of Colorado, Boulder, CO, USA
- Department of Computer Science, University of Colorado, Boulder, CO, USA
| |
Collapse
|
34
|
Pennycook G, Berinsky AJ, Bhargava P, Lin H, Cole R, Goldberg B, Lewandowsky S, Rand DG. Inoculation and accuracy prompting increase accuracy discernment in combination but not alone. Nat Hum Behav 2024; 8:2330-2341. [PMID: 39496772 DOI: 10.1038/s41562-024-02023-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Accepted: 09/20/2024] [Indexed: 11/06/2024]
Abstract
Misinformation is a major focus of intervention efforts. Psychological inoculation, an intervention intended to help people identify manipulation techniques, is being adopted at scale around the globe. Yet the efficacy of this approach for increasing belief accuracy remains unclear, as prior work uses synthetic materials that do not contain claims of truth. To address this issue, we conducted five studies with 7,286 online participants using a set of news headlines based on real-world true/false content in which we systematically varied the presence or absence of emotional manipulation. Although an emotional manipulation inoculation did help participants identify emotional manipulation, there was no improvement in participants' ability to tell truth from falsehood. However, when the inoculation was paired with an intervention that draws people's attention to accuracy, the combined intervention did successfully improve truth discernment (by increasing belief in true content). These results provide evidence for synergy between popular misinformation interventions.
Collapse
Affiliation(s)
- Gordon Pennycook
- Department of Psychology, Cornell University, Ithaca, NY, USA.
- Hill/Levene Schools of Business, University of Regina, Regina, Saskatchewan, Canada.
| | - Adam J Berinsky
- Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Puneet Bhargava
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Marketing, Wharton School, University of Pennsylvania, Philadelphia, PA, USA
| | - Hause Lin
- Hill/Levene Schools of Business, University of Regina, Regina, Saskatchewan, Canada
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | | | - Stephan Lewandowsky
- School of Psychological Science, University of Bristol, Bristol, UK
- Department of Psychology, University of Potsdam, Potsdam, Germany
- School of Psychological Science, University of Western Australia, Crawley, Western Australia, Australia
| | - David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
35
|
Ward JK, Cortaredona S, Touzet H, Gauna F, Peretti-Watel P. Explaining Political Differences in Attitudes to Vaccines in France: Partisan Cues, Disenchantment with Politics, and Political Sophistication. JOURNAL OF HEALTH POLITICS, POLICY AND LAW 2024; 49:961-988. [PMID: 38836412 DOI: 10.1215/03616878-11373758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2024]
Abstract
CONTEXT The role of political identities in determining attitudes to vaccines has attracted a lot of attention in the last decade. Explanations have tended to focus on the influence of party representatives on their sympathizers (partisan cues). METHODS Four representative samples of the French adult population completed online questionnaires between July 2021 and May 2022 (N = 9,177). Bivariate and multivariate analyses were performed to test whether partisan differences in attitudes to vaccines are best explained by partisan cues or by parties' differences in propensity to attract people who distrust the actors involved in vaccination policies. FINDINGS People who feel close to parties on the far left, parties on the far right, and green parties are more vaccine-hesitant. The authors found a small effect of partisan cues and a much stronger effect of trust. More importantly, they show that the more politically sophisticated are less vaccine-hesitant and that the nonpartisan are the largest and most vaccine-hesitant group. CONCLUSIONS The literature on vaccine attitudes has focused on the case of the United States, but turning attention toward countries where disenchantment with politics is more marked helps researchers better understand the different ways trust, partisanship, and political sophistication can affect attitudes to vaccines.
Collapse
Affiliation(s)
- Jeremy K Ward
- French National Institute of Health and Medical Research
| | | | - Hugo Touzet
- French National Center for Scientific Research
| | | | | |
Collapse
|
36
|
Campbell C, Kumpasoğlu GB, Fonagy P. Mentalizing, Epistemic Trust, and the Active Ingredients of Psychotherapy. Psychodyn Psychiatry 2024; 52:435-451. [PMID: 39679701 DOI: 10.1521/pdps.2024.52.4.435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2024]
Abstract
This article explores the implications of epistemic trust within the mentalizing model of psychopathology and psychotherapy, emphasizing the role of the restoration of epistemic trust in therapeutic settings. At the core of this exploration is the developmental theory of mentalizing, which posits that an individual's ability to understand mental states-both their own and others'-is cultivated through early caregiver interactions. The article expands on this concept by reviewing and integrating evolutionary theories suggesting that humans have evolved a unique sensitivity to teaching and learning through ostensive cues, enhancing our capacity for cultural transmission and cooperation. However, adversities such as trauma or neglect can disrupt this developmental trajectory, leading to epistemic disruption, where individuals struggle to engage with or learn from social experiences effectively. This disruption can manifest in psychological disorders, where mentalizing failures are associated with difficulties in social functioning and in maintaining relationships. The article proposes that psychotherapeutic approaches can effectively address these disruptions, and it outlines three key aspects of communication that unfold in psychotherapeutic interventions. It discusses how the effectiveness of these interventions may hinge on the reestablishment of epistemic trust, enabling patients to reengage with their social environments constructively and adaptively.
Collapse
Affiliation(s)
- Chloe Campbell
- Research Department of Clinical, Educational and Health Psychology, University College London, UK
| | - Güler Beril Kumpasoğlu
- Research Department of Clinical, Educational and Health Psychology, University College London, UK; Department of Psychology, Ankara University, Ankara, Turkey
| | - Peter Fonagy
- Research Department of Clinical, Educational and Health Psychology, University College London, UK
| |
Collapse
|
37
|
Garry M, Chan WM, Foster J, Henkel LA. Large language models (LLMs) and the institutionalization of misinformation. Trends Cogn Sci 2024; 28:1078-1088. [PMID: 39393958 DOI: 10.1016/j.tics.2024.08.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Revised: 08/17/2024] [Accepted: 08/24/2024] [Indexed: 10/13/2024]
Abstract
Large language models (LLMs), such as ChatGPT, flood the Internet with true and false information, crafted and delivered with techniques that psychological science suggests will encourage people to think that information is true. What's more, as people feed this misinformation back into the Internet, emerging LLMs will adopt it and feed it back into other models. Such a scenario means we could lose access to information that helps us tell what is real from unreal, that is, to do 'reality monitoring.' If that happens, misinformation will be the new foundation we use to plan, to make decisions, and to vote. We will lose trust in our institutions and each other.
Collapse
Affiliation(s)
- Maryanne Garry
- Psychology, The University of Waikato, Hamilton, New Zealand.
| | - Way Ming Chan
- Psychology, The University of Waikato, Hamilton, New Zealand
| | - Jeffrey Foster
- Cybersecurity Studies, Macquarie University, Sydney, Australia
| | - Linda A Henkel
- Psychological and Brain Sciences, Fairfield University, Fairfield, CT, USA
| |
Collapse
|
38
|
Bell R, Buchner A. Question framing affects accurate-inaccurate discrimination in responses to sharing questions, but not in responses to accuracy questions. Sci Rep 2024; 14:29005. [PMID: 39578568 PMCID: PMC11584869 DOI: 10.1038/s41598-024-80296-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2024] [Accepted: 11/18/2024] [Indexed: 11/24/2024] Open
Abstract
Previous research suggests that even when people are capable of judging to the best of their knowledge whether claims are accurate or inaccurate, they do not sufficiently discriminate between accurate and inaccurate information when asked to consider whether they would share stories on social media. However, question framing ("To the best of your knowledge…", "Would you consider…?") differed between the questions about accuracy and the questions about sharing. Here we examine the effects of question framing on responses to accuracy questions and responses to sharing questions. The framing of accuracy questions had no effect on accurate-inaccurate discrimination. In contrast, accurate-inaccurate discrimination in response to sharing questions increased when participants were asked to respond, to the best of their knowledge, whether they would share claims compared to when they were asked whether they would consider sharing stories. At a theoretical level, the findings support the inattention-based account, according to which contextual cues shifting the focus toward accuracy can enhance accurate-inaccurate discrimination in sharing responses. At a methodological level, these findings suggest that researchers should carefully attend to the verbal framing of questions about sharing information on social media, as the framing may significantly influence participants' focus on accuracy.
Collapse
Affiliation(s)
- Raoul Bell
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany.
| | - Axel Buchner
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
| |
Collapse
|
39
|
Sultan M, Tump AN, Ehmann N, Lorenz-Spreen P, Hertwig R, Gollwitzer A, Kurvers RHJM. Susceptibility to online misinformation: A systematic meta-analysis of demographic and psychological factors. Proc Natl Acad Sci U S A 2024; 121:e2409329121. [PMID: 39531500 PMCID: PMC11588074 DOI: 10.1073/pnas.2409329121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2024] [Accepted: 09/09/2024] [Indexed: 11/16/2024] Open
Abstract
Nearly five billion people use and receive news through social media, and there is widespread concern about the negative consequences of misinformation on social media (e.g., election interference, vaccine hesitancy). Despite a burgeoning body of research on misinformation, it remains largely unclear who is susceptible to misinformation and why. To address this, we conducted a systematic individual participant data meta-analysis covering 256,337 unique choices made by 11,561 US-based participants across 31 experiments. Our meta-analysis reveals the impact of key demographic and psychological factors on online misinformation veracity judgments. We also disentangle the ability to discern between true and false news (discrimination ability) from response bias, that is, the tendency to label news as either true (true-news bias) or false (false-news bias). Across all studies, participants' accuracy was well above chance for both true (68.51%) and false (67.24%) news headlines. We find that older age, higher analytical thinking skills, and identifying as a Democrat are associated with higher discrimination ability. Additionally, older age and higher analytical thinking skills are associated with a false-news bias (caution). In contrast, ideological congruency (alignment of participants' ideology with news), motivated reflection (higher analytical thinking skills being associated with a greater congruency effect), and self-reported familiarity with news are associated with a true-news bias (naïvety). We also find that experiments on MTurk show higher discrimination ability than those on Lucid. Displaying sources alongside news headlines is associated with improved discrimination ability, with Republicans benefiting more from source display. Our results provide critical insights that can help inform the design of targeted interventions.
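To make the discrimination/response-bias decomposition concrete, the snippet below applies standard signal-detection formulas to the average accuracies quoted in the abstract. This is an illustrative back-of-envelope calculation, not the authors' meta-analytic estimator, which is fit to trial-level data.

```python
# Back-of-envelope SDT decomposition of the averages quoted above.
from scipy.stats import norm

acc_true, acc_false = 0.6851, 0.6724      # reported average accuracies
hit_rate = acc_true                       # P(rate "true" | true headline)
false_alarm_rate = 1.0 - acc_false        # P(rate "true" | false headline)

d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)            # discrimination
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate)) # response bias
# criterion > 0 would indicate a false-news bias (caution),
# criterion < 0 a true-news bias (naivety); here it is close to zero.
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")   # roughly d' = 0.93, c = -0.02
```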
Collapse
Affiliation(s)
- Mubashir Sultan
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
- Department of Psychology, Humboldt University of Berlin, Berlin 12489, Germany
| | - Alan N. Tump
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
- Exzellenzcluster Science of Intelligence, Technical University of Berlin, Berlin 10587, Germany
| | - Nina Ehmann
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
- Department of Psychology, University of Konstanz, Konstanz 78457, Germany
| | - Philipp Lorenz-Spreen
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
- Center Synergy of Systems and Center for Scalable Data Analytics and Artificial Intelligence, TUD Dresden University of Technology, Dresden 01069, Germany
| | - Ralph Hertwig
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
| | - Anton Gollwitzer
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
- Department of Leadership and Organizational Behaviour, BI Norwegian Business School, Oslo 0484, Norway
| | - Ralf H. J. M. Kurvers
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
- Exzellenzcluster Science of Intelligence, Technical University of Berlin, Berlin 10587, Germany
| |
Collapse
|
40
|
Yao M, Tian S, Zhong W. Readable and neutral? Reliability of crowdsourced misinformation debunking through linguistic and psycholinguistic cues. Front Psychol 2024; 15:1478176. [PMID: 39606202 PMCID: PMC11600022 DOI: 10.3389/fpsyg.2024.1478176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2024] [Accepted: 10/21/2024] [Indexed: 11/29/2024] Open
Abstract
Background In the face of the proliferation of misinformation during the COVID-19 pandemic, crowdsourced debunking has surfaced as a counter-infodemic measure to complement efforts from professionals and regular individuals. In 2021, X (formerly Twitter) initiated its community-driven fact-checking program, named Community Notes (formerly Birdwatch). This program allows users to create contextual and corrective notes for misleading posts and rate the helpfulness of others' contributions. The effectiveness of the platform has been preliminarily verified, but mixed findings on reliability indicate the need for further research. Objective The study aims to assess the reliability of Community Notes by comparing the readability and language neutrality of helpful and unhelpful notes. Methods A total of 7,705 helpful notes and 2,091 unhelpful notes spanning from January 20, 2021, to May 30, 2023 were collected. Measures of reading ease, analytical thinking, affect and authenticity were derived by means of Wordless and Linguistic Inquiry and Word Count (LIWC). Subsequently, the non-parametric Mann-Whitney U-test was employed to evaluate the differences between the helpful and unhelpful groups. Results Both groups of notes are easy to read with no notable difference. Helpful notes show significantly greater logical thinking, authenticity, and emotional restraint than unhelpful ones. As such, the reliability of Community Notes is validated in terms of readability and neutrality. Nevertheless, the prevalence of prepared, negative and swear language in unhelpful notes indicates the manipulative and abusive attempts on the platform. The wide value range in the unhelpful group and overall limited consensus on note helpfulness also suggest the complex information ecology within the crowdsourced platform, highlighting the necessity of further guidance and management. Conclusion Based on the statistical analysis of the linguistic and psycholinguistic characteristics, the study validated the reliability of Community Notes and identified room for improvement. Future endeavors could explore the psychological motivations underlying volunteering, gaming, or even manipulative behaviors, enhance the crowdsourced debunking system and integrate it with broader efforts in infodemic management.
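A minimal sketch of the comparison described in the Methods above, using SciPy's Mann-Whitney U test on simulated per-note scores; the group sizes mirror the abstract, but the score distributions are invented for the example.

```python
# Mann-Whitney U comparison of helpful vs. unhelpful notes on one
# linguistic measure (simulated scores; group sizes from the abstract).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
helpful = rng.normal(loc=62, scale=10, size=7705)    # e.g., analytic-thinking score
unhelpful = rng.normal(loc=55, scale=14, size=2091)

u_stat, p_value = mannwhitneyu(helpful, unhelpful, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.3g}")
```

The non-parametric test suits this design because LIWC-style scores are often skewed and the two groups differ sharply in size.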
Collapse
Affiliation(s)
- Mengni Yao
- College of Foreign Languages, Nankai University, Tianjin, China
| | - Sha Tian
- School of Foreign Languages, Central South University, Changsha, Hunan, China
| | - Wenming Zhong
- School of Foreign Languages, Central South University, Changsha, Hunan, China
| |
Collapse
|
41
|
Jones DZE, Chandrasekharan E. Measuring Epistemic Trust: Towards a New Lens for Democratic Legitimacy, Misinformation, and Echo Chambers. PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION 2024; 8:1-33. [DOI: 10.1145/3687001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
Abstract
Trust is crucial for the functioning of complex societies, and an important concern for CSCW. Our purpose is to use research from philosophy, social science, and CSCW to provide a novel account of trust in the 'post-truth' era. Testimony, from one speaker to another, underlies many social systems. Epistemic trust, or testimonial credibility, is the likelihood to accept a speaker's claim due to beliefs about their competence or sincerity. Epistemic trust is closely related to several 'pathological epistemic phenomena': democratic (il)legitimacy, the spread of misinformation, and echo chambers. To the best of our knowledge, this theoretical contribution is novel in the field of social computing. We further argue that epistemic trust is no philosophical novelty: it is measurable. Weakly supervised text classification approaches achieve F_1 scores of around 80 to 85 per cent on detecting epistemic distrust. This is also, to the best of our knowledge, a novel task in natural language processing. We measure expressions of epistemic distrust across 954 political communities on Reddit. We find that expressions of epistemic distrust are relatively rare, although there are substantial differences between communities. Conspiratorial communities and those focused on controversial political topics tend to express more distrust. Communities with strong epistemic norms enforced by moderation are likely to express low levels. While we find users to be an important potential source of contagion of epistemic distrust, community norms appear to dominate. It is likely that epistemic trust is more useful as an aggregated risk factor. Finally, we argue that policymakers should be aware of epistemic trust considering their reliance on legitimacy underwritten by testimony.
Collapse
|
42
|
Parker VA, Kehoe E, Lees J, Facciani M, Wilson AE. Alluring or Alarming? The Polarizing Effect of Forbidden Knowledge in Political Discourse. PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN 2024:1461672241288332. [PMID: 39503343 DOI: 10.1177/01461672241288332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2024]
Abstract
"Forbidden knowledge" claims are central to conspiracy theories, yet they have received little systematic study. Forbidden knowledge claims imply that information is censored or suppressed. Theoretically, forbidden knowledge could be alluring or alarming, depending on alignment with recipients' political worldviews. In three studies (N = 2363, two preregistered), we examined censorship claims about (conservative-aligned) controversial COVID-19 topics. In Studies 1a and 2 participants read COVID-19 claims framed as censored or not. Conservatives reported more attraction to and belief in the claims, regardless of censorship condition, while liberals showed decreased interest and belief when information was presented as censored. Study 1b revealed divergent interpretations of suppression motives: liberals assumed censored information was harmful or false, whereas conservatives deemed it valuable and true. In Study 2, conservatives made more critical thinking errors in a vaccine risk reasoning task when information was framed as censored. Findings reveal the polarizing effects of forbidden knowledge frames.
Collapse
Affiliation(s)
- V A Parker
- Kellogg School of Management, Northwestern University, Evanston, IL, USA
| | - E Kehoe
- Wilfrid Laurier University, Waterloo, ON, Canada
| | - J Lees
- University of Notre Dame, Notre Dame, IN, USA
| | - M Facciani
- University of Notre Dame, Notre Dame, IN, USA
| | - A E Wilson
- Wilfrid Laurier University, Waterloo, ON, Canada
| |
Collapse
|
43
|
Borah P, Kim SC, Lorenzano K. Misinformation, Risk Perceptions, and Intention to Seek Information About Masks: The Moderating Roles of Gender and Reflective Judgment. HEALTH COMMUNICATION 2024; 39:3195-3210. [PMID: 38299636 DOI: 10.1080/10410236.2024.2309811] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2024]
Abstract
The current study has three main purposes: to examine 1) the impact of theory-driven corrective messages using individual vs. collective frames on information-seeking intention, 2) the mediating role of risk perceptions, and 3) the moderating roles of reflection and gender. Our findings from a randomized experimental study and Hayes' moderated moderated-mediation model show that collective frames were associated with heightened risk perceptions among women, which in turn led to higher information-seeking intention. The second moderator reveals that people who scored higher on reflection were more willing to seek information. Our findings have critical implications for misinformation research by demonstrating the importance of theoretically driven messages in understanding misperceptions as well as people's information-seeking behavior.
Collapse
Affiliation(s)
- Porismita Borah
- GTZN 224, Edward R. Murrow College of Communication, Washington State University
| | | | - Kyle Lorenzano
- School of Communication, Film, and Media, University of West Georgia
| |
Collapse
|
44
|
Saucier CJ, Ma Z, Montoya JA, Plant A, Suresh S, Robbins CL, Fraser R. Overcoming Health Information Inequities: Valley fever Information Repertoires Among Vulnerable Communities in California. HEALTH COMMUNICATION 2024; 39:2793-2810. [PMID: 38177098 DOI: 10.1080/10410236.2023.2288380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2024]
Abstract
Although Valley fever represents a growing public health challenge for Central and Southern Californian residents, awareness remains severely limited. The California Department of Public Health (CDPH) ran a cross-platform campaign to mitigate this awareness gap and impact prevention behavior. This study evaluates exposure to the CDPH campaign, followed by an examination of the information consumption patterns associated with key health outcomes. Results suggest that the CDPH campaign successfully improved knowledge accuracy, reduced misperceptions, and increased the likelihood of prevention behavior. Using an information repertoire lens revealed a more nuanced account. Most information repertoires positively influenced accurate knowledge retention and prevention behavior compared to those who were not exposed. The most diverse information repertoire, including interpersonal and media channels, was associated with increased knowledge accuracy, affective risk concerns, personal susceptibility, and prevention behavior. However, exposure to this repertoire was also associated with greater misperceptions. In addition, medical professional and radio-based repertoires positively influenced personal susceptibility perceptions. Overall, this research illustrates the importance of examining not only the general outcomes of health campaigns but also the patterns of information acquisition, particularly when working with underserved communities whose health information consumption preferences may not be comprehensively reflected in the literature.
Collapse
Affiliation(s)
| | - Zexin Ma
- Department of Communication, University of Connecticut
| | | | | | - Sapna Suresh
- Department of Medical Social Sciences, Northwestern University
| | - Chris L Robbins
- Department of Medical Social Sciences, Northwestern University
| | | |
Collapse
|
45
|
Lin SY, Chen YC, Chang YH, Lo SH, Chao KM. Text-image multimodal fusion model for enhanced fake news detection. Sci Prog 2024; 107:368504241292685. [PMID: 39440371 PMCID: PMC11500224 DOI: 10.1177/00368504241292685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2024]
Abstract
In the era of rapid internet expansion and technological progress, discerning real from fake news poses a growing challenge, exposing users to potential misinformation. The existing literature primarily focuses on analyzing individual features in fake news, overlooking multimodal feature fusion recognition. Compared to single-modal approaches, multimodal fusion allows for a more comprehensive and enriched capture of information from different data modalities (such as text and images), thereby improving the performance and effectiveness of the model. This study proposes a model using multimodal fusion to identify fake news, aiming to curb misinformation. The framework integrates textual and visual information, using early fusion, joint fusion and late fusion strategies to combine them. The proposed framework processes textual and visual information through data cleaning and feature extraction before classification. Fake news classification is accomplished through a model, achieving accuracy of 85% and 90% in the Gossipcop and Fakeddit datasets, with F1-scores of 90% and 88%, showcasing its performance. The study presents outcomes across different training periods, demonstrating the effectiveness of multimodal fusion in combining text and image recognition for combating fake news. This research contributes significantly to addressing the critical issue of misinformation, emphasizing a comprehensive approach for detection accuracy enhancement.
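The abstract distinguishes early, joint, and late fusion; the sketch below illustrates the early and late variants in PyTorch on pre-extracted features. Feature dimensions and the placeholder heads are assumptions for the example, not the paper's architecture.

```python
# Illustrative early- and late-fusion classifiers over pre-extracted
# text/image features (dimensions are assumed for the example).
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, n_classes=2):
        super().__init__()
        # Early fusion: concatenate modality features before joint layers.
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, text_feat, image_feat):
        return self.head(torch.cat([text_feat, image_feat], dim=-1))

class LateFusion(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, n_classes=2):
        super().__init__()
        # Late fusion: classify each modality separately, then combine decisions.
        self.text_head = nn.Linear(text_dim, n_classes)
        self.image_head = nn.Linear(image_dim, n_classes)

    def forward(self, text_feat, image_feat):
        return (self.text_head(text_feat) + self.image_head(image_feat)) / 2

text_feat, image_feat = torch.randn(4, 768), torch.randn(4, 512)
print(EarlyFusion()(text_feat, image_feat).shape)   # torch.Size([4, 2])
print(LateFusion()(text_feat, image_feat).shape)
```

Joint fusion typically sits between the two, passing intermediate per-modality representations through shared trainable layers before the decision.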
Collapse
Affiliation(s)
- Szu-Yin Lin
- Department of Management Science, National Yang Ming Chiao Tung University, Hsinchu
| | - Yen-Chiu Chen
- Department of Information Management, Chung Hua University, Hsinchu
| | - Yu-Han Chang
- Department of Computer Science and Information Engineering, National Ilan University, Yilan
| | - Shih-Hsin Lo
- Department of Computer Science and Information Engineering, National Ilan University, Yilan
| | | |
Collapse
|
46
|
Stavrova O, Kleinberg B, Evans AM, Ivanović M. Expressions of uncertainty in online science communication hinder information diffusion. PNAS NEXUS 2024; 3:pgae439. [PMID: 39430220 PMCID: PMC11489878 DOI: 10.1093/pnasnexus/pgae439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2024] [Accepted: 09/18/2024] [Indexed: 10/22/2024]
Abstract
Despite the importance of transparent communication of uncertainty surrounding scientific findings, there are concerns that communicating uncertainty might damage the public perception and dissemination of science. Yet, a lack of empirical research on the potential impact of uncertainty communication on the diffusion of scientific findings poses challenges in assessing such claims. We studied the effect of uncertainty in a field study and a controlled experiment. In Study 1, a natural language processing analysis of over 2 million social media (Twitter/X) messages about scientific findings revealed that more uncertain messages were shared less often. Study 2 replicated this pattern using an experimental design where participants were presented with large-language-model (LLM)-generated high- and low-uncertainty messages. These results underscore the role of uncertainty in the dissemination of scientific findings and inform the ongoing debates regarding the benefits and the risks of uncertainty in science communication.
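As a toy illustration of how uncertainty can be scored in message text, the function below counts hedge terms; the word list is invented for the example and is far cruder than the paper's natural language processing measure.

```python
# Crude hedge-term scorer for messages about scientific findings
# (toy word list; the study's actual measure is more sophisticated).
HEDGES = {"may", "might", "could", "suggests", "preliminary",
          "uncertain", "possibly", "tentative"}

def uncertainty_score(text: str) -> float:
    """Fraction of tokens that are hedge terms."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    return sum(t in HEDGES for t in tokens) / max(len(tokens), 1)

print(uncertainty_score("New trial suggests the treatment may possibly help"))
```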
Collapse
Affiliation(s)
- Olga Stavrova
- Department of Psychology, University of Lübeck, Maria-Goeppert-Str. 9a, 23562, Lübeck, Schleswig-Holstein, Germany
- Department of Social Psychology, Tilburg University, Tilburg, North-Brabant, 5000 LE, The Netherlands
| | - Bennett Kleinberg
- Department of Methodology and Statistics, Tilburg University, Tilburg, North-Brabant, 5000 LE, The Netherlands
- Department of Security and Crime Science, University College London, London WC1E 6BT, United Kingdom
| | - Anthony M Evans
- Allstate Corporation, Claims Strategic Design, Chicago, IL 60605, USA
| | | |
Collapse
|
47
|
Marin PM, Lindeman M, Svedholm-Häkkinen AM. Susceptibility to poor arguments: The interplay of cognitive sophistication and attitudes. Mem Cognit 2024; 52:1579-1596. [PMID: 38656632 PMCID: PMC11522166 DOI: 10.3758/s13421-024-01564-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/20/2024] [Indexed: 04/26/2024]
Abstract
Despite everyday argumentation being crucial to human communication and decision-making, the cognitive determinants of argument evaluation are poorly known. This study examined how attitudes and aspects of cognitive sophistication, i.e., thinking styles and scientific literacy, relate to people's acceptance of poorly justified arguments (e.g., unwarranted appeals to naturalness) on controversial topics (e.g., genetically modified organisms (GMOs)). The participants were more accepting of poorly justified arguments that aligned with their attitudes compared to those that opposed their attitudes, and this was true regardless of one's thinking styles or level of scientific literacy. Still, most of the examined aspects of cognitive sophistication were also positively related to fallacy detection. The strongest cognitive predictors of correctly recognizing the fallacies were one's scientific reasoning ability and active open-mindedness. The results thus imply that decreasing misleading attitude effects, and increasing certain aspects of analytic and scientific thinking, could improve argumentation.
Collapse
Affiliation(s)
- Pinja M Marin
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, P.O. Box 21, 00014, Helsinki, Finland.
| | - Marjaana Lindeman
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, P.O. Box 21, 00014, Helsinki, Finland
| | - Annika M Svedholm-Häkkinen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, P.O. Box 21, 00014, Helsinki, Finland
- Tampere Institute for Advanced Study, Tampere University, Tampere, Finland
| |
Collapse
|
48
|
Martel C, Rand DG. Fact-checker warning labels are effective even for those who distrust fact-checkers. Nat Hum Behav 2024; 8:1957-1967. [PMID: 39223352 DOI: 10.1038/s41562-024-01973-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2023] [Accepted: 08/01/2024] [Indexed: 09/04/2024]
Abstract
Warning labels from professional fact-checkers are one of the most widely used interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? Here, in a first correlational study (N = 1,000), we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments (total N = 14,133) in which participants evaluated true and false news posts and were randomized to either see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were, on average, effective at reducing belief in false headlines (27.6% reduction) and sharing of them (24.7% reduction). While warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in false news (12.9% reduction) and sharing of it (16.7% reduction) even among those most distrusting of fact-checkers. These results suggest that fact-checker warning labels are a broadly effective tool for combatting misinformation.
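The headline effect sizes are percent reductions between control and warning-label conditions, pooled across experiments. The sketch below shows one simple sample-size-weighted version of that computation; the per-experiment numbers are invented for illustration, and the paper's actual meta-analysis is more sophisticated.

    # Sketch: sample-size-weighted percent reduction in belief in false headlines
    experiments = [
        # (n, mean belief in false headlines: control, warning-label condition)
        (700, 0.42, 0.30),
        (650, 0.38, 0.29),
        (800, 0.45, 0.33),
    ]

    total_n = sum(n for n, _, _ in experiments)
    pooled_reduction = sum(
        n * (ctrl - warn) / ctrl for n, ctrl, warn in experiments
    ) / total_n
    print(f"Pooled percent reduction in belief: {pooled_reduction:.1%}")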
Collapse
Affiliation(s)
- Cameron Martel
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA.
| | - David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
49
|
Mosleh M, Yang Q, Zaman T, Pennycook G, Rand DG. Differences in misinformation sharing can lead to politically asymmetric sanctions. Nature 2024; 634:609-616. [PMID: 39358507 PMCID: PMC11485227 DOI: 10.1038/s41586-024-07942-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2023] [Accepted: 08/13/2024] [Indexed: 10/04/2024]
Abstract
In response to intense pressure, technology companies have enacted policies to combat misinformation [1-4]. The enforcement of these policies has, however, led to technology companies being regularly accused of political bias [5-7]. We argue that differential sharing of misinformation by people identifying with different political groups [8-15] could lead to political asymmetries in enforcement, even by unbiased policies. We first analysed 9,000 politically active Twitter users during the US 2020 presidential election. Although users estimated to be pro-Trump/conservative were indeed substantially more likely to be suspended than those estimated to be pro-Biden/liberal, users who were pro-Trump/conservative also shared far more links to various sets of low-quality news sites, even when news quality was determined by politically balanced groups of laypeople, or groups of only Republican laypeople, and had higher estimated likelihoods of being bots. We find similar associations between stated or inferred conservatism and low-quality news sharing (on the basis of both expert and politically balanced layperson ratings) in 7 other datasets of sharing from Twitter, Facebook and survey experiments, spanning 2016 to 2023 and including data from 16 different countries. Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected. Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies.
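The core argument, that a politically neutral rule can still produce asymmetric enforcement when sharing behavior differs between groups, can be illustrated with a small simulation. The group sharing rates and suspension threshold below are assumptions for illustration, not the paper's estimates.

    # A neutral rule ("suspend anyone sharing more than K low-quality links")
    # applied identically to two groups with different sharing rates still
    # yields different suspension rates.
    import numpy as np

    rng = np.random.default_rng(42)
    n_users, K = 10_000, 5

    shares_a = rng.poisson(lam=4.0, size=n_users)   # group A shares more (assumed)
    shares_b = rng.poisson(lam=1.5, size=n_users)   # group B shares less (assumed)

    susp_a = np.mean(shares_a > K)
    susp_b = np.mean(shares_b > K)
    print(f"Suspension rate, group A: {susp_a:.1%}; group B: {susp_b:.1%}")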
Collapse
Affiliation(s)
- Mohsen Mosleh
- Oxford Internet Institute, University of Oxford, Oxford, UK
- Management Department, University of Exeter Business School, Exeter, UK
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Qi Yang
- Initiative on the Digital Economy, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Tauhid Zaman
- Yale School of Management, Yale University, New Haven, CT, USA
| | | | - David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Initiative on the Digital Economy, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
| |
Collapse
|
50
|
List JA, Ramirez LM, Seither J, Unda J, Vallejo BH. Critical thinking and misinformation vulnerability: experimental evidence from Colombia. PNAS NEXUS 2024; 3:pgae361. [PMID: 39411082 PMCID: PMC11475746 DOI: 10.1093/pnasnexus/pgae361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2024] [Accepted: 08/07/2024] [Indexed: 10/19/2024]
Abstract
Misinformation represents a serious threat to the societal fabric of modern economies. While skills-based interventions to detect misinformation, such as debunking and prebunking, media literacy, and manipulation resilience, have begun to receive increased attention, evidence on de-biasing interventions and their link with misinformation vulnerability is scarce. We explore the demand for misinformation through the lens of augmenting critical thinking in an online framed field experiment during the 2022 presidential election in Colombia. Data from roughly 2,000 individuals suggest that providing individuals with information about their own biases (obtained through a personality test) has no impact on skepticism toward news, but additionally showing participants a de-biasing video appears to enhance critical thinking, causing subjects to weigh the truthfulness of potential misinformation more carefully.
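A minimal sketch of the two-arm comparison the design implies, bias feedback only versus feedback plus de-biasing video, with simulated skepticism scores; the scale, means, and effect size are assumptions, not the study's data.

    # Simulated two-arm comparison of skepticism scores (Welch's t-test)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    feedback_only  = rng.normal(loc=0.50, scale=0.15, size=1000)  # assumed baseline
    feedback_video = rng.normal(loc=0.55, scale=0.15, size=1000)  # assumed small lift

    t, p = stats.ttest_ind(feedback_video, feedback_only, equal_var=False)
    print(f"Mean diff: {feedback_video.mean() - feedback_only.mean():+.3f} "
          f"(t = {t:.2f}, p = {p:.3g})")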
Collapse
Affiliation(s)
- John A List
- Department of Economics, University of Chicago, Chicago, IL 60637, USA
| | - Lina M Ramirez
- Department of Economics, University of Chicago, Chicago, IL 60637, USA
| | - Julia Seither
- Department of Economics, Universidad del Rosario, Bogota, Cundinamarca 111711, Colombia
| | - Jaime Unda
- Ethos Behavioral Team, Bogota, Cundinamarca 110111, Colombia
| | | |
Collapse
|