1
Tomassi A, Falegnami A, Romano E. Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society. PLoS One 2024; 19:e0303183. [PMID: 38820281] [PMCID: PMC11142451] [DOI: 10.1371/journal.pone.0303183]
Abstract
This paper presents an analysis of information disorder on social media platforms. The study employed methods such as natural language processing, topic modeling, and knowledge-graph building to gain new insights into the phenomenon of fake news and its impact on critical thinking and knowledge management. The analysis focused on four research questions: 1) the distribution of misinformation, disinformation, and malinformation across different platforms; 2) recurring themes in fake news and their visibility; 3) the role of artificial intelligence as an authoritative and/or spreader agent; and 4) strategies for combating information disorder. The role of AI was highlighted both as a tool for fact-checking and for building truthiness-identification bots, and as a potential amplifier of false narratives. Proposed strategies for combating information disorder include improving digital literacy skills and promoting critical thinking among social media users.
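The abstract names its methods only at a high level. As a rough, hypothetical illustration of the topic-modeling step (the paper's actual corpus, preprocessing, and parameter choices are not given here, and `posts` below is invented), a minimal sketch in Python using scikit-learn's LDA:

```python
# Minimal topic-modeling sketch over a toy corpus of social media posts.
# All inputs and parameter choices are illustrative, not the paper's.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "vaccine microchip claim shared by thousands of accounts",
    "fact checkers debunk viral vaccine microchip claim",
    "automated bots amplify election fraud narrative overnight",
]

# Bag-of-words representation, dropping English stop words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)

# Fit a small LDA model; n_components = 2 is an arbitrary choice here.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Show the top words per topic, i.e. the recurring themes.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```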
Affiliation(s)
- Andrea Tomassi
- Engineering Faculty, Uninettuno International Telematic University, Rome, Italy
- Andrea Falegnami
- Engineering Faculty, Uninettuno International Telematic University, Rome, Italy
- Elpidio Romano
- Engineering Faculty, Uninettuno International Telematic University, Rome, Italy
2
Pretus C, Javeed AM, Hughes D, Hackenburg K, Tsakiris M, Vilarroya O, Van Bavel JJ. The Misleading count: an identity-based intervention to counter partisan misinformation sharing. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230040. [PMID: 38244594] [PMCID: PMC10799730] [DOI: 10.1098/rstb.2023.0040]
Abstract
Interventions to counter misinformation are often less effective for polarizing content on social media platforms. We sought to overcome this limitation by testing an identity-based intervention, which aims to promote accuracy by incorporating normative cues directly into the social media user interface. Across three pre-registered experiments in the US (N = 1709) and UK (N = 804), we found that crowdsourcing accuracy judgements by adding a Misleading count (next to the Like count) reduced participants' reported likelihood to share inaccurate information about partisan issues by 25% (compared with a control condition). The Misleading count was also more effective when it reflected in-group norms (from fellow Democrats/Republicans) compared with the norms of general users, though this effect was absent in a less politically polarized context (UK). Moreover, the normative intervention was roughly five times as effective as another popular misinformation intervention (i.e. the accuracy nudge reduced sharing misinformation by 5%). Extreme partisanship did not undermine the effectiveness of the intervention. Our results suggest that identity-based interventions based on the science of social norms can be more effective than identity-neutral alternatives to counter partisan misinformation in politically polarized contexts (e.g. the US). This article is part of the theme issue 'Social norm change: drivers and consequences'.
Affiliation(s)
- Clara Pretus
- Department of Psychobiology and Methodology of Health Sciences, Universitat Autònoma de Barcelona, 08193 Barcelona, Spain
- Center of Conflict Studies and Field Research, ARTIS International, St Michaels, MD 21663, USA
- Ali M. Javeed
- Department of Psychology and Center for Neural Science, New York University, New York, NY 10003, USA
- Diána Hughes
- Department of Psychology and Center for Neural Science, New York University, New York, NY 10003, USA
- Kobi Hackenburg
- Centre for the Politics of Feelings, School of Advanced Study, Royal Holloway, University of London, London WC1E 7HU, UK
- Manos Tsakiris
- Centre for the Politics of Feelings, School of Advanced Study, Royal Holloway, University of London, London WC1E 7HU, UK
- Department of Psychology, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
- Oscar Vilarroya
- Department of Psychiatry and Forensic Medicine, Universitat Autònoma de Barcelona, 08193 Barcelona, Spain
- Jay J. Van Bavel
- Department of Psychology and Center for Neural Science, New York University, New York, NY 10003, USA
3
Van Bavel JJ, Pretus C, Rathje S, Pärnamets P, Vlasceanu M, Knowles ED. The costs of polarizing a pandemic: antecedents, consequences, and lessons. Perspect Psychol Sci 2023:17456916231190395. [PMID: 37811599] [DOI: 10.1177/17456916231190395]
Abstract
Polarization has been rising in the United States of America for the past few decades and now poses a significant, and growing, public-health risk. One of the signature features of the American response to the COVID-19 pandemic has been the degree to which perceptions of risk and willingness to follow public-health recommendations have been politically polarized. Although COVID-19 has proven more lethal than any war or public-health crisis in American history, the deadly consequences of the pandemic were exacerbated by polarization. We review research detailing how every phase of the COVID-19 pandemic has been polarized, including judgments of risk, spatial distancing, mask wearing, and vaccination. We describe the role of political ideology, partisan identity, leadership, misinformation, and mass communication in this public-health crisis. We then assess the overall impact of polarization on infections, illness, and mortality during the pandemic; offer a psychological analysis of key policy questions; and identify a set of future research questions for scholars and policy experts. Our analysis suggests that the catastrophic death toll in the United States was largely preventable and due, in large part, to the polarization of the pandemic. Finally, we discuss implications for public policy to help avoid the same deadly mistakes in future public-health crises.
Affiliation(s)
- Jay J Van Bavel
- Department of Psychology and Center for Neural Science, New York University
- Department of Strategy and Management, Norwegian School of Economics
- Clara Pretus
- Neuroscience Program, Hospital del Mar Research Institute, Barcelona, Spain
4
Chan MPS, Albarracín D. A meta-analysis of correction effects in science-relevant misinformation. Nat Hum Behav 2023; 7:1514-1525. [PMID: 37322236] [DOI: 10.1038/s41562-023-01623-8]
Abstract
Scientifically relevant misinformation, defined as false claims concerning a scientific measurement procedure or scientific evidence, regardless of the author's intent, is illustrated by the fiction that the coronavirus disease 2019 vaccine contained microchips to track citizens. Updating science-relevant misinformation after a correction can be challenging, and little is known about which theoretical factors influence correction success. This meta-analysis examined 205 effect sizes (k, obtained from 74 reports; N = 60,861) and showed that attempts to debunk science-relevant misinformation were, on average, not successful (d = 0.19, P = 0.131, 95% confidence interval −0.06 to 0.43). However, corrections were more successful when the initial science-relevant belief concerned negative topics and domains other than health. Corrections fared better when they were detailed, when recipients were likely familiar with both sides of the issue ahead of the study, and when the issue was not politically polarized.
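For readers unfamiliar with how a pooled estimate such as d = 0.19 is obtained, here is a minimal sketch of standard inverse-variance random-effects pooling (DerSimonian-Laird); the per-study effect sizes below are invented, not the study's data:

```python
# Random-effects meta-analysis sketch (DerSimonian-Laird).
# The per-study values are made up for illustration only.
import numpy as np

d = np.array([0.40, 0.10, -0.05, 0.35, 0.20])  # per-study Cohen's d
v = np.array([0.02, 0.04, 0.03, 0.05, 0.02])   # per-study sampling variances

# Fixed-effect weights and Cochran's Q heterogeneity statistic.
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(d)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights, pooled estimate, and 95% confidence interval.
w_re = 1.0 / (v + tau2)
d_pooled = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {d_pooled:.2f}, "
      f"95% CI [{d_pooled - 1.96 * se:.2f}, {d_pooled + 1.96 * se:.2f}]")
```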
Affiliation(s)
- Man-Pui Sally Chan
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, USA.
- Dolores Albarracín
- Annenberg School for Communication, Annenberg Public Policy Center, School of Arts and Sciences, School of Nursing, Wharton School, University of Pennsylvania, Philadelphia, PA, USA
5
Rathje S, Roozenbeek J, Van Bavel JJ, van der Linden S. Accuracy and social motivations shape judgements of (mis)information. Nat Hum Behav 2023. [PMID: 36879042] [DOI: 10.1038/s41562-023-01540-w]
Abstract
The extent to which belief in (mis)information reflects a lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by about 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the gap in accuracy between conservatives and liberals by 52%. A non-financial accuracy-motivation intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people's judgements of the accuracy of news reflects motivational factors.
Affiliation(s)
- Steve Rathje
- Department of Psychology, University of Cambridge, Cambridge, UK.
- Jon Roozenbeek
- Department of Psychology, University of Cambridge, Cambridge, UK
- Jay J Van Bavel
- Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
6
Abstract
Why do people share misinformation on social media? In this research (N = 2,476), we show that the structure of online sharing built into social platforms is more important than individual deficits in critical reasoning and partisan bias, the commonly cited drivers of misinformation. Due to the reward-based learning systems on social media, users form habits of sharing information that attracts others' attention. Once habits form, information sharing is automatically activated by cues on the platform without users considering response outcomes such as spreading misinformation. As a result of user habits, 30 to 40% of the false news shared in our research was due to the 15% most habitual news sharers. Suggesting that sharing of false news is part of a broader response pattern established by social media platforms, habitual users also shared information that challenged their own political beliefs. Finally, we show that sharing of false news is not an inevitable consequence of user habits: social media sites could be restructured to build habits to share accurate information.
7
Hungarian, lazy, and biased: the role of analytic thinking and partisanship in fake news discernment on a Hungarian representative sample. Sci Rep 2023; 13:178. [PMID: 36604448] [PMCID: PMC9813452] [DOI: 10.1038/s41598-022-26724-8]
Abstract
"Why do people believe blatantly inaccurate news headlines? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology?" These were the questions of Pennycook and Rand (2019), and they are more than actual three years later in Eastern Europe (especially in Hungary) in the light of the rise of populism, and the ongoing war in Ukraine - with the flood of disinformation that follows. In this study, using a representative Hungarian sample (N = 991) we wanted to answer the same questions-moving one step forward and investigating alternative models. We aimed to extend the original research with the examination of digital literacy and source salience on media truth discernment. Most of the observations of Pennycook and Rand were confirmed: people with higher analytic thinking were better at discerning disinformation. However, the results are in line with the synergistic integrative model as partisanship interacted with cognitive reflection: anti-government voters used their analytic capacities to question both concordant and discordant fake news more than pro-government voters. Furthermore, digital literacy increased detection, but source salience did not matter when perceiving disinformation.
8
Sultan M, Tump AN, Geers M, Lorenz-Spreen P, Herzog SM, Kurvers RHJM. Time pressure reduces misinformation discrimination ability but does not alter response bias. Sci Rep 2022; 12:22416. [PMID: 36575232] [PMCID: PMC9794823] [DOI: 10.1038/s41598-022-26209-8]
Abstract
Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people's ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants' ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.
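The abstract's distinction between discrimination ability and response bias comes from signal detection theory. A minimal sketch of that decomposition, with invented hit and false-alarm counts (the study's data and exact model are not reproduced here):

```python
# Signal detection sketch: discrimination ability (d') and response
# bias (criterion c) from veracity judgements of news headlines.
# The counts below are invented for illustration.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

hits = 75                 # true headlines correctly judged "true"
misses = 25               # true headlines judged "false"
false_alarms = 40         # false headlines judged "true"
correct_rejections = 60   # false headlines correctly judged "false"

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

d_prime = z(hit_rate) - z(fa_rate)             # higher = better discrimination
criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # > 0: bias toward answering "false"
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

In the study's terms, time pressure lowered d' while leaving c essentially unchanged.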
Affiliation(s)
- Mubashir Sultan
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin, 14195, Germany.
- Department of Psychology, Humboldt University of Berlin, 12489, Berlin, Germany.
- Alan N Tump
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin, 14195, Germany
- Technical University of Berlin, Exzellenzcluster Science of Intelligence, Berlin, 10587, Germany
- Michael Geers
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin, 14195, Germany
- Department of Psychology, Humboldt University of Berlin, 12489, Berlin, Germany
- Philipp Lorenz-Spreen
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin, 14195, Germany
- Stefan M Herzog
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin, 14195, Germany
- Ralf H J M Kurvers
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin, 14195, Germany
- Technical University of Berlin, Exzellenzcluster Science of Intelligence, Berlin, 10587, Germany
9
Pennycook G. A framework for understanding reasoning errors: from fake news to climate change and beyond. Adv Exp Soc Psychol 2022. [DOI: 10.1016/bs.aesp.2022.11.003]