1. Budak C, Nyhan B, Rothschild DM, Thorson E, Watts DJ. Misunderstanding the harms of online misinformation. Nature 2024; 630:45-53. PMID: 38840013. DOI: 10.1038/s41586-024-07417-w.
Abstract
The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.
Affiliation(s)
- Ceren Budak
- University of Michigan School of Information, Ann Arbor, MI, USA
- Brendan Nyhan
- Department of Government, Dartmouth College, Hanover, NH, USA
- Emily Thorson
- Maxwell School of Citizenship and Public Affairs, Syracuse University, Syracuse, NY, USA
- Duncan J Watts
- Department of Computer and Information Science, Annenberg School of Communication, and Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA, USA
2. Baribi-Bartov S, Swire-Thompson B, Grinberg N. Supersharers of fake news on Twitter. Science 2024; 384:979-982. PMID: 38815033. DOI: 10.1126/science.adl4435.
Abstract
Governments may have the capacity to flood social media with fake news, but little is known about the use of flooding by ordinary voters. In this work, we identify 2107 registered US voters who account for 80% of the fake news shared on Twitter during the 2020 US presidential election by an entire panel of 664,391 voters. We found that supersharers were important members of the network, reaching a sizable 5.2% of registered voters on the platform. Supersharers had a significant overrepresentation of women, older adults, and registered Republicans. Supersharers' massive volume did not seem automated but was rather generated through manual and persistent retweeting. These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many.
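The "2107 voters who account for 80% of the fake news" figure corresponds to a simple concentration cut over per-user share counts. The sketch below is not the paper's code; the function name, the toy counts, and the greedy largest-first selection rule are illustrative assumptions.

```python
# Minimal sketch: find the smallest set of heavy sharers that jointly
# account for a target fraction (e.g. 80%) of all fake-news shares.
def supersharers(share_counts, coverage=0.80):
    """Return users covering `coverage` of total shares, heaviest first."""
    total = sum(share_counts.values())
    ranked = sorted(share_counts.items(), key=lambda kv: kv[1], reverse=True)
    picked, acc = [], 0
    for user, n in ranked:
        if acc >= coverage * total:
            break
        picked.append(user)
        acc += n
    return picked

counts = {"a": 50, "b": 30, "c": 10, "d": 5, "e": 5}  # toy data
print(supersharers(counts))  # → ['a', 'b']
```

With these toy counts, two of five accounts already reach the 80% threshold, mirroring the paper's finding that a tiny minority dominates the volume.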
Affiliation(s)
- Sahar Baribi-Bartov
- Software and Information Systems Engineering, Ben-Gurion University, Be'er Sheva, Israel
- Briony Swire-Thompson
- Network Science Institute, Department of Political Science, Department of Psychology, Northeastern University, Boston, MA, USA
- Nir Grinberg
- Software and Information Systems Engineering, Ben-Gurion University, Be'er Sheva, Israel
3. Ezzeddine F, Ayoub O, Giordano S, Nogara G, Sbeity I, Ferrara E, Luceri L. Exposing influence campaigns in the age of LLMs: a behavioral-based AI approach to detecting state-sponsored trolls. EPJ Data Science 2023; 12:46. PMID: 37822355. PMCID: PMC10562499. DOI: 10.1140/epjds/s13688-023-00423-4.
Abstract
The detection of state-sponsored trolls operating in influence campaigns on social media is a critical and unsolved challenge for the research community, which has significant implications beyond the online realm. To address this challenge, we propose a new AI-based solution that identifies troll accounts solely through behavioral cues associated with their sequences of sharing activity, encompassing both their actions and the feedback they receive from others. Our approach does not incorporate any textual content shared and consists of two steps: First, we leverage an LSTM-based classifier to determine whether account sequences belong to a state-sponsored troll or an organic, legitimate user. Second, we employ the classified sequences to calculate a metric named the "Troll Score", quantifying the degree to which an account exhibits troll-like behavior. To assess the effectiveness of our method, we examine its performance in the context of the 2016 Russian interference campaign during the U.S. Presidential election. Our experiments yield compelling results, demonstrating that our approach can identify account sequences with an AUC close to 99% and accurately differentiate between Russian trolls and organic users with an AUC of 91%. Notably, our behavioral-based approach holds a significant advantage in the ever-evolving landscape, where textual and linguistic properties can be easily mimicked by Large Language Models (LLMs): In contrast to existing language-based techniques, it relies on more challenging-to-replicate behavioral cues, ensuring greater resilience in identifying influence campaigns, especially given the potential increase in the usage of LLMs for generating inauthentic content. Finally, we assessed the generalizability of our solution to various entities driving different information operations and found promising results that will guide future research.
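The two-step pipeline above (sequence classifier, then an aggregate "Troll Score") can be sketched in a few lines. The abstract does not give the exact formula, so the reading below — the score as the fraction of an account's activity sequences the classifier flags — is an assumption, and the stand-in classifier is a toy replacement for the paper's LSTM.

```python
def troll_score(sequences, classify):
    """Fraction of an account's activity sequences flagged as troll-like.
    `classify` stands in for the paper's LSTM sequence classifier."""
    flags = [classify(seq) for seq in sequences]
    return sum(flags) / len(flags)

# Toy stand-in classifier: flags sequences dominated by retweets ("RT").
fake_lstm = lambda seq: seq.count("RT") / len(seq) > 0.5

account = [["RT", "RT", "reply"], ["RT", "RT", "RT"], ["post", "reply", "post"]]
print(troll_score(account, fake_lstm))  # → 0.6666666666666666
```

The design point the paper argues for is visible even in this sketch: the score depends only on the shape of the sharing activity, never on text, which is what makes it harder for LLM-generated content to evade.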
Affiliation(s)
- Fatima Ezzeddine
- Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Department of Applied Mathematics, Faculty of Science, Lebanese University, Beirut, Lebanon
- Omran Ayoub
- Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Silvia Giordano
- Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Gianluca Nogara
- Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Ihab Sbeity
- Department of Applied Mathematics, Faculty of Science, Lebanese University, Beirut, Lebanon
- Emilio Ferrara
- Information Sciences Institute, Viterbi School of Engineering, University of Southern California, Marina del Rey, CA, USA
- Luca Luceri
- Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Information Sciences Institute, Viterbi School of Engineering, University of Southern California, Marina del Rey, CA, USA
4. Schweitzer S, Dobson KSH, Waytz A. Political Bot Bias in the Perception of Online Discourse. Social Psychological and Personality Science 2023. DOI: 10.1177/19485506231156020.
Abstract
Four nationally representative studies (N = 1,986; three preregistered) find evidence for a bias in how people perceive opposing viewpoints expressed through online discourse. These studies elucidate a political bot bias, in which political partisans (vs. their out-party) are more likely to judge counter-ideological (vs. ideologically consistent) tweets to be from social media bots (vs. humans). Study 1 demonstrates that American Democrats and Republicans are more likely to attribute tweets to bots when those tweets express counter-ideological views. Study 2 demonstrates this bias with actual bot tweets generated by the Russian government and comparable human tweets. Study 3 demonstrates that the bias manifests in the context of real recent elections and is associated with markers of political animosity. Study 4 experimentally demonstrates the consequences of bot attribution for perceptions of online political discourse. Our findings document a consistent bias with implications for political discussion online and political polarization more broadly.
Affiliation(s)
- Adam Waytz
- Northwestern University, Evanston, IL, USA
5. How disinformation operations against Russian opposition leader Alexei Navalny influence the international audience on Twitter. Social Network Analysis and Mining 2022; 12:80. PMID: 35855844. PMCID: PMC9281276. DOI: 10.1007/s13278-022-00908-6.
Abstract
Previous research has dedicated considerable effort to investigating the activities of the Internet Research Agency, a Russia-based troll factory, as well as other information operations. However, those studies mostly focus on the 2016 U.S. presidential election, Brexit, and other major international political events. In this study, we analyze how narratives about a domestic issue in Russia are used by malicious actors to promote harmful discourses globally and to persuade an international audience on Twitter. Using social network analysis and bot detection, we identified bot and troll activities related to Twitter discussions of the Russian opposition leader Alexei Navalny. We also applied the BEND framework to find persuasion maneuvers used by bots in conversations about Navalny and found attempts to manipulate the opinion of the international audience on Twitter. Our findings demonstrate a significant presence of bot activity in information operations against Navalny as one of the leaders of the Russian opposition. We observed how this Russian domestic issue is framed in the context of Russia's confrontation with the West and how it is used to promote hostile narratives against Navalny, the opposition movement, or democratic values. Many of the agents we identified pretend to be English speakers who exhibit hostile attitudes towards Navalny and Western democracies, express skepticism, distort facts, promote distrust of democratic institutions, and spread disinformation and conspiracy theories.
6. Pastor-Galindo J, Gómez Mármol F, Martínez Pérez G. Profiling users and bots in Twitter through social media analysis. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.09.046.
7. Shafiei H, Dadlani A. Detection of fickle trolls in large-scale online social networks. Journal of Big Data 2022; 9:22. PMID: 35223368. PMCID: PMC8857750. DOI: 10.1186/s40537-022-00572-9.
Abstract
Online social networks have attracted billions of active users over the past decade and play an integral role in the everyday life of many people around the world. As such, these platforms are also attractive for misinformation, hoax, and fake-news campaigns, which usually use social trolls and/or social bots for propagation. Detecting so-called social trolls on these platforms is challenging due to their large scale and dynamic nature, where user data is generated and collected at a rate of multi-billion records per hour. In this paper, we focus on fickle trolls, i.e., a special type of trolling activity in which the trolls change their identity frequently to maximize their social relations. This kind of trolling activity can be irritating for users and may also pose a serious threat to their privacy. To the best of our knowledge, this is the first work that introduces mechanisms to detect these trolls. In particular, we discuss and analyze troll detection mechanisms at different scales. We prove that the centralized single-machine detection algorithm runs in O(n³) time, which is too slow to be practical for early troll detection on large-scale social platforms comprising billions of users. We also prove that the streaming approach, where data is fed to the system gradually, is impractical in many real-world scenarios. In light of these shortcomings, we propose a massively parallel detection approach. Rigorous evaluations confirm that our proposed method is at least six times faster than conventional parallel approaches.
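The abstract gives neither the paper's definition of "fickle" nor its parallel algorithm, so the sketch below only illustrates the embarrassingly parallel structure that makes such detection scalable: partition accounts into shards and scan each shard independently. The change-count rule, the threshold, and the toy name histories are all assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def fickle(history, max_changes=2):
    """Flag an account whose screen name changed more than `max_changes` times."""
    changes = sum(a != b for a, b in zip(history, history[1:]))
    return changes > max_changes

def detect_parallel(name_history, workers=4):
    """Partition account ids round-robin into shards and scan them concurrently."""
    ids = list(name_history)
    shards = [ids[i::workers] for i in range(workers)]
    scan = lambda shard: [i for i in shard if fickle(name_history[i])]
    with ThreadPoolExecutor(workers) as pool:
        return sorted(x for part in pool.map(scan, shards) for x in part)

data = {
    1: ["ann", "ann", "ann"],                   # 0 changes
    2: ["bob", "bo8", "b0b", "bob_", "bobby"],  # 4 changes -> flagged
    3: ["cat", "kat", "cat"],                   # 2 changes -> not flagged
}
print(detect_parallel(data))  # → [2]
```

Because each shard is scanned without reference to the others, the same structure maps directly onto the multi-machine setting the paper targets, where a centralized O(n³) scan is infeasible.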
Affiliation(s)
- Hossein Shafiei
- Faculty of Computer Engineering, K. N. Toosi University, Tehran, Iran
- Aresh Dadlani
- School of Engineering and Digital Sciences, Nazarbayev University, Nur-Sultan, Kazakhstan
8. A deep dive into COVID-19-related messages on WhatsApp in Pakistan. Social Network Analysis and Mining 2022; 12:5. PMID: 34804253. PMCID: PMC8590927. DOI: 10.1007/s13278-021-00833-0.
Abstract
The spread of COVID-19 and the lockdowns that followed led to an increase in activity on online social networks. This has resulted in users sharing unfiltered and unreliable information on social networks like WhatsApp, Twitter, and Facebook. In this work, we give an extended overview of how Pakistan's population used public WhatsApp groups to share information related to the pandemic. Our work is based on a major effort to annotate thousands of text- and image-based messages. We explore how information propagates across WhatsApp and the user behavior around it. Specifically, we look at political polarization and its impact on how users from different political parties shared COVID-19-related content. We also examine information dissemination across different social networks (Twitter and WhatsApp) in Pakistan and find no significant bot involvement in spreading misinformation about the pandemic.
9. Hybrid Intelligence Strategies for Identifying, Classifying and Analyzing Political Bots. Social Sciences 2021. DOI: 10.3390/socsci10100357. Open access.
Abstract
Political bots, through astroturfing and other strategies, have become important players in recent elections in several countries. This study aims to provide researchers and the citizenry with the necessary knowledge to design strategies to identify bots and counteract what international organizations have deemed bots’ harmful effects on democracy and, simultaneously, improve automatic detection of them. This study is based on two innovative methodological approaches: (1) dealing with bots using hybrid intelligence (HI), a multidisciplinary perspective that combines artificial intelligence (AI), natural language processing, political science, and communication science, and (2) applying framing theory to political bots. This paper contributes to the literature in the field by (a) applying framing to the analysis of political bots, (b) defining characteristics to identify signs of automation in Spanish, (c) building a Spanish-language bot database, (d) developing a specific classifier for Spanish-language accounts, (e) using HI to detect bots, and (f) developing tools that enable the everyday citizen to identify political bots through framing.
10. Pastor-Galindo J, Zago M, Nespoli P, Bernal SL, Celdran AH, Perez MG, Ruiperez-Valiente JA, Perez GM, Marmol FG. Spotting Political Social Bots in Twitter: A Use Case of the 2019 Spanish General Election. IEEE Transactions on Network and Service Management 2020. DOI: 10.1109/tnsm.2020.3031573.
11. Moreira RCN, Vaz-de-Melo POS, Pappa GL. Elite versus mass polarization on the Brazilian impeachment proceedings of 2016. Social Network Analysis and Mining 2020. DOI: 10.1007/s13278-020-00706-y.
12.

Abstract
A troll is usually defined as somebody who provokes and offends people to make them angry, who wants to dominate any discussion, or who tries to manipulate other people's opinions. The problems caused by such people have increased with the diffusion of social media. On the one hand, press bodies and magazines have therefore begun to address the issue and to write articles about the phenomenon and its related problems; on the other hand, universities and research centres have begun to study the features characterizing trolls and to look for solutions for their identification. This survey introduces the main research dedicated to the description of trolls and to the study and evaluation of methods for their detection.