1. Spence R, Bifulco A, Bradbury P, Martellozzo E, DeMarco J. Content Moderator Mental Health, Secondary Trauma, and Well-being: A Cross-Sectional Study. Cyberpsychol Behav Soc Netw 2024; 27:149-155. PMID: 38153846. DOI: 10.1089/cyber.2023.0298.
Abstract
Content moderators (CMs) analyze and remove offensive or harmful user-generated content that has been uploaded to the internet. Jobs that involve exposure to other people's suffering are associated with raised rates of secondary traumatic stress and mental health problems, yet research establishing baseline psychological symptoms in CMs is lacking. This study used an online survey to explore rates of psychological distress, secondary trauma, and well-being in a sample of CMs, and regression analysis examined how various features of the work affected mental health. Frequency of exposure to distressing content showed a dose-response relationship with psychological distress and secondary trauma, but not with well-being. The results suggested that supportive colleagues and feedback about the importance of the role ameliorated this relationship. Implications for CM working conditions are discussed.
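
The buffering result described above is the kind of effect a moderated regression with an exposure-by-support interaction can test. A minimal sketch, assuming a hypothetical survey export and made-up column names (exposure_freq, social_support, distress) rather than the paper's actual measures:

```python
# Minimal sketch of a moderated regression like the one described above:
# distress regressed on exposure frequency, with an exposure x support
# interaction testing whether supportive colleagues buffer the effect.
# Column and file names are hypothetical, not the paper's.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cm_survey.csv")  # hypothetical survey export

model = smf.ols("distress ~ exposure_freq * social_support", data=df).fit()
print(model.summary())  # a negative interaction term would indicate buffering
```
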
Affiliation(s)
- Ruth Spence: Department of Psychology, Centre for Abuse and Trauma Studies, Middlesex University, London, United Kingdom
- Antonia Bifulco: Department of Psychology, Centre for Abuse and Trauma Studies, Middlesex University, London, United Kingdom
- Paula Bradbury: Department of Criminology, Centre for Abuse and Trauma Studies, Middlesex University, London, United Kingdom
- Elena Martellozzo: Department of Criminology, Centre for Abuse and Trauma Studies, Middlesex University, London, United Kingdom
- Jeffrey DeMarco: Department of Psychology, Centre for Abuse and Trauma Studies, Middlesex University, London, United Kingdom

2. Monti C, Cinelli M, Valensise C, Quattrociocchi W, Starnini M. Online conspiracy communities are more resilient to deplatforming. PNAS Nexus 2023; 2:pgad324. PMID: 37920549. PMCID: PMC10619511. DOI: 10.1093/pnasnexus/pgad324.
Abstract
Online social media foster the creation of active communities around shared narratives. Such communities may turn into incubators for conspiracy theories, some spreading violent messages that could sharpen the debate and potentially harm society. To face these phenomena, most social media platforms have implemented moderation policies, ranging from warning labels to deplatforming, i.e., permanently banning users. Assessing the effectiveness of content moderation is crucial for balancing societal safety with the right to free speech. In this article, we compare the shift in behavior of users affected by the ban of two large communities on Reddit, GreatAwakening and FatPeopleHate, which were dedicated to spreading the QAnon conspiracy and to body-shaming individuals, respectively. Following the ban, both communities partially migrated to Voat, an unmoderated Reddit clone. We estimate how many users migrated, finding that users in the conspiracy community were much more likely to leave Reddit altogether and join Voat. We then quantify the behavioral shift within Reddit and across Reddit and Voat by matching common users. While user activity is generally lower on the new platform, GreatAwakening users who left Reddit entirely maintained a similar level of activity on Voat. Toxicity strongly increased on Voat in both communities. Finally, conspiracy users migrating from Reddit tended to recreate their previous social network on Voat. Our findings suggest that banning conspiracy communities hosting violent content should be carefully designed, as these communities may be more resilient to deplatforming.
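
The matched-user comparison is straightforward to illustrate: join the usernames observed on both platforms and compare per-user activity. A minimal sketch, assuming hypothetical CSV exports and column names rather than the paper's data:

```python
# Sketch of the matched-user comparison described above: join the same
# usernames observed on both platforms and compare per-user activity.
# Files and column names are hypothetical placeholders.
import pandas as pd

reddit = pd.read_csv("reddit_activity.csv")  # columns: user, posts_per_day
voat = pd.read_csv("voat_activity.csv")      # columns: user, posts_per_day

matched = reddit.merge(voat, on="user", suffixes=("_reddit", "_voat"))
print(f"{len(matched)} users matched across platforms")
print("mean activity shift:",
      (matched["posts_per_day_voat"] - matched["posts_per_day_reddit"]).mean())
```
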
Affiliation(s)
- Corrado Monti: CENTAI Institute, Corso Inghilterra 3, Torino (TO) 10138, Italy
- Matteo Cinelli: Department of Computer Science, Sapienza University of Rome, Viale Regina Elena 295, Roma (RM) 00161, Italy
- Carlo Valensise: Centro Ricerche Enrico Fermi, Piazza del Viminale 1, Roma (RM) 00184, Italy
- Walter Quattrociocchi: Department of Computer Science, Sapienza University of Rome, Viale Regina Elena 295, Roma (RM) 00161, Italy
- Michele Starnini: CENTAI Institute, Corso Inghilterra 3, Torino (TO) 10138, Italy; Departament de Fisica, Universitat Politecnica de Catalunya, Campus Nord, Barcelona 08034, Spain

3. Schneider PJ, Rizoiu MA. The effectiveness of moderating harmful online content. Proc Natl Acad Sci U S A 2023; 120:e2307360120. PMID: 37579139. PMCID: PMC10450446. DOI: 10.1073/pnas.2307360120.
Abstract
In 2022, the European Union introduced the Digital Services Act (DSA), new legislation for reporting and moderating harmful content on online social networks. Trusted flaggers are mandated to identify harmful content, which platforms must remove within a set delay (currently 24 h). Here, we analyze the likely effectiveness of the EU-mandated mechanisms for regulating highly viral online content with short half-lives. We deploy self-exciting point processes to determine the relationship between the regulated moderation delay and the likely harm reduction achieved. We find that harm reduction is achievable for the most harmful content, even on fast-paced platforms such as Twitter. Our method estimates moderation effectiveness for a given platform and provides a rule of thumb for selecting content for investigation and flagging, managing flaggers' workload.
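
The self-exciting point-process approach can be illustrated with a toy Hawkes simulation: generate one content cascade and count the share of events that fall after a candidate moderation delay, i.e. the share a removal at that point would avert. This is a sketch with made-up parameters, not the authors' model or estimates:

```python
# Illustrative sketch (not the paper's code): simulate a self-exciting
# (Hawkes) process for one piece of content and measure what fraction of
# its cascade falls after a moderation delay d, i.e. the harm that
# removal at time d would avert. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
mu, alpha, beta, T = 0.5, 0.8, 1.2, 48.0  # base rate, excitation, decay, hours

def simulate_hawkes(mu, alpha, beta, T):
    """Ogata thinning for a Hawkes process with an exponential kernel."""
    events, t = [], 0.0
    while t < T:
        # intensity just after t is an upper bound until the next event
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        if t < T and rng.uniform() < lam_t / lam_bar:  # accept with prob lam/lam_bar
            events.append(t)
    return np.array(events)

events = simulate_hawkes(mu, alpha, beta, T)
for delay in (1, 6, 24):  # candidate moderation delays, in hours
    averted = np.mean(events > delay) if len(events) else 0.0
    print(f"delay {delay:>2}h: ~{averted:.0%} of cascade events averted")
```
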
Affiliation(s)
- Philipp J. Schneider: College of Management of Technology, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
- Marian-Andrei Rizoiu: Faculty of Engineering and Information Technology, The University of Technology Sydney, Ultimo NSW 2007, Australia

4. Wang S, Kim KJ. Content Moderation on Social Media: Does It Matter Who and Why Moderates Hate Speech? Cyberpsychol Behav Soc Netw 2023. PMID: 37140448. DOI: 10.1089/cyber.2022.0158.
Abstract
Artificial intelligence (AI) is increasingly integrated into content moderation to detect and remove hate speech on social media. An online experiment (N = 478) examined how moderation agents (AI vs. human vs. human-AI collaboration) and removal explanations (with vs. without) affect users' perceptions and acceptance of removal decisions for hate speech targeting social groups with certain characteristics, such as religion or sexual orientation. The results showed that individuals exhibit consistent levels of perceived trustworthiness and acceptance of removal decisions regardless of the type of moderation agent. When explanations for the takedown were provided, removal decisions made jointly by humans and AI were perceived as more trustworthy than the same decisions made by humans alone, which increased users' willingness to accept the verdict. However, this moderated mediation effect was significant only when the hate speech targeted Muslims, not homosexuals.
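
The 3 (agent) x 2 (explanation) design lends itself to a factorial analysis as a first step. A minimal sketch with hypothetical data and column names; the paper's moderated-mediation model is more elaborate than this:

```python
# Sketch of a factorial ANOVA for the 3 (agent) x 2 (explanation) design
# described above. Data and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("moderation_experiment.csv")
# agent in {"AI", "human", "human-AI"}; explanation in {"yes", "no"}
model = smf.ols("trust ~ C(agent) * C(explanation)", data=df).fit()
print(anova_lm(model, typ=2))  # interaction row tests agent x explanation
```
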
Affiliation(s)
- Sai Wang: Department of Interactive Media, School of Communication, Hong Kong Baptist University, Kowloon, Hong Kong
- Ki Joon Kim: Department of Media and Communication, City University of Hong Kong, Kowloon, Hong Kong

5. Horta Ribeiro M, Hosseinmardi H, West R, Watts DJ. Deplatforming did not decrease Parler users’ activity on fringe social media. PNAS Nexus 2023; 2:pgad035. PMID: 36959908. PMCID: PMC10029837. DOI: 10.1093/pnasnexus/pgad035.
Abstract
Online platforms have banned (“deplatformed”) influencers, communities, and even entire websites to reduce content deemed harmful. Deplatformed users often migrate to alternative platforms, which raises concerns about the effectiveness of deplatforming. Here, we study the deplatforming of Parler, a fringe social media platform, between January 11 and February 25, 2021, in the aftermath of the US Capitol riot. Using two large panels that capture longitudinal user-level activity across mainstream and fringe social media content (N = 112,705, adjusted to be representative of US desktop and mobile users), we find that other fringe social media, such as Gab and Rumble, prospered after Parler’s deplatforming. Further, overall activity on fringe social media increased while Parler was offline. Using a difference-in-differences analysis (N = 996), we then identify the causal effect of deplatforming on active Parler users, finding that deplatforming increased the probability of daily activity across other fringe social media in early 2021 by 10.9 percentage points (pp) (95% CI [5.9 pp, 15.9 pp]) on desktop devices and by 15.9 pp (95% CI [10.2 pp, 21.7 pp]) on mobile devices, without decreasing activity on fringe social media in general (including Parler). Our results indicate that the isolated deplatforming of a major fringe platform was ineffective at reducing overall user activity on fringe social media.
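
The difference-in-differences estimate reported above corresponds to the interaction of a treatment indicator (active Parler user) with a period indicator (after the ban). A minimal sketch, assuming a hypothetical panel file and column names:

```python
# Sketch of the difference-in-differences setup described above: a
# two-way interaction of treatment (active Parler user) and period
# (after the ban), with errors clustered by user.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("fringe_activity_panel.csv")
# columns: user, active_any_fringe (0/1), parler_user (0/1), post_ban (0/1)
did = smf.ols(
    "active_any_fringe ~ parler_user * post_ban", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["user"]})
print(did.params["parler_user:post_ban"])  # the DiD estimate, in probability units
```
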
Affiliation(s)
- Homa Hosseinmardi: Computational Social Science Lab, University of Pennsylvania, Philadelphia, PA 19104, USA
- Robert West: School of Computer and Communication Sciences, EPFL, 1015 Lausanne, Switzerland

6. Kozyreva A, Herzog SM, Lewandowsky S, Hertwig R, Lorenz-Spreen P, Leiser M, Reifler J. Resolving content moderation dilemmas between free speech and harmful misinformation. Proc Natl Acad Sci U S A 2023; 120:e2210666120. PMID: 36749721. DOI: 10.1073/pnas.2210666120.
Abstract
In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people's judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents' decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
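
Conjoint experiments of this kind are commonly analyzed via average marginal component effects (AMCEs), regressing the removal decision on the randomized attributes with respondent-clustered errors. A minimal sketch with hypothetical attribute and file names, not the paper's code:

```python
# Sketch of a standard conjoint (AMCE) analysis for the experiment above:
# regress the remove/keep decision on the randomized attributes, clustering
# on respondent. Attribute names are hypothetical stand-ins for the design.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conjoint_responses.csv")
amce = smf.ols(
    "remove_post ~ C(topic) + C(consequences) + C(repeated_offense)"
    " + C(partisanship) + C(followers)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(amce.params)  # each coefficient is an AMCE relative to the baseline level
```
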

7. Pilati F, Gallotti R, Sacco PL. The link between reported cases of COVID-19 and the Infodemic Risk Index: A worldwide perspective. Front Sociol 2023; 7:1093354. PMID: 36733979. PMCID: PMC9888028. DOI: 10.3389/fsoc.2022.1093354.
Abstract
In this brief report, we follow the evolution of the COVID-19 Infodemic Risk Index during 2020 and clarify its connection with the epidemic waves, focusing specifically on their co-evolution in Europe, South America, and South-eastern Asia. Using 640 million tweets collected by the Infodemic Observatory and the open-access dataset of worldwide reported COVID-19 cases published by Our World in Data, we analyze the COVID-19 infodemic vs. pandemic co-evolution from January 2020 to December 2020. We find that a characteristic pattern emerges at the global scale: a decrease in misinformation on Twitter as the number of confirmed COVID-19 cases increases. Similar local variations highlight how this pattern could be influenced both by the strong content moderation policy enforced by Twitter after the first pandemic wave and by the phenomenon of selective exposure, which drives users to pick the most visible and reliable news sources available.
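
The co-evolution pattern can be inspected by aligning the two time series and computing a rolling correlation. A minimal sketch, assuming hypothetical file and column names for the risk index and the Our World in Data case counts:

```python
# Sketch of the co-evolution analysis described above: align an
# infodemic-risk series with reported cases and inspect their rolling
# correlation. File and column names are hypothetical.
import pandas as pd

iri = pd.read_csv("infodemic_risk.csv", parse_dates=["date"])  # date, risk_index
cases = pd.read_csv("owid_cases.csv", parse_dates=["date"])    # date, new_cases

merged = iri.merge(cases, on="date").set_index("date").sort_index()
rolling_corr = merged["risk_index"].rolling("28D").corr(merged["new_cases"])
print(rolling_corr.tail())  # sustained negative values match the reported pattern
```
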
Affiliation(s)
- Pier Luigi Sacco: University of Studies G. d'Annunzio Chieti and Pescara, Chieti, Italy

8. Khullar A, Panjal P, Pandey R, Burnwal A, Raj P, Jha AA, Hitesh P, Reddy RJ, Himanshu, Seth A. Experiences with the Introduction of AI-based Tools for Moderation Automation of Voice-based Participatory Media Forum. India HCI 2021; 30-39. PMID: 38274667. PMCID: PMC10810263. DOI: 10.1145/3506469.3506473.
Abstract
Voice-based discussion forums, where users record audio messages that are then published for other users to listen to and comment on, are often moderated to ensure that the published audio is of good quality, relevant, and adherent to the forum's editorial guidelines. There is room for AI-based tools in the moderation process, such as tools to identify and filter out blank or noisy audio, transcribe voice messages with speech recognition, and extract relevant metadata from the transcripts using natural language processing. We design such tools and deploy them within a social enterprise in India that runs several voice-based discussion forums. We present our findings in terms of the time and cost savings made through the introduction of these tools and describe the moderators' feedback on the acceptability of AI-based automation in their workflow. Our work forms a case study in the use of AI to automate routine tasks and can be especially relevant for other researchers and practitioners working with voice-based technologies in developing regions of the world.
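
One of the simplest tools described, filtering out blank audio before human review, can be sketched as an RMS-energy check. The threshold, file layout, and 16-bit PCM assumption below are illustrative, not the deployed system's values:

```python
# Sketch of one moderation aid described above: flagging blank
# (near-silent) audio by RMS energy before it reaches a human moderator.
# Threshold and file layout are assumptions, not the paper's values.
import wave
import numpy as np

def is_blank(path, rms_threshold=0.01):
    """Return True if the recording's RMS energy is below the threshold."""
    with wave.open(path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16) / 32768.0  # assumes 16-bit PCM
    return np.sqrt(np.mean(samples**2)) < rms_threshold

print(is_blank("recording_001.wav"))  # True -> filter out before human review
```
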

9. Chen X, Huang C, Cheng Y. Identifiability, Risk, and Information Credibility in Discussions on Moral/Ethical Violation Topics on Chinese Social Networking Sites. Front Psychol 2020; 11:535605. PMID: 33192777. PMCID: PMC7644537. DOI: 10.3389/fpsyg.2020.535605.
Abstract
One heated argument in recent years concerns whether requiring real names on social media inhibits users' participation in online discussion. The current study explores the impact of identification, perceived anonymity, perceived risk, and information credibility on participation in discussions of moral/ethical violation events on social networking sites (SNS) in China. We constructed a model based on the literature and tested it on a sample of 218 frequent SNS users. The results demonstrate the influence of identification and perceived anonymity: although the relationship between the two factors is negative, both are conducive to participation in discussion of moral/ethical violation topics, and information credibility also has a positive impact. The results also confirmed the significance of risk perception for posting comments about moral/ethical violations. Our results have reference value for identity management and internet governance: policies regarding users' real names on the internet need to take into account the reliability of the identity authentication mechanism, netizens' perceptions of privacy around their identity, and the necessity of guaranteeing content and information reliability online. We also offer some suggestions for future research, with a special emphasis on applicability to different cultures, contexts, and social networking sites.
Affiliation(s)
- Xi Chen: School of Business Administration and Tourism Management, Yunnan University, Kunming, China
- Chenli Huang: School of Business Administration and Tourism Management, Yunnan University, Kunming, China
- Yi Cheng: School of Business Administration and Tourism Management, Yunnan University, Kunming, China

10. Baker SA, Wade M, Walsh MJ. The challenges of responding to misinformation during a pandemic: content moderation and the limitations of the concept of harm. Media International Australia 2020; 177:103-107. PMCID: PMC8263350. DOI: 10.1177/1329878x20951301.
Abstract
Social media have been central in informing people about the COVID-19 pandemic. They influence the ways in which information is perceived, communicated, and shared online, especially with physical distancing measures in place. While these technologies have given people the opportunity to contribute to public discussions about COVID-19, the narratives disseminated on social media have also been characterised by uncertainty, disagreement, and false and misleading advice. Global technology companies have responded to these concerns by introducing new content moderation policies based on the concept of harm to tackle the spread of misinformation and disinformation online. In this essay, we examine some of the key challenges of implementing these policies in real time and at scale, calling for more transparent and nuanced content moderation strategies to increase public trust and the quality of information about the pandemic consumed online.
Affiliation(s)
- Stephanie Alice Baker: City, University of London, Northampton Square, Clerkenwell, London EC1V 0HB, UK

11.

Abstract
At the time of writing (mid-May 2020), mental health charities around the world have experienced an unprecedented surge in demand. At the same time, record-high numbers of people are turning to social media to maintain personal connections due to restrictions on physical movement. But organizations like the mental health charity Mind, and even the UK Government, have expressed concerns about the possible strain on mental health that may come from spending more time online during COVID-19. These concerns are unsurprising, as debates about the link between heavy social media use and mental illness raged long before the pandemic. But our newly heightened reliance on platforms to replace face-to-face communication has created even more pressure for social media companies to strengthen their safety measures and protect their most vulnerable users. To develop and enact these changes, social media companies are reliant on their content moderation workforces, but the COVID-19 pandemic has presented them with two related conundrums: (1) recent changes to content moderation workforces mean platforms are likely to be less safe than they were before the pandemic, and (2) some of the policies designed to make social media platforms safer for people's mental health are no longer possible to enforce. This Social Media + Society: 2K essay addresses these two challenges in depth.
Affiliation(s)
- Ysabel Gerrard: Department of Sociological Studies, The University of Sheffield, Western Bank, Sheffield S10 2TU, UK