1
Ryan CD, Henggeler E, Gilbert S, Schaul AJ, Swarthout JT. Exploring the GMO narrative through labeling: strategies, products, and politics. GM Crops Food 2024; 15:51-66. [PMID: 38402595 PMCID: PMC10896172 DOI: 10.1080/21645698.2024.2318027]
Abstract
Labels are influential signals in the marketplace, intended to inform buyers and to eliminate confusion. Despite this, food labels continue to be the subject of debate, none more so than non-GMO (genetically modified organism) labels. This manuscript provides a timeline of the evolution of GMO labels, from the early history of the anti-GMO movement to the current National Bioengineered Food Disclosure Standard in the United States. Using media and market intelligence data collected through Buzzsumo™ and Mintel™, public discourse on GMOs is analyzed in relation to sociopolitical events and to the number of new food products with anti-GMO labels, respectively. Policy document and publication data are collected with Overton™ to illustrate the policy landscape for the GMO topic and how it has changed over time. Analysis of the collective data shows that while social media and policy engagement around the topic of GMOs has diminished over time, the number of new products with a GMO-free designation continues to grow. Although discourse has peaked and since declined, our results suggest that the legacy of the anti-GMO narrative remains firmly embedded in the social psyche, evidenced by the continuing rise of products with a GMO-free designation. Campaigns for GMO food labels to satisfy consumers' right to know were successful, and the perceived need for this information now appears to be self-sustaining.
Affiliation(s)
- Camille D Ryan
- Strategic Insights, Bayer Crop Science Canada, Calgary, Canada
- Samantha Gilbert
- E-Commerce Search and Catalog Analysis, Millipore Sigma, St. Louis, MO, USA
- John T Swarthout
- Regulatory Scientific Affairs, Bayer Crop Science, Chesterfield, MO, USA
2
Twycross R. Assisted dying: principles, possibilities, and practicalities. An English physician's perspective. BMC Palliat Care 2024; 23:99. [PMID: 38609945 PMCID: PMC11015689 DOI: 10.1186/s12904-024-01422-6]
Abstract
It seems probable that some form of medically-assisted dying will become legal in England and Wales in the foreseeable future. Assisted dying Bills are at various stages of preparation in surrounding jurisdictions (Scotland, Republic of Ireland, Isle of Man, Jersey), and activists campaign unceasingly for a change in the law in England and Wales. Media coverage is generally uncritical and supportive, and individual autonomy is seen as the unassailable trump card: 'my life, my death'. However, devising a law which is 'fit for purpose' is not an easy matter. The challenge is to achieve an appropriate balance between compassion and patient autonomy on the one hand, and respect for human life generally and medical autonomy on the other. More people should benefit from a change in the law than be harmed; in relation to medically-assisted dying, this may not be possible. Protecting the vulnerable is a key issue, as is avoiding a negative impact on societal attitudes towards the disabled and frail elderly, particularly those with dementia. This paper compares three existing models of physician-assisted suicide: Switzerland, Oregon (USA), and Victoria (Australia). Vulnerability and autonomy are discussed, and concern is expressed about the biased nature of much of the advocacy for assisted dying, tantamount to disinformation. A 'hidden' danger of assisted dying is noted, namely, increased suffering as more patients decline referral to palliative-hospice care because they fear they will be 'drugged to death'. Finally, suggestions are made for a possible 'least worse' way forward. One solution would be for physician-assisted suicide to become the responsibility of a stand-alone Department for Assisted Dying, overseen by lawyers or judges and operated by technicians. Doctors would be required only to confirm a patient's medical eligibility. Palliative-hospice care should definitely not be involved, and healthcare professionals must have an inviolable right to opt out of involvement. There is also an urgent need to improve the provision of care for all terminally ill patients.
Affiliation(s)
- Robert Twycross
- Emeritus Clinical Reader in Palliative Medicine, Oxford University, Oxford, UK.
- Sir Michael Sobell House, Churchill Hospital, Old Rd, Headington, Oxford, OX3 7LE, UK.
3
Mayo R. Trust or distrust? Neither! The right mindset for confronting disinformation. Curr Opin Psychol 2024; 56:101779. [PMID: 38134524 DOI: 10.1016/j.copsyc.2023.101779]
Abstract
A primary explanation for why individuals believe disinformation is the truth bias, a predisposition to accept information as true. However, this bias is context-dependent, as research shows that rejection becomes the predominant process in a distrust mindset. Consequently, trust and distrust emerge as pivotal factors in addressing disinformation. The current review offers a more nuanced perspective by illustrating that whereas distrust may act as an antidote to the truth bias, it can also paradoxically serve as a catalyst for belief in disinformation. The review concludes that mindsets other than those rooted solely in trust (or distrust), such as an evaluative mindset, may prove to be more effective in detecting and refuting disinformation.
Affiliation(s)
- Ruth Mayo
- The Hebrew University of Jerusalem, Israel.
4
Amazeen MA, Krishna A. Refuting misinformation: Examining theoretical underpinnings of refutational interventions. Curr Opin Psychol 2024; 56:101774. [PMID: 38101246 DOI: 10.1016/j.copsyc.2023.101774]
Abstract
With the proliferation of misinformation have come toolkits that include refutation strategies targeting individuals' beliefs, strategies that can be employed preemptively (prebunking) or reactively (debunking). Whereas the theoretical lineage of prebunking is well established within the literature on inoculation theory, the theoretical underpinning of debunking is not. Recent advances in inoculation theory include fostering resistance to a type of message rather than to the content of a message, as well as applying messages in both prophylactic and therapeutic situations. Theoretically rooted in studies of conceptual change, fresh insights on debunking interventions derive from knowledge revision models, the Misinformation Receptivity Framework, and empirical evidence on the efficacy of narrative correctives and counters.
Affiliation(s)
- Arunima Krishna
- College of Communication, Boston University, Boston, MA, USA
5
Almotairy BM, Abdullah M, Alahmadi DH. Dataset for detecting and characterizing Arab computation propaganda on X. Data Brief 2024; 53:110089. [PMID: 38328292 PMCID: PMC10847467 DOI: 10.1016/j.dib.2024.110089]
Abstract
Arab nations are greatly influenced by computational propaganda, and detecting it has become a trending topic in social media research. Despite all the efforts made, there is still no definitive definition of what constitutes a propagandistic characteristic. Additionally, earlier datasets were acquired and labelled for a specific study but were neglected thereafter. As a result, researchers are unable to assess whether proposed AI detectors can be generalized. There is a lack of real ground truth, either to characterize Arab propagandist behaviours or to evaluate newly proposed detectors. The provided dataset aims to demonstrate the value of characterizing Arab computational propaganda on X (Twitter) and to close this research gap. It was prepared using a scientific approach to guarantee data quality: the propagandist users' data was requested from the X Transparency Center. Although the data released by X identifies propagandist users, the individual tweets were not classified as propaganda or not; propagandists usually mix propaganda and non-propaganda tweets to hide their identities. Therefore, three volunteer journalists were employed to label 2,100 tweets as propaganda or not, and to label each propagandist tweet according to the propaganda technique used. The dataset covers sports and banking issues. As a result, the dataset consists of 16,355,558 tweets with their metadata from propagandist users in 2019, plus the 2,100 labelled propagandist tweets. The dataset helps the research community apply supervised and unsupervised machine learning and deep learning algorithms to classify the credibility of Arab tweets and users. In addition, this paper suggests looking at behaviour rather than content to distinguish propaganda communication; the datasets enable deep non-textual analysis to investigate the main characteristics of Arab computational propaganda on X.
Affiliation(s)
- Bodor Moheel Almotairy
- Department of Information systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Manal Abdullah
- Department of Information systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Dimah Hussein Alahmadi
- Department of Information systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
6
Swire-Thompson B, Johnson S. Cancer: A model topic for misinformation researchers. Curr Opin Psychol 2024; 56:101775. [PMID: 38101247 PMCID: PMC10939853 DOI: 10.1016/j.copsyc.2023.101775]
Abstract
Although cancer might seem like a niche subject, we argue that it is a model topic for misinformation researchers, and an ideal area of application given its importance for society. We first discuss the prevalence of cancer misinformation online and how it has the potential to cause harm. We next examine the financial incentives for those who profit from disinformation dissemination, how people with cancer are a uniquely vulnerable population, and why trust in science and medical professionals is particularly relevant to this topic. We finally discuss how belief in cancer misinformation has clear objective consequences and can be measured with treatment adherence and health outcomes such as mortality. In sum, cancer misinformation could assist the characterization of misinformation beliefs and be used to develop tools to combat misinformation in general.
Affiliation(s)
- Briony Swire-Thompson
- Northeastern University, Network Science Institute, Department of Political Science, Department of Psychology, 177 Huntington Ave, Boston, USA.
- Skyler Johnson
- University of Utah, Huntsman Cancer Institute, 1950 Circle of Hope Dr, Salt Lake City, USA
7
Schüz B, Jones C. [Mis- and disinformation in social media: mitigating risks in digital health communication]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024; 67:300-307. [PMID: 38332143 DOI: 10.1007/s00103-024-03836-2]
Abstract
Misinformation and disinformation in social media have become a challenge for effective public health measures. Here, we examine factors that influence believing and sharing false information, both misinformation and disinformation, at the individual, social, and contextual levels, and discuss intervention possibilities. At the individual level, knowledge deficits, lack of skills, and emotional motivation have been associated with believing in false information. Lower health literacy, a conspiracy mindset, and certain beliefs increase susceptibility to false information. At the social level, the credibility of information sources and social norms influence the sharing of false information. At the contextual level, emotions and the repetition of messages affect belief in and sharing of false information. Interventions at the individual level involve measures to improve knowledge and skills. At the social level, addressing social processes and social norms can reduce the sharing of false information. At the contextual level, regulatory approaches involving social networks are considered an important point of intervention. Social inequalities play an important role in the exposure to and processing of misinformation. It remains unclear to what degree the susceptibility to believing in and sharing misinformation is an individual characteristic and/or context dependent. Complex interventions are required that take multiple influencing factors into account.
Affiliation(s)
- Benjamin Schüz
- Institut für Public Health und Pflegeforschung, Universität Bremen, Grazer Straße 4, 28359, Bremen, Deutschland.
- Leibniz-WissenschaftsCampus Digital Public Health, Bremen, Deutschland.
- Christopher Jones
- Institut für Public Health und Pflegeforschung, Universität Bremen, Grazer Straße 4, 28359, Bremen, Deutschland
- Leibniz-WissenschaftsCampus Digital Public Health, Bremen, Deutschland
- Zentrum für Präventivmedizin und Digitale Gesundheit (CPD), Medizinische Fakultät Mannheim der Universität Heidelberg, Mannheim, Deutschland
8
Blair RA, Gottlieb J, Nyhan B, Paler L, Argote P, Stainfield CJ. Interventions to counter misinformation: Lessons from the Global North and applications to the Global South. Curr Opin Psychol 2024; 55:101732. [PMID: 38070207 DOI: 10.1016/j.copsyc.2023.101732]
Abstract
We synthesize evidence from 176 experimental estimates of 11 interventions intended to combat misinformation in the Global North and Global South, which we classify as informational, educational, sociopsychological, or institutional. Among these, we find the most consistent positive evidence for two informational interventions in both Global North and Global South contexts: inoculation/prebunking and debunking. In a complementary survey of 138 misinformation scholars and practitioners, we find that experts tend to be most optimistic about interventions that have been least widely studied or that have been shown to be mostly ineffective. We provide a searchable database of misinformation randomized controlled trials and suggest avenues for future research to close the gap between expert opinion and academic research.
Affiliation(s)
- Robert A Blair
- Department of Political Science and Watson Institute for International and Public Affairs, Brown University, United States
- Jessica Gottlieb
- Hobby School of Public Affairs, University of Houston, United States
- Brendan Nyhan
- Department of Government, Dartmouth College, United States.
- Laura Paler
- Department of Government, School of Public Affairs, American University, United States
- Pablo Argote
- Department of Political Science and International Relations, University of Southern California, United States
9
Bogart S, Lees J. Meta-perception and misinformation. Curr Opin Psychol 2023; 54:101717. [PMID: 37972526 DOI: 10.1016/j.copsyc.2023.101717]
Abstract
Research on political misperceptions is flourishing across disciplines. The literatures on misinformation susceptibility and on political group meta-perceptions have arisen independently, both seeking to understand how inaccurate social beliefs of the first and second order, respectively, contribute to political polarization. Here we review these literatures and argue for greater integration. We highlight four domains where they intersect: how inaccurate group meta-perceptions may increase misinformation susceptibility, how misinformation may itself convey inaccurate second-order information, how second-order perceptions of misinformation belief may increase misinformation susceptibility, and how reputational concerns may affect misinformation engagement. Our hope is to illuminate fruitful avenues of future research and to inspire scholars of political misperceptions to pursue unified theoretical models of how misperceptions drive negative political outcomes.
Affiliation(s)
- Sean Bogart
- Department of Psychology, Ohio University, USA
- Jeffrey Lees
- Andlinger Center for Energy and the Environment, Princeton University, USA; School of Public and International Affairs, Princeton University, USA.
10
Bilal A, Hoogensen Gjørv G, Lanteigne M, Brancaleoni R, Gjørv J, Gui D, Kielar JK, Aluola C, Magalini S. Comprehensive security, disinformation, and COVID-19: An analysis of the impacts of mis- and disinformation and populist narratives during the pandemic. Open Res Eur 2023; 3:209. [PMID: 38515932 PMCID: PMC10956505 DOI: 10.12688/openreseurope.16733.1]
Abstract
The COVID-19 pandemic has generated many fundamental and challenging implications regarding security, for both states and people. This article addresses the pandemic as a security threat in which societal and human dimensions of security are intertwined with the narrower (so-called traditional) state dimensions, culminating in comprehensive security. It uses mixed methods, combining desk research with a selection of narratives or stories from several parts of the world that show how the intersection of disinformation and populist discourses exacerbated the security challenges of COVID-19. These are analysed through an innovative comprehensive security analytical approach. Drawing on both security theory and policy, the article examines how the pandemic jeopardised security on multiple levels. First, the state's capacity to act and deliver effectively in the domestic sphere waned. Second, the social contract between the state and its citizens eroded as public trust dissipated. The article argues, however, that the most pervasive threat to security during the pandemic was the exploitation of the information domain in relation to the state, society, and people. It interrogates how mis- and disinformation about the pandemic compounded and exacerbated the security challenges it posed, often relying on existing narratives within right-wing populist movements to increase mistrust and discontent. These largely right-wing populist narratives contributed to widening the gap between states and people, in addition to weakening public compliance with state health security measures. The nature of populism, and the narratives of right-wing populism in particular, contributed to increases in fragmentation, polarisation, and discrimination, damaging societal trust. The article concludes with recommendations to mitigate the adverse impacts of mis- and disinformation, including reinvigorating the relationship between state institutions and the people to strengthen comprehensive security.
Affiliation(s)
- Arsalan Bilal
- Centre for Peace Studies, UiT The Arctic University of Norway, Tromsø, Troms, N9037, Norway
- Marc Lanteigne
- Department of Social Sciences, UiT The Arctic University of Norway, Tromsø, Troms, N9037, Norway
- Jardar Gjørv
- Centre for Peace Studies, UiT The Arctic University of Norway, Tromsø, Troms, N9037, Norway
- Daniele Gui
- Universita Cattolica del Sacro Cuore, Milan, Lombardy, Italy
- Caleb Aluola
- Centre for Peace Studies, UiT The Arctic University of Norway, Tromsø, Troms, N9037, Norway
- Sabina Magalini
- Universita Cattolica del Sacro Cuore, Milan, Lombardy, Italy
11
Ramirez LG, Wickner PG, Cline NB, Rehman N, Wu AC, Pien LC, Stukus D. How Likes and Retweets Impacted Our Patients During the COVID-19 Pandemic. J Allergy Clin Immunol Pract 2023; 11:3356-3364. [PMID: 37536500 DOI: 10.1016/j.jaip.2023.07.033]
Abstract
The growing dependence on social media for health-related information boomed during the COVID-19 pandemic, posing unprecedented challenges in navigating the vast amounts of information available at our fingertips. Social media had a major impact on clinical decision-making, affecting individuals, communities, and societies at large. In this review, we discuss the role of social media in amplifying both information and misinformation, as well as the factors contributing to its prevalence and to reliance on it. We review how medical providers have been affected by this changing landscape, useful communication strategies to employ during in-office patient encounters, and how we can be active players in using social media as a tool for health promotion, correcting misinformation, and preparing for future pandemics.
Affiliation(s)
- Lourdes G Ramirez
- Division of Allergy and Clinical Immunology, Brigham and Women's Hospital and Harvard Medical School, Boston, Mass.
- Paige G Wickner
- Division of Allergy and Clinical Immunology, Brigham and Women's Hospital and Harvard Medical School, Boston, Mass
- Nicholas B Cline
- Department of Allergy and Clinical Immunology, Respiratory Institute, Cleveland Clinic, Cleveland, Ohio
- Narmeen Rehman
- Department of Population Medicine, Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, Mass
- Ann Chen Wu
- Department of Population Medicine, Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, Mass
- Lily C Pien
- Department of Allergy and Clinical Immunology, Respiratory Institute, Cleveland Clinic, Cleveland, Ohio; Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, Cleveland, Ohio; Office of Educator and Scholar Development, Education Institute, Cleveland Clinic, Cleveland, Ohio
- David Stukus
- Division of Allergy and Immunology, Nationwide Children's Hospital, Department of Pediatrics, College of Medicine, the Ohio State University, Columbus, Ohio
12
Ahmed S, Chua HW. Perception and deception: Exploring individual responses to deepfakes across different modalities. Heliyon 2023; 9:e20383. [PMID: 37810833 PMCID: PMC10556585 DOI: 10.1016/j.heliyon.2023.e20383]
Abstract
This study is one of the first to investigate the relationship between modalities and individuals' tendencies to believe and share different forms of deepfakes (also written 'deep fakes'). In an online survey experiment conducted in the US, participants were randomly assigned to one of three disinformation conditions (video deepfakes, audio deepfakes, or cheap fakes) to test the effect of a single modality against multimodality on individuals' perceived claim accuracy and sharing intentions. In addition, the impact of cognitive ability on perceived claim accuracy and sharing intentions across conditions is examined. The results suggest that individuals are more likely to perceive video deepfakes as accurate than cheap fakes, but not audio deepfakes. Yet individuals are more likely to share video deepfakes than cheap and audio deepfakes. We also found that individuals with high cognitive ability are less likely to perceive deepfakes as accurate or to share them, across formats. The findings emphasize that deepfakes are not monolithic and that the associated modalities should be considered when studying user engagement with deepfakes.
13
Sundelson AE, Jamison AM, Huhn N, Pasquino SL, Sell TK. Fighting the infodemic: the 4 i Framework for Advancing Communication and Trust. BMC Public Health 2023; 23:1662. [PMID: 37644563 PMCID: PMC10466697 DOI: 10.1186/s12889-023-16612-9]
Abstract
BACKGROUND: The proliferation of false and misleading health claims poses a major threat to public health. This ongoing "infodemic" has prompted numerous organizations to develop tools and approaches to manage the spread of falsehoods and to communicate more effectively in an environment of mistrust and misleading information. However, these tools and approaches have not been systematically characterized, limiting their utility. This analysis characterizes the current ecosystem of infodemic management strategies, allowing public health practitioners, communicators, researchers, and policy makers to understand the tools at their disposal.
METHODS: A multi-pronged search strategy was used to identify tools and approaches for combatting health-related misinformation and disinformation: a scoping review of academic literature; a review of gray literature from organizations involved in public health communications and misinformation/disinformation management; and a review of policies and infodemic management approaches from all U.S. state health departments and select local health departments. A team of annotators labelled the main feature(s) of each tool or approach using an iteratively developed list of tags.
RESULTS: We identified over 350 infodemic management tools and approaches. We introduce the 4 i Framework for Advancing Communication and Trust (4 i FACT), a modified social-ecological model, to characterize four levels of infodemic intervention: informational, individual, interpersonal, and institutional. Information-level strategies included those designed to amplify factual information, fill information voids, debunk false information, track circulating information, and verify, detect, or rate the credibility of information. Individual-level strategies included those designed to enhance information literacy and prebunking/inoculation tools. Strategies at the interpersonal/community level included resources for public health communicators and community engagement approaches. Institutional and structural approaches included resources for journalists and fact checkers, tools for managing academic/scientific literature, resources for infodemic researchers/research, resources for infodemic managers, social media regulation, and policy/legislation.
CONCLUSIONS: The 4 i FACT provides a useful way to characterize the current ecosystem of infodemic management strategies. Given the complex and multifaceted nature of the ongoing infodemic, efforts should be made to utilize and integrate strategies across all four levels of the modified social-ecological model.
Affiliation(s)
- Anne E Sundelson
- Johns Hopkins Center for Health Security, 700 E. Pratt Street, Suite 900, Baltimore, MD, 21202, USA.
- Department of Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe Street, Room E7527, Baltimore, MD, 21205, USA.
- Amelia M Jamison
- Department of Health, Behavior and Society, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe Street, Baltimore, MD, 21205, USA
- Noelle Huhn
- Johns Hopkins Center for Health Security, 700 E. Pratt Street, Suite 900, Baltimore, MD, 21202, USA
- Department of Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe Street, Room E7527, Baltimore, MD, 21205, USA
- Sarah-Louise Pasquino
- Department of International Health, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe Street, Baltimore, MD, 21205, USA
- Tara Kirk Sell
- Johns Hopkins Center for Health Security, 700 E. Pratt Street, Suite 900, Baltimore, MD, 21202, USA
- Department of Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe Street, Room E7527, Baltimore, MD, 21205, USA
14
Blum A. Physician, Steel Thyself. J Am Board Fam Med 2023; 36:692-694. [PMID: 37468218 DOI: 10.3122/jabfm.2023.230108r1]
Abstract
A family physician who has long tried to prevent patients from falling victim to health scams and disinformation recently confronted an unexpected challenge of his own.
Affiliation(s)
- Alan Blum
- Department of Family, Internal, and Rural Medicine, University of Alabama School of Medicine, Tuscaloosa, AL, USA
15
Arribas CM, Arcos R, Gértrudix M, Mikulski K, Hernández-Escayola P, Teodor M, Novăcescu E, Surdu I, Stoian V, García-Jiménez A. Information manipulation and historical revisionism: Russian disinformation and foreign interference through manipulated history-based narratives. Open Res Eur 2023; 3:121. PMID: 37736288; PMCID: PMC10509605; DOI: 10.12688/openreseurope.16087.1.
Abstract
Background: Disinformation and historical revisionism have been acknowledged as tools for foreign interference that belong to the landscape of hybrid threats. Historical revisionism plays an essential role in Russian foreign policy towards the post-Soviet space and is closely tied to the concepts of the Near Abroad and Russkii Mir ('Russian World'), as well as to certain ideas of the neo-Eurasianist movement. This article examines Russian revisionist narratives disseminated in information and influence campaigns in Europe and against the West. Methods: This study uses a mixed methodology combining desk research, including a literature review, with analysis of cases in the EUvsDisinfo database identified before the February 2022 invasion of Ukraine. Results: The manipulation of historical events has been widely employed by the Kremlin as a tool for foreign interference to achieve strategic objectives. First World War treaties, mainly the Trianon Peace Treaty, as well as the Second World War and the communist and fascist historical experiences of countries in the post-Soviet space, are the pivotal topics from which hostile influence narratives are built. From the analysis of the EUvsDisinfo database, the article identifies seven recurring themes. Conclusions: Our findings suggest that pre-emptively developed counter-narratives based on historical evidence and sound historiography can be an effective tool against hostile revisionist narratives that exploit vulnerabilities and specific target groups within European societies.
Affiliation(s)
- Rubén Arcos
- Universidad Rey Juan Carlos, Móstoles, Community of Madrid, 28933, Spain
- Manuel Gértrudix
- Universidad Rey Juan Carlos, Móstoles, Community of Madrid, 28933, Spain
- Kamil Mikulski
- Universidad Rey Juan Carlos, Móstoles, Community of Madrid, 28933, Spain
- Mihaela Teodor
- Academia Nationala de Informatii Mihai Viteazul, Bucharest, Romania
- Elena Novăcescu
- Academia Nationala de Informatii Mihai Viteazul, Bucharest, Romania
- Ileana Surdu
- Academia Nationala de Informatii Mihai Viteazul, Bucharest, Romania
- Valentin Stoian
- Academia Nationala de Informatii Mihai Viteazul, Bucharest, Romania
16
Schulze A, Brand F, Leschzyk DK, Beuthner M, Biegert A, Bomnüter U, Boy B, Bucher HJ, Frau R, Hubig M, Löffelholz M, Mayer J, Pliquet C, Radechovsky J, Schleicher K, Ulbrich K. [Optimizing the risk and crisis communication of governments, authorities, and public health institutions: challenges in long-lasting crises, illustrated by the COVID-19 pandemic]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2023. PMID: 37280440; PMCID: PMC10243687; DOI: 10.1007/s00103-023-03708-1.
Abstract
The COVID-19 pandemic demonstrates the great importance of risk and crisis communication. In a dynamic situation, authorities and policymakers face the challenge of handling a large amount of data, reviewing it, and communicating it in a way that is appropriate for diverse target groups. Comprehensible and unambiguous information on risks and options for action contributes significantly to the objective and subjective safety of the population. Hence, there is a great need to use the experience gained from the pandemic to optimize risk and crisis communication. Digitalization enables multimodal arrangements, that is, the combination of text, graphics, icons, images, animations, and sound. These arrangements play an increasingly important role in risk and crisis communication. It is of interest to what extent the communicative interaction of authorities, media, and other public actors in crisis preparation and management, given a diverse public, can be improved through target-group-specific communication, and how legal certainty can be ensured for official and media practice. Accordingly, the article pursues three objectives: (1) it describes the challenges faced by authorities and media actors in pandemic communication; (2) it shows the role of multimodal arrangements, as well as the research perspectives necessary to grasp the complexity of communicative crisis management in the federal system; and (3) it provides a rationale for how an interdisciplinary research network from the fields of media, communication, and law can gain insights into the evidence-based use of multimodal communication.
Affiliation(s)
- Annett Schulze
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589 Berlin, Germany
- Fabian Brand
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589 Berlin, Germany
- Dinah Kristin Leschzyk
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589 Berlin, Germany
- Michael Beuthner
- SRH Berlin School of Applied Sciences, Berlin School of Popular Arts, Berlin, Germany
- Alena Biegert
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589 Berlin, Germany
- Udo Bomnüter
- Macromedia University of Applied Sciences, Campus Berlin, Berlin, Germany
- Bettina Boy
- Karlsruher Institut für Technologie (KIT), Karlsruhe, Germany
- Robert Frau
- Europa-Universität Viadrina, Frankfurt (Oder), Germany
- Marvin Hubig
- Europa-Universität Viadrina, Frankfurt (Oder), Germany
- Johanne Mayer
- Karlsruher Institut für Technologie (KIT), Karlsruhe, Germany
- Carolyn Pliquet
- SRH Berlin School of Applied Sciences, Berlin School of Popular Arts, Berlin, Germany
- Kirsten Ulbrich
- SRH Berlin School of Applied Sciences, Berlin School of Popular Arts, Berlin, Germany
17
Vasist PN, Chatterjee D, Krishnan S. The Polarizing Impact of Political Disinformation and Hate Speech: A Cross-country Configural Narrative. Inf Syst Front 2023:1-26. PMID: 37361884; PMCID: PMC10106894; DOI: 10.1007/s10796-023-10390-w.
Abstract
Information and communication technologies hold immense potential to enhance our lives and societal well-being. However, digital spaces have also emerged as fertile ground for fake news campaigns and hate speech, aggravating polarization and posing a threat to societal harmony. Although this dark side is acknowledged in the literature, the complexity of polarization as a phenomenon, coupled with the socio-technical nature of fake news, necessitates a novel approach to unravel its intricacies. In light of this complexity, the current study employs complexity theory and a configurational approach to investigate, through a cross-country analysis of 177 countries, the impact of diverse disinformation campaigns and hate speech in polarizing societies. The results demonstrate the definitive role of disinformation and hate speech in polarizing societies. The findings also offer a balanced perspective on internet censorship and social media monitoring as necessary evils for combating the disinformation menace and controlling polarization, but suggest that such efforts may lend support to a milieu of hate speech that fuels polarization. Implications for theory and practice are discussed.
Affiliation(s)
- Debashis Chatterjee
- Organizational Behavior and Human Resources Area, Indian Institute of Management Kozhikode, Kozhikode, Kerala, India
- Satish Krishnan
- Information Systems Area, Indian Institute of Management Kozhikode, Kozhikode, Kerala, India
18
Wolynn T, Hermann C, Hoffman BL. Social Media and Vaccine Hesitancy: Help Us Move the Needle. Pediatr Clin North Am 2023; 70:329-341. PMID: 36841600; DOI: 10.1016/j.pcl.2022.11.010.
Abstract
With more than 75% of parents and pediatric caregivers getting their health-related information online, reaching families on social media is a powerful way to leverage the trust built in the examination room to address vaccine hesitancy. This article first reviews the ways the antivaccine movement has leveraged social media to expand its considerable influence and examines why social media companies have failed to reduce antivaccine misinformation and disinformation. Next, it reviews the barriers to adoption of social media-based communication by pediatric health-care providers and concludes with action-oriented items to increase the adoption of this powerful tool.
Affiliation(s)
- Todd Wolynn
- Kids Plus Pediatrics, 4070 Beechwood Boulevard, Pittsburgh, PA 15217, USA
- Chad Hermann
- Kids Plus Pediatrics, 4070 Beechwood Boulevard, Pittsburgh, PA 15217, USA
- Beth L Hoffman
- Department of Behavioral and Community Health Sciences, University of Pittsburgh School of Public Health, 130 De Soto Street, Pittsburgh, PA 15261, USA; Center for Social Dynamics and Community Health, University of Pittsburgh School of Public Health, Pittsburgh, PA, USA
19
Petratos PN, Faccia A. Fake news, misinformation, disinformation and supply chain risks and disruptions: risk management and resilience using blockchain. Ann Oper Res 2023; 327:1-28. PMID: 37361081; PMCID: PMC9994786; DOI: 10.1007/s10479-023-05242-4.
Abstract
Fake news, misinformation and disinformation have increased significantly over the past years, and they have a profound effect on societies and supply chains. This paper examines the relationship of information risks to supply chain disruptions and proposes blockchain applications and strategies to mitigate and manage them. We critically review the literature on supply chain risk management (SCRM) and supply chain resilience (SCRES) and find that information flows and risks have attracted relatively little attention. We contribute by suggesting that information integrates other flows, processes and operations, and that it is an overarching theme essential to every part of the supply chain. Building on related studies, we create a theoretical framework that incorporates fake news, misinformation and disinformation. To our knowledge, this is the first attempt to combine types of misleading information with SCRM/SCRES. We find that fake news, misinformation and disinformation can be amplified and cause larger supply chain disruptions, especially when they are exogenous and intentional. Finally, we present both theoretical and practical applications of blockchain technology to the supply chain and find support that blockchain can advance the risk management and resilience of supply chains. Cooperation and information sharing are effective strategies.
20
Algarin AB, Yeager S, Patterson TL, Strathdee SA, Harvey-Vera A, Vera CF, Stamos-Buesig T, Artamanova I, Abramovitz D, Smith LR. The moderating role of resilience in the relationship between experiences of COVID-19 response-related discrimination and disinformation among people who inject drugs. Drug Alcohol Depend 2023; 246:109831. PMID: 36924661; PMCID: PMC9981478; DOI: 10.1016/j.drugalcdep.2023.109831.
Abstract
BACKGROUND Due to the persistence of COVID-19, it remains important to measure and examine potential barriers to COVID-19 prevention and treatment to avert additional loss of life, particularly among stigmatized populations, such as people who inject drugs (PWID), who are at high risk for contracting and spreading SARS-CoV-2. We assessed the psychometrics of a novel COVID-19 response-related discrimination scale among PWID and characterized associations between COVID-19 response-related discrimination, resilience to adversity, and endorsement of COVID-19 disinformation. METHODS We assessed the internal reliability, structural validity, and construct validity of a 4-item COVID-19 response-related discrimination scale among PWID living in San Diego County who completed interviewer-administered surveys between October 2020 and September 2021. Using negative binomial regression, we assessed the relationship between COVID-19 response-related discrimination and disinformation and the potential moderating role of resilience. RESULTS Of 381 PWID, the mean age was 42.6 years, and the majority were male (75.6%) and Hispanic (61.9%). The COVID-19 response-related discrimination scale had modest reliability (α = 0.66, ω = 0.66) as a single construct with acceptable construct validity (all p ≤ 0.05). Among the 216 PWID who completed supplemental surveys, a significant association between COVID-19 response-related discrimination and COVID-19 disinformation was observed, which was moderated by resilience (p = 0.044). Specifically, among PWID with high levels of resilience, endorsement of COVID-19 disinformation increased significantly as exposure to COVID-19 response-related discrimination increased (p = 0.011). CONCLUSIONS These findings suggest that intervening on COVID-19 response-related discrimination may offset the negative outcomes associated with COVID-19 disinformation.
Affiliation(s)
- Angel B Algarin
- Center for Health Promotion and Disease Prevention, Edson College of Nursing and Health Innovation, Arizona State University - Downtown Campus, Phoenix, AZ, USA
- Samantha Yeager
- Division of Infectious Diseases and Global Public Health, University of California San Diego, San Diego, CA, USA
- Thomas L Patterson
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Steffanie A Strathdee
- Division of Infectious Diseases and Global Public Health, University of California San Diego, San Diego, CA, USA
- Alicia Harvey-Vera
- Division of Infectious Diseases and Global Public Health, University of California San Diego, San Diego, CA, USA; Facultad de Medicina, Universidad Xochicalco, Tijuana, Mexico; United States-Mexico Border Health Commission, Tijuana, Mexico
- Carlos F Vera
- Division of Infectious Diseases and Global Public Health, University of California San Diego, San Diego, CA, USA
- Irina Artamanova
- Division of Infectious Diseases and Global Public Health, University of California San Diego, San Diego, CA, USA
- Daniela Abramovitz
- Division of Infectious Diseases and Global Public Health, University of California San Diego, San Diego, CA, USA
- Laramie R Smith
- Division of Infectious Diseases and Global Public Health, University of California San Diego, San Diego, CA, USA
21
Balcaen P, Buts C, Du Bois C, Tkacheva O. The effect of disinformation about COVID-19 on consumer confidence: Insights from a survey experiment. J Behav Exp Econ 2023; 102:101968. PMID: 36531665; PMCID: PMC9733969; DOI: 10.1016/j.socec.2022.101968.
Abstract
Although the COVID-19 pandemic was accompanied by an infodemic about the origin of the virus and the effectiveness of vaccines, little is known about the causal effect of this disinformation on the economy. This article fills this void by examining the effects of disinformation about COVID-19 vaccines on consumer confidence by means of an original survey experiment in the Dutch-speaking communities of Belgium. Our findings show that the information set that impacts consumer confidence is much broader than previously assumed. We show that disinformation changes the perception of the effectiveness of vaccines, which in turn indirectly impacts the future economic outlook, as measured by consumer confidence. Moreover, we find that these effects are larger for respondents exposed to disinformation framed as containing 'scientific evidence' than for those exposed to 'conspiracy frames'.
Affiliation(s)
- Pieter Balcaen
- Royal Military Academy, Hobbemastraat 184, 1000 Brussels, Belgium
- Caroline Buts
- Vrije Universiteit Brussel, Pleinlaan 2, 1050 Elsene, Belgium
- Cind Du Bois
- Royal Military Academy, Hobbemastraat 184, 1000 Brussels, Belgium
- Olesya Tkacheva
- Brussels School of Governance, Pleinlaan 2, 1050 Elsene, Belgium
22
Whitehead HS, French CE, Caldwell DM, Letley L, Mounier-Jack S. A systematic review of communication interventions for countering vaccine misinformation. Vaccine 2023; 41:1018-1034. PMID: 36628653; PMCID: PMC9829031; DOI: 10.1016/j.vaccine.2022.12.059.
Abstract
BACKGROUND Misinformation and disinformation around vaccines have grown in recent years, exacerbated during the Covid-19 pandemic. Effective strategies for countering vaccine misinformation and disinformation are crucial for tackling vaccine hesitancy. We conducted a systematic review to identify and describe communications-based strategies used to prevent and ameliorate the effect of mis- and dis-information on people's attitudes and behaviours surrounding vaccination (objective 1) and examined their effectiveness (objective 2). METHODS We searched CINAHL, Web of Science, Scopus, MEDLINE, Embase, PsycInfo and MedRxiv in March 2021. The search strategy was built around three themes: (1) communications and media; (2) misinformation; and (3) vaccines. For trials addressing objective 2, risk of bias was assessed using the Cochrane risk of bias in randomized trials tool (RoB2). RESULTS Of 2000 identified records, 34 eligible studies addressed objective 1, 29 of which also addressed objective 2 (25 RCTs and 4 before-and-after studies). Nine 'intervention approaches' were identified; most focused on the content of the intervention or message (debunking/correctional, informational, use of disease images or other 'scare tactics', use of humour, message intensity, inclusion of misinformation warnings, and communicating weight of evidence), while two focused on the delivery of the intervention or message (timing and source). Some strategies, such as scare tactics, appear to be ineffective and may increase misinformation endorsement. Communicating with certainty, rather than acknowledging uncertainty around vaccine efficacy or risks, was also found to backfire. Promising approaches include communicating the weight of evidence and scientific consensus around vaccines and related myths, using humour, and incorporating warnings about encountering misinformation. Trying to debunk misinformation, informational approaches, and communicating uncertainty had mixed results.
CONCLUSION This review identifies some promising communication strategies for addressing vaccine misinformation. Interventions should be further evaluated by measuring effects on vaccine uptake, rather than distal outcomes such as knowledge and attitudes, in quasi-experimental and real-life contexts.
Affiliation(s)
- Hannah S. Whitehead
- NIHR Health Protection Research Unit in Vaccines and Immunisation, Department of Global Health and Development, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, United Kingdom
- Clare E. French
- NIHR Health Protection Research Unit in Behavioural Science and Evaluation, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Deborah M. Caldwell
- NIHR Health Protection Research Unit in Behavioural Science and Evaluation, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Sandra Mounier-Jack
- NIHR Health Protection Research Unit in Vaccines and Immunisation, Department of Global Health and Development, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, United Kingdom; Corresponding author at: London School of Hygiene and Tropical Medicine, Keppel St, London WC1E 7HT, United Kingdom
23
Zimmerman T, Shiroma K, Fleischmann KR, Xie B, Jia C, Verma N, Lee MK. Misinformation and COVID-19 vaccine hesitancy. Vaccine 2023; 41:136-144. PMID: 36411132; PMCID: PMC9659512; DOI: 10.1016/j.vaccine.2022.11.014.
Abstract
BACKGROUND COVID-19 vaccine hesitancy has emerged as a major public health challenge. Although medical and scientific misinformation has been known to fuel vaccine hesitancy in the past, misinformation surrounding COVID-19 seems to be rampant, and increasing evidence suggests that it is contributing to COVID-19 vaccine hesitancy today. The relationship between misinformation and COVID-19 vaccine hesitancy is complex, however, and it is relatively understudied. METHODS In this article, we report qualitative data from two related but distinct studies from a larger project. Study 1 included semi-structured, open-ended interviews conducted in October-November 2020 via phone with 30 participants to investigate the relationship between misinformation and COVID-19 vaccine hesitancy. Study 1's results then informed the design of open-ended questions for Study 2, an online survey conducted in May-June 2021 to consider the relationship between misinformation and vaccine hesitancy further. The data were examined with thematic analysis. RESULTS Study 1 led to the identification of positive and negative themes related to attitudes toward COVID-19 vaccines. In Study 2, responses from vaccine-hesitant participants included six categories of misinformation: medical, scientific, political, media, religious, and technological. Across both Study 1 and Study 2, six vaccine hesitancy themes were identified from the data: concerns about the vaccines' future effects, doubts about the vaccines' effectiveness, commercial profiteering, preference for natural immunity, personal freedom, and COVID-19 denial. CONCLUSIONS The relationship between misinformation and vaccine hesitancy is complicated. Various types of misinformation exist, with each related to a specific type of vaccine hesitancy-related attitude. Personal freedom and COVID-19 denial are vaccine attitudes of particular interest, representing important yet understudied phenomena. Medical and scientific approaches may not be sufficient to combat misinformation based in religion, media, or politics; public health officials may benefit from partnering with experts from those fields to address harmful misinformation that is driving COVID-19 vaccine hesitancy.
Affiliation(s)
- Bo Xie
- University of Texas at Austin, Austin, TX, USA
- Nitin Verma
- University of Texas at Austin, Austin, TX, USA
24
Aïmeur E, Amri S, Brassard G. Fake news, disinformation and misinformation in social media: a review. Soc Netw Anal Min 2023; 13:30. PMID: 36789378; PMCID: PMC9910783; DOI: 10.1007/s13278-023-01028-5.
Abstract
Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as unlimited, easy communication and instant access to news and information, they can also have many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue, and fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Meanwhile, artificial intelligence (AI) approaches remain incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions for tackling these challenges.
Affiliation(s)
- Esma Aïmeur
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
- Sabrine Amri
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
- Gilles Brassard
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
25
Houser RS, Koblentz GD, Lentzos F. Understanding Biosafety and Biosecurity in Ukraine. Health Secur 2023; 21:70-80. PMID: 36629857; DOI: 10.1089/hs.2022.0095.
Abstract
The Russian invasion of Ukraine in February 2022 was accompanied by unfounded Russian allegations of bioweapon activities in Ukraine conducted by the United States and its allies. While false, such allegations can cause substantial damage to disarmament efforts and international cooperation for strengthening disease surveillance and global health security. The purpose of this article is to describe Ukraine's biosafety, biosecurity, and dual-use policies and to provide important context for understanding the unwarranted Russian allegations. Moreover, the analysis of Ukraine's biorisk management system highlights some of the international efforts underway to ensure that life sciences research across the world is conducted safely, securely, and responsibly. With the help of international partners, Ukraine has strengthened its biorisk management governance, as well as identified areas for improvement that it is working to address.
Affiliation(s)
- Ryan S Houser
- Ryan S. Houser, MHA, MPH, MSc, EMPS, NRAEMT, is a Research Associate, Department of War Studies, King's College London, United Kingdom; and a Student, Schar School of Policy and Government, George Mason University, Arlington, VA
- Gregory D Koblentz
- Gregory D. Koblentz, MPP, PhD, is an Associate Professor, Schar School of Policy and Government, George Mason University, Arlington, VA
- Filippa Lentzos
- Filippa Lentzos, PhD, is an Associate Professor, Department of War Studies, King's College London, United Kingdom
26
Wolfe CR, Eylem AA, Dandignac M, Lowe SR, Weber ML, Scudiere L, Reyna VF. Understanding the landscape of web-based medical misinformation about vaccination. Behav Res Methods 2023; 55:348-363. PMID: 35380412; PMCID: PMC8981888; DOI: 10.3758/s13428-022-01840-5.
Abstract
Given the high rates of vaccine hesitancy, web-based medical misinformation about vaccination is a serious issue. We sought to understand the nature of Google searches leading to medical misinformation about vaccination and, guided by fuzzy-trace theory, the characteristics of misinformation pages related to comprehension, inference-making, and medical decision-making. We collected data from web pages presenting vaccination information. We assessed whether web pages presented medical misinformation, had an overarching gist, used narrative, and employed emotional appeals. We used Search Engine Optimization tools to determine the number of backlinks from other web pages, monthly Google traffic, and Google keywords. We used Coh-Metrix to measure readability and Gist Inference Scores (GIS). For medical misinformation web pages, Google traffic and backlinks were heavily skewed, with means of 138.8 visitors/month and 805 backlinks per page. Medical misinformation pages were significantly more likely than other vaccine pages to have backlinks from other pages, and significantly less likely to receive at least one visitor from Google searches per month. The top Google searches leading to medical misinformation were "the truth about vaccinations," "dangers of vaccination," and "pro con vaccines." Most frequently, pages challenged vaccine safety, with 32.7% having an overarching gist, 7.7% presenting narratives, and 17.3% making emotional appeals. Emotional appeals were significantly more common on medical misinformation pages than on other high-traffic vaccination pages. Misinformation pages had a mean readability grade level of 11.5 and a mean GIS of −0.234. Low GIS scores are a likely barrier to understanding gist and are the "Achilles' heel" of misinformation pages.
Affiliation(s)
- Andrew A Eylem: Department of Psychology, Miami University, Oxford, OH 45056, USA
- Savannah R Lowe: Department of Psychology, Miami University, Oxford, OH 45056, USA
- Margo L Weber: Department of Psychology, Miami University, Oxford, OH 45056, USA
27
Alsmadi I, Rice NM, O’Brien MJ. Fake or not? Automated detection of COVID-19 misinformation and disinformation in social networks and digital media. Comput Math Organ Theory 2022:1-19. PMID: 36466587; PMCID: PMC9702725; DOI: 10.1007/s10588-022-09369-w.
Abstract
With the continuous spread of the COVID-19 pandemic, misinformation poses serious threats and concerns. COVID-19-related misinformation mixes health aspects with news and political misinformation. This mixture complicates the ability to judge whether a claim related to COVID-19 is information, misinformation, or disinformation. With no standard terminology for misinformation and disinformation, integrating different datasets and using existing classification models can be impractical. To deal with these issues, we aggregated several COVID-19 misinformation datasets and compared models learned from individual datasets against one learned from the aggregated data. We also evaluated the impact of several word- and sentence-embedding models and transformers on the performance of the classification models. We observed that whereas word-embedding models improved all evaluated classification models, the level of improvement varied among classifiers. Although our work focused on COVID-19 misinformation detection, a similar approach can be applied to myriad other topics, such as the recent Russian invasion of Ukraine.
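Aggregating misinformation corpora with no shared terminology, as described above, starts with mapping each dataset's label vocabulary onto a common scheme. Below is a minimal sketch with hypothetical dataset names and labels; the study's actual corpora and label sets differ.

```python
# Hypothetical per-dataset label vocabularies mapped to one shared scheme.
LABEL_MAP = {
    "fake": "misinformation", "false": "misinformation",
    "hoax": "disinformation", "propaganda": "disinformation",
    "true": "information", "real": "information",
}

def aggregate(datasets):
    """Merge (text, label) pairs from several corpora, normalising
    labels and dropping items whose label has no agreed mapping."""
    merged = []
    for name, rows in datasets.items():
        for text, label in rows:
            norm = LABEL_MAP.get(label.lower())
            if norm is not None:
                merged.append((text, norm, name))
    return merged

datasets = {
    "corpus_a": [("masks cause hypoxia", "fake")],
    "corpus_b": [("vaccines are tracking chips", "hoax"),
                 ("wash your hands", "TRUE")],
}
print(len(aggregate(datasets)))  # 3
```

A single classifier can then be trained on the merged rows, or one per source corpus for the comparison the paper describes.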
Affiliation(s)
- Izzat Alsmadi: Department of Computing and Cyber Security, Texas A&M University–San Antonio, San Antonio, USA
- Natalie Manaeva Rice: Center for Information and Communication Studies, University of Tennessee, Knoxville, USA
- Michael J. O’Brien: Department of Communication, History, and Philosophy, Department of Life Sciences, Texas A&M University, San Antonio, USA
28
Ng LHX, Cruickshank IJ, Carley KM. Coordinating Narratives Framework for cross-platform analysis in the 2021 US Capitol riots. Comput Math Organ Theory 2022; 29:1-17. PMID: 36440374; PMCID: PMC9676754; DOI: 10.1007/s10588-022-09371-2.
Abstract
Coordinated disinformation campaigns are used to influence social media users, potentially leading to offline violence. In this study, we introduce a general methodology to uncover coordinated messaging through an analysis of user posts on Parler. The proposed Coordinating Narratives Framework constructs a user-to-user coordination graph, which is induced by a user-to-text graph and a text-to-text similarity graph. The text-to-text graph is constructed based on the textual similarity of Parler and Twitter posts. We study three influential groups of users in the 6 January 2021 Capitol riots and detect networks of coordinated user clusters that post similar textual content in support of disinformation narratives related to the U.S. 2020 elections. We further extend our methodology to Twitter tweets to identify authors that share the same disinformation messaging as the aforementioned Parler user groups.
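The user-to-user coordination graph can be illustrated with a small sketch. This is a minimal illustration of the general idea rather than the authors' implementation: it uses word-set Jaccard similarity between posts and an arbitrary threshold in place of the paper's text-to-text similarity graph.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set Jaccard similarity between two posts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def coordination_graph(posts, threshold=0.6):
    """Link two users whenever any pair of their posts is
    near-duplicate text (similarity >= threshold)."""
    edges = set()
    for (u1, t1), (u2, t2) in combinations(posts, 2):
        if u1 != u2 and jaccard(t1, t2) >= threshold:
            edges.add(tuple(sorted((u1, u2))))
    return edges

# Toy posts: two users sharing identical text get linked.
posts = [
    ("alice", "the election was stolen share everywhere"),
    ("bob",   "the election was stolen share everywhere"),
    ("carol", "lovely weather in pittsburgh today"),
]
print(coordination_graph(posts))  # {('alice', 'bob')}
```

Clusters of coordinated users then correspond to connected components (or denser subgraphs) of this edge set.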
Affiliation(s)
- Lynnette Hui Xian Ng: CASOS, Institute for Software Research, Carnegie Mellon University, 4665 Forbes Avenue, Pittsburgh, PA 15213, USA
- Iain J. Cruickshank: CASOS, Institute for Software Research, Carnegie Mellon University, 4665 Forbes Avenue, Pittsburgh, PA 15213, USA
- Kathleen M. Carley: CASOS, Institute for Software Research, Carnegie Mellon University, 4665 Forbes Avenue, Pittsburgh, PA 15213, USA
29
Akhtar P, Ghouri AM, Khan HUR, Amin ul Haq M, Awan U, Zahoor N, Khan Z, Ashraf A. Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Ann Oper Res 2022; 327:1-25. PMID: 36338350; PMCID: PMC9628472; DOI: 10.1007/s10479-022-05015-5.
Abstract
Fake news and disinformation (FNaD) are increasingly being circulated through various online and social networking platforms, causing widespread disruptions and influencing decision-making perceptions. Despite the growing importance of detecting fake news in politics, relatively limited research efforts have been made to develop artificial intelligence (AI) and machine learning (ML) oriented FNaD detection models suited to minimizing supply chain disruptions (SCDs). Using a combination of AI and ML, and case studies based on data collected from Indonesia, Malaysia, and Pakistan, we developed an FNaD detection model aimed at preventing SCDs. This model, based on multiple data sources, has shown evidence of its effectiveness in managerial decision-making. Our study further contributes to the supply chain and AI-ML literature, provides practical insights, and points to future research directions.
Affiliation(s)
- Pervaiz Akhtar: University of Aberdeen Business School, University of Aberdeen, King’s College, Aberdeen AB24 5UA, UK; Imperial College London, London SW7 2BU, UK
- Arsalan Mujahid Ghouri: Faculty of Management and Economics, Universiti Pendidikan Sultan Idris, Tanjong Malim, Malaysia
- Haseeb Ur Rehman Khan: Faculty of Art, Computing, and Creative Industry, Universiti Pendidikan Sultan Idris, Tanjong Malim, Malaysia
- Mirza Amin ul Haq: Department of Business Administration, Iqra University, Karachi, Pakistan
- Usama Awan: Department of Business Administration, Inland School of Business and Social Sciences, Inland Norway University of Applied Sciences, Hamar, Norway
- Nadia Zahoor: School of Business and Management, Queen Mary University of London, London, UK
- Zaheer Khan: University of Aberdeen Business School, University of Aberdeen, King’s College, Aberdeen AB24 5UA, UK; Innolab, University of Vaasa, Vaasa, Finland
- Aniqa Ashraf: CAS-Key Laboratory of Crust-Mantle Materials and the Environments, School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, PR China
30
Darwish O, Tashtoush Y, Bashayreh A, Alomar A, Alkhaza’leh S, Darweesh D. A survey of uncover misleading and cyberbullying on social media for public health. Cluster Comput 2022; 26:1709-1735. PMID: 36034676; PMCID: PMC9396598; DOI: 10.1007/s10586-022-03706-z.
Abstract
Misleading health information is a critical phenomenon in modern life, driven by advances in technology. Social media has facilitated the dissemination of information, and as a result misinformation spreads rapidly, cheaply, and successfully. Fake health information can have a significant effect on human behavior and attitudes. This survey presents current work on misleading information detection (MLID) in health fields based on machine learning and deep learning techniques and gives a detailed discussion of the main phases of the generic approach adopted for MLID. In addition, benchmark datasets and the metrics most commonly used to evaluate the performance of MLID algorithms are discussed. Finally, a deep investigation of the limitations and drawbacks of current technologies across various research directions is provided to help researchers choose the most appropriate methods for this emerging task.
Affiliation(s)
- Omar Darwish: Information Security and Applied Computing, Eastern Michigan University, 900 Oakwood St, Ypsilanti, MI 48197, USA
- Yahya Tashtoush: Department of Computer Science, Jordan University of Science and Technology, Irbid 22110, Jordan
- Amjad Bashayreh: Department of Computer Science, Jordan University of Science and Technology, Irbid 22110, Jordan
- Alaa Alomar: Department of Computer Science, Jordan University of Science and Technology, Irbid 22110, Jordan
- Shahed Alkhaza’leh: Department of Computer Science, Jordan University of Science and Technology, Irbid 22110, Jordan
- Dirar Darweesh: Department of Computer Science, Jordan University of Science and Technology, Irbid 22110, Jordan
31
Kornides ML, Badlis S, Head KJ, Putt M, Cappella J, Gonzalez-Hernadez G. Exploring content of misinformation about HPV vaccine on twitter. J Behav Med 2022. PMID: 35896853; DOI: 10.1007/s10865-022-00342-1.
Abstract
Although social media can be a source of guidance about HPV vaccination for parents, the information may not always be complete or accurate. We conducted a retrospective content analysis to identify the content and frequency of disinformation and misinformation about the HPV vaccine posted on Twitter from December 15, 2019, through March 31, 2020, among 3876 unique, English-language #HPV Tweets, excluding retweets. We found that 24% of Tweets contained disinformation or misinformation, and the remaining 76% contained support/education. The most prevalent categories of disinformation/misinformation were (1) adverse health effects (59%), (2) mandatory vaccination (19%), and (3) inefficacy of the vaccine (14%). Among the adverse health effects Tweets, non-specific harm/injury (51%) and death (23%) were most frequent. Disinformation/misinformation Tweets had 5.44 (95% CI 5.33–5.56) times the incidence rate of retweets compared with supportive Tweets. In conclusion, almost one-quarter of #HPV Tweets contained disinformation or misinformation about the HPV vaccine, and these tweets received higher audience engagement, including likes and retweets. Implications for vaccine hesitancy are discussed.
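The incidence rate ratio reported above is simply retweets per tweet in one group divided by retweets per tweet in the other. A sketch with made-up counts, not the study's data:

```python
def incidence_rate_ratio(events_a, n_a, events_b, n_b):
    """Rate of events per unit in group A divided by the rate in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical counts: misinformation tweets vs supportive tweets.
retweets_mis, tweets_mis = 540, 100   # 5.4 retweets per tweet
retweets_sup, tweets_sup = 300, 300   # 1.0 retweet per tweet
print(incidence_rate_ratio(retweets_mis, tweets_mis,
                           retweets_sup, tweets_sup))  # 5.4
```

The study's confidence interval would come from a count model (e.g. Poisson regression) fit to the per-tweet retweet counts.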
32
Chiang AL, Jajoo K, Shivashankar R, Tofani CJ, Agarwal A, Rodriguez NJ, Chan WW. Scoping Out Misinformation: Assessing Factual Inaccuracies Among Popular Colonoscopy-Related Videos On Social Media. Gastro Hep Adv 2022; 1:923-925. PMID: 36381606; PMCID: PMC9662594; DOI: 10.1016/j.gastha.2022.07.005.
Affiliation(s)
- Austin Lee Chiang: Division of Gastroenterology and Hepatology, Thomas Jefferson University Hospitals, Philadelphia, PA
- Kunal Jajoo: Division of Gastroenterology, Hepatology and Endoscopy, Brigham and Women’s Hospital, Boston, MA; Harvard Medical School, Boston, MA
- Raina Shivashankar: Division of Gastroenterology and Hepatology, Thomas Jefferson University Hospitals, Philadelphia, PA
- Christina J. Tofani: Division of Gastroenterology and Hepatology, Thomas Jefferson University Hospitals, Philadelphia, PA
- Amit Agarwal: Division of Gastroenterology and Hepatology, Thomas Jefferson University Hospitals, Philadelphia, PA
- Nicolette J. Rodriguez: Division of Gastroenterology, Hepatology and Endoscopy, Brigham and Women’s Hospital, Boston, MA; Harvard Medical School, Boston, MA
- Walter W. Chan: Division of Gastroenterology, Hepatology and Endoscopy, Brigham and Women’s Hospital, Boston, MA; Harvard Medical School, Boston, MA
33
Casino G. [Communication in times of pandemic: information, disinformation, and provisional lessons from the coronavirus crisis]. Gac Sanit 2022; 36 Suppl 1:S97-S104. PMID: 35781157; PMCID: PMC9244671; DOI: 10.1016/j.gaceta.2022.01.003.
Abstract
Objective: To characterize the infodemic associated with the COVID-19 pandemic and, for the case of Spain, to analyze information consumption, trust in sources, the role of the media, and government communication, as well as to discuss some provisional communication lessons drawn from studies of this and other pandemics. Method: Bibliographic search in PubMed and Scopus, and review of the documents selected using criteria of relevance to the objectives and to the Spanish context. Results: The COVID-19 pandemic has overlapped with an infodemic that has produced the largest known avalanche of disinformation and has made it difficult for almost half the population to find reliable information. In Spain, information consumption has been concentrated in traditional media and WhatsApp. The media have helped people understand the pandemic and have been relatively well regarded, although they inspire less trust than in other Western countries. Analysis of the Spanish Government's communication reveals several errors, such as the dissemination of overly reassuring messages at the start of the pandemic, a lack of transparency, an excess of information, and the spokesperson model adopted. Conclusions: Knowledge of the infodemic associated with COVID-19 is fragmentary and insufficient. Although the health crisis is not yet over and cannot be properly evaluated, some provisional communication lessons can already be drawn. The complexity of the disinformation phenomenon demands that infodemiology be treated as a scientific discipline, so that the spread of disinformation can be studied in the same way as the spread of the disease.
Affiliation(s)
- Gonzalo Casino: Centro Cochrane Iberoamericano, Instituto de Investigación Biomédica Sant Pau, Barcelona, Spain; Departamento de Comunicación, Universidad Pompeu Fabra, Barcelona, Spain
34
Kużelewska E, Tomaszuk M. Rise of Conspiracy Theories in the Pandemic Times. Int J Semiot Law 2022; 35:2373-2389. PMID: 35910405; PMCID: PMC9325658; DOI: 10.1007/s11196-022-09910-9.
Abstract
The COVID-19 pandemic arrived as an unexpected experience affecting countries around the globe. In addition to the obvious health, economic and political effects, it triggered immense changes in the social sphere. People and institutions were forced to adjust to the new circumstances, change habits and move most or all of their activity online. In this largely virtual world, the pandemic became fertile ground for conspiracy theories that already existed but had been struggling for global attention. The aim of the paper is to present three conspiracy theories that rapidly gained popularity during the pandemic (QAnon, anti-vaccination movements and anti-5G movements) and to analyse how they developed after the pandemic was declared. In particular, the rising activity of representatives of these movements is analysed, as well as its acceleration in connection with the pandemic and the resulting influence on social and political life. Finally, the paper examines whether the rapid development of conspiracy theories within societies is related to the level of trust in government decisions. The thesis verified here is that the pandemic accelerated the development of conspiracy theories because of diminishing trust in governments operating in the most difficult period in recent history. There are a variety of reasons for belief in conspiracy theories, depending on the specific theory and the group of people it originates from. In general, conspiracies of all kinds are developed either by (1) people who actually believe them and share them with good intentions (to warn others about dangers hidden behind certain actions or institutions) or by (2) malicious individuals whose aim is to sow discord, discredit an opponent or critic, or distract attention from misconduct or a lack of competence.
35
Levine TR. Truth-default theory and the psychology of lying and deception detection. Curr Opin Psychol 2022; 47:101380. PMID: 35763893; DOI: 10.1016/j.copsyc.2022.101380.
Abstract
Truth-default theory offers an account of human deceptive communication where people are honest unless they have a motive to deceive and people passively believe others unless suspicion and doubt are actively triggered. The theory is argued to account for wide swings in vulnerability to deception in different types of situations in and out of the lab. Three moderators are advanced to account for differential vulnerability to political misinformation and disinformation. Own belief congruity, social congruence, and message repetition are argued to combine to affect the probability that implausible and refutable false information is accepted as true.
36
Neylan JH, Patel SS, Erickson TB. Strategies to Counter Disinformation for Healthcare Practitioners and Policymakers. World Med Health Policy 2022; 14:423-431. PMID: 35755311; PMCID: PMC9216217; DOI: 10.1002/wmh3.487.
Abstract
Medical disinformation has interfered with healthcare workers' ability to communicate with the general population in a wide variety of public health contexts globally. This has limited the effectiveness of evidence-based medicine and healthcare capacity. Disinformation campaigns often try to integrate or co-opt healthcare workers in their practices, which hinders effective health communication. We provide a critical overview of issues health practitioners and communicators have experienced when dealing with medical disinformation online and offline, as well as best practices for overcoming these issues when disseminating health information. This article lists disinformation techniques that have yet to be used against the medical community but need to be considered in future communication planning, as they may be highly effective. We also present broad policy recommendations and considerations designed to mitigate the effectiveness of medical disinformation campaigns.
Affiliation(s)
- Julian H. Neylan: Harvard Humanitarian Initiative, T.H. Chan Harvard School of Public Health, Cambridge, Massachusetts, USA
- Sonny S. Patel: Harvard Humanitarian Initiative, T.H. Chan Harvard School of Public Health, Cambridge, Massachusetts, USA; Department of Medicine and Health, Sydney School of Health Sciences, The University of Sydney, New South Wales, Australia
- Timothy B. Erickson: Harvard Humanitarian Initiative, T.H. Chan Harvard School of Public Health, Cambridge, Massachusetts, USA; Division of Medical Toxicology, Department of Emergency Medicine, Mass General Brigham, Harvard Medical School, Boston, Massachusetts, USA
37
Vilella S, Semeraro A, Paolotti D, Ruffo G. Measuring user engagement with low credibility media sources in a controversial online debate. EPJ Data Sci 2022; 11:29. PMID: 35602319; PMCID: PMC9108351; DOI: 10.1140/epjds/s13688-022-00342-w.
Abstract
We quantify social media user engagement with low-credibility online news media sources using a simple and intuitive methodology, which we showcase with an empirical case study of the Twitter debate on immigration in Italy. By assigning Twitter users an Untrustworthiness (U) score based on how frequently they engage with unreliable media outlets, and cross-checking it with a qualitative political annotation of the communities, we show that such information consumption is not equally distributed across Twitter users. Indeed, we identify clusters characterised by a very high presence of accounts that frequently share content from less reliable news sources. Users with high U scores are more inclined to interact with bot-like accounts, which tend to inject more unreliable content into the network and to retweet that content. Thus, our methodology applied to this real-world network provides straightforward evidence of a strong interplay between accounts that display higher bot-like activity and users focused on news from unreliable sources, and shows that this interplay influences the diffusion of such information across the network.
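An Untrustworthiness score of this kind can be sketched as the share of a user's shared links that point to outlets on a low-credibility list. The domain list and URLs below are placeholders, and the paper's exact scoring may differ; this only illustrates the frequency-of-engagement idea.

```python
from urllib.parse import urlparse

# Hypothetical low-credibility list; the study relies on curated source ratings.
LOW_CREDIBILITY = {"fakenews.example", "rumormill.example"}

def untrustworthiness(shared_urls):
    """U score: fraction of a user's shared links whose domain
    is on the low-credibility list (0 = none, 1 = all)."""
    if not shared_urls:
        return 0.0
    hits = sum(urlparse(u).netloc.lower() in LOW_CREDIBILITY
               for u in shared_urls)
    return hits / len(shared_urls)

user_links = [
    "https://fakenews.example/immigration-shock",
    "https://reuters.com/world/article",
    "https://rumormill.example/secret-plan",
    "https://bbc.co.uk/news/item",
]
print(untrustworthiness(user_links))  # 0.5
```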
Affiliation(s)
- Alfonso Semeraro: Department of Computer Science, University of Turin, Turin, Italy
- Giancarlo Ruffo: Department of Computer Science, University of Turin, Turin, Italy
38
Rastogi S, Bansal D. Disinformation detection on social media: An integrated approach. Multimed Tools Appl 2022; 81:40675-40707. PMID: 35582207; PMCID: PMC9098146; DOI: 10.1007/s11042-022-13129-y.
Abstract
The emergence of social media platforms has amplified the dissemination of false information in various forms. Social media gives rise to virtual societies by giving users freedom of expression in a democracy. Because of echo chambers on social media, social science studies play a vital role in understanding the spread of false news. To this aim, we provide a comprehensive framework adapted from several scholarly studies. The framework is capable of classifying information into various types, namely real, disinformation and satire, based on authenticity as well as intention. The process highlights the use of interdisciplinary approaches derived from fundamental theories of social science, integrating them with modern computational tools and techniques. Some of these theories claim that malicious users write fabricated content in a distinctive style to attract an audience. Style-based methods evaluate intention, i.e., whether the content is written with an intent to mislead the audience. However, writing style can be deceptive, so it is important to involve user-oriented social information to strengthen the model. The paper therefore takes an integrated approach, combining style-based and propagation-based features, thirty-one in total. The extracted features fall into ten categories: relative frequency, quantity, complexity, uncertainty, sentiment, subjectivity, diversity, informality, additional, and popularity. The features were used iteratively with supervised classifiers, and the best-correlated ones were selected using an ANOVA test. Our experimental results show that the selected features are able to distinguish real news from disinformation and satire, and that an ensemble machine learning model outperformed other models on the developed multi-labelled corpus.
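ANOVA-based feature selection of this kind ranks each feature by its one-way F statistic across the classes (real, disinformation, satire): features whose means differ strongly between classes, relative to within-class variance, score higher. A stdlib-only sketch with toy numbers, not the paper's thirty-one features:

```python
def f_statistic(groups):
    """One-way ANOVA F for a single feature: between-group variance
    over within-group variance, given per-class lists of values."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Toy feature values for three classes: "informality" separates the
# classes well, "noise" does not, so informality gets a far higher F.
informality = [[0.1, 0.2, 0.15], [0.8, 0.9, 0.85], [0.5, 0.55, 0.6]]
noise       = [[0.4, 0.6, 0.5],  [0.5, 0.4, 0.6],  [0.6, 0.5, 0.4]]
print(f_statistic(informality) > f_statistic(noise))  # True
```

Keeping the top-F features before training is the same idea as scikit-learn's `SelectKBest(f_classif)`.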
Affiliation(s)
- Shubhangi Rastogi: Department of Computer Science and Engineering, Punjab Engineering College (Deemed to be University), Chandigarh, India
- Divya Bansal: Department of Computer Science and Engineering, Punjab Engineering College (Deemed to be University), Chandigarh, India
39
Sousa-Silva R. Fighting the Fake: A Forensic Linguistic Analysis to Fake News Detection. Int J Semiot Law 2022; 35:2409-2433. PMID: 35505837; PMCID: PMC9047580; DOI: 10.1007/s11196-022-09901-w.
Abstract
Fake news has been the focus of debate, especially since the election of Donald Trump (2016), and remains a topic of concern in democratic countries worldwide, given (a) its threat to democratic systems and (b) the difficulty of detecting it. Despite the deployment of sophisticated computational systems to identify fake news, as well as the streamlining of fact-checking methods, appropriate fake news detection mechanisms have not yet been found. In fact, technological approaches are likely to be inefficient, given that fake news is based mostly on partisanship and identity politics, and not necessarily on outright deception. However, as disinformation is inherently expressed linguistically, this is privileged ground for forensic linguistic analysis. This article builds upon a forensic linguistic analysis of fake news pieces published in English and in Portuguese, collected since 2019 from acknowledged fake news outlets. The preliminary empirical analysis reveals that fake news pieces employ particular linguistic features, e.g. at the levels of typography, orthography and spelling, and morphosyntax. The systematic identification of these features, which allows mapping the linguistic resources and patterns used in those contexts, contributes to scholarship not only by enabling a streamlined development of computational detection systems, but more importantly by permitting the forensic linguistics expert to assist criminal investigations and give evidence in court.
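Typographic and orthographic cues of the kind the article points to can be operationalised as simple counts. The extractor below is a hypothetical illustration with features chosen for the example, not the author's documented feature set:

```python
import re

def style_features(text: str) -> dict:
    """Count surface cues often associated with fabricated content:
    all-caps words, runs of ! or ?, and ellipses."""
    return {
        "allcaps_words": len(re.findall(r"\b[A-Z]{3,}\b", text)),
        "punct_runs": len(re.findall(r"[!?]{2,}", text)),
        "ellipses": len(re.findall(r"\.{3}", text)),
    }

print(style_features("SHOCKING!!! They LIED to you... wake up!!"))
# {'allcaps_words': 2, 'punct_runs': 2, 'ellipses': 1}
```

Counts like these can feed either a statistical detector or, as the article argues, a qualitative forensic comparison across outlets.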
Affiliation(s)
- Rui Sousa-Silva: Faculty of Arts and Humanities, CLUP - Centre for Linguistics of the University of Porto, University of Porto, Porto, Portugal; Faculdade de Letras, Universidade do Porto, Via Panorâmica, s/n, 4150-564 Porto, Portugal
40
Nygren T, Frau-Meigs D, Corbu N, Santoveña-Casal S. Teachers' views on disinformation and media literacy supported by a tool designed for professional fact-checkers: perspectives from France, Romania, Spain and Sweden. SN Soc Sci 2022; 2:40. PMID: 35434642; DOI: 10.1007/s43545-022-00340-9.
Abstract
The current media eco-system has become more and more polluted by the various avatars of “fake news”. This buzz term has been widely used by academics, experts, teachers and ordinary people in an attempt to understand and address the phenomenon of information disorder in the new media environment. However, studies have rarely questioned what teachers, key stakeholders in the media literacy field, actually understand by “fake news”, and to what extent the new digital tools available for fact-checking are actually viable solutions for actively fighting disinformation. In this context, we conducted focus groups (N = 34 people interviewed in 4 focus groups) with teachers in four countries (France, Romania, Spain and Sweden) in order to assess their understanding of “fake news”, as well as their perception of possible measures to combat the phenomenon, with a particular focus on digital tools. The findings show that the understanding of the concept of “fake news” differs from one country to the other, but also within the same country, with one common feature across countries: intention to deceive. Additionally, respondents identified the lack of media and information literacy (MIL) in education as a major gap in combatting information disorders. Furthermore, they find that digital tools designed for professional fact-checking need to be repurposed, or accompanied by pedagogical instructions, to fit the complexity of educational practices. Our findings highlight possible solutions for MIL in education, using a combination of technocognition and transliteracy as a theoretical framework and scaffolded pedagogical design for better adoption of fact-checking techniques.
41
Rice NM, Horne BD, Luther CA, Borycz JD, Allard SL, Ruck DJ, Fitzgerald M, Manaev O, Prins BC, Taylor M, Bentley RA. Monitoring event-driven dynamics on Twitter: a case study in Belarus. SN Soc Sci 2022; 2:36. PMID: 35434643; PMCID: PMC8990676; DOI: 10.1007/s43545-022-00330-x.
Abstract
Analysts of social media differ in their emphasis on the effects of message content versus social network structure. The balance of these factors may change substantially across time. When a major event occurs, initial independent reactions may give way to more social diffusion of interpretations of the event among different communities, including those committed to disinformation. Here, we explore these dynamics through a case study analysis of the Russian-language Twitter content emerging from Belarus before and after its presidential election of August 9, 2020. From these Russian-language tweets, we extracted a set of topics that characterize the social media data and constructed networks to represent the sharing of these topics before and after the election. The case study in Belarus reveals how misinformation can be re-invigorated in discourse through the novelty of a major event. More generally, it suggests how audience networks can shift from influentials dispensing information before an event to a de-centralized sharing of information after it. Supplementary Information: the online version contains supplementary material available at 10.1007/s43545-022-00330-x.
Affiliation(s)
- Natalie M. Rice
- Center for Information and Communication Studies, University of Tennessee, Knoxville, TN 37996 USA
- Benjamin D. Horne
- School of Information Sciences, University of Tennessee, Knoxville, TN 37996 USA
- Catherine A. Luther
- School of Journalism and Electronic Media, University of Tennessee, Knoxville, TN 37996 USA
- Joshua D. Borycz
- Stevenson Science and Engineering Library, Vanderbilt University, Nashville, TN 37203 USA
- Suzie L. Allard
- School of Information Sciences, University of Tennessee, Knoxville, TN 37996 USA
- Damian J. Ruck
- School of Information Sciences, University of Tennessee, Knoxville, TN 37996 USA
- Michael Fitzgerald
- Political Science Department, University of Tennessee, Knoxville, TN 37996 USA
- Oleg Manaev
- Center for Information and Communication Studies, University of Tennessee, Knoxville, TN 37996 USA
- Brandon C. Prins
- Political Science Department, University of Tennessee, Knoxville, TN 37996 USA
- Maureen Taylor
- School of Communication, University of Technology Sydney, Sydney, NSW Australia
42
Brand F, Dendler L, Fiack S, Schulze A, Böl GF. [Risk communication of policy advising scientific organisations: a thematic outline using the example of the German Federal Institute for Risk Assessment]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2022; 65:599-607. [PMID: 35380241 PMCID: PMC8980784 DOI: 10.1007/s00103-022-03520-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Accepted: 03/03/2022] [Indexed: 12/04/2022]
Abstract
Regulatory science organisations such as the German Federal Institute for Risk Assessment (BfR) face diverse challenges in their science-based risk communication. On the one hand, communicating health risks is becoming ever more complex and demanding, so questions of consumers' health literacy and of risk communication tailored to specific target groups are gaining importance. On the other hand, the bodies of knowledge held by regulatory science organisations are increasingly exposed to politicisation and public criticism. In this context, questions about the objectivity and trustworthiness of expert reports, risk assessments, and opinions, as well as about the legitimation and reputation of regulatory science organisations, become relevant. This is further intensified by the emergence of new actors on social media who produce and publish their own information and communication materials. Mis-, dis-, and malinformation spread in this context pose an additional challenge, one closely linked to questions of adequate communication about health risks and of stabilising legitimacy, reputation, and trustworthiness. The article discusses various possible solutions, including the optimisation and visual presentation of health information, the enabling of societal participation, and the embedding of these measures in strategic stakeholder and reputation management. The contribution closes with a call for more open discussion of the inherent dilemmas.
Affiliation(s)
- Fabian Brand
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589, Berlin, Deutschland.
- Leonie Dendler
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589, Berlin, Deutschland
- Suzan Fiack
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589, Berlin, Deutschland
- Annett Schulze
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589, Berlin, Deutschland
- Gaby-Fleur Böl
- Abteilung Risikokommunikation, Bundesinstitut für Risikobewertung (BfR), Max-Dohrn-Str. 8-10, 10589, Berlin, Deutschland
43
Warner EL, Barbati JL, Duncan KL, Yan K, Rains SA. Vaccine misinformation types and properties in Russian troll tweets. Vaccine 2022; 40:953-960. [PMID: 35034832 DOI: 10.1016/j.vaccine.2021.12.040] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 12/06/2021] [Accepted: 12/14/2021] [Indexed: 11/24/2022]
Abstract
OBJECTIVE To identify the content of and engagement with vaccine misinformation from Russian trolls on Twitter. METHODS Troll tweets (N = 1959) obtained from Twitter in 2020 were coded for vaccine misinformation (α = 0.77-0.97). Descriptive, bivariate, and multivariable negative binomial regressions were applied to estimate robust incidence rate ratios (IRRs) and 95% confidence intervals (95% CI) of vaccine misinformation associations with tweet characteristics and engagement (i.e., replies, likes, retweets). RESULTS Misinformation about personal dangers (43.0%), civil liberty violations (20.2%), and vaccine conspiracies (18.6%) was common. More misinformation tweets used anti-vaccination language (97.3% vs. 13.2%) and referenced symptoms (37.4% vs. 0.5%) than non-misinformation tweets. Fewer misinformation tweets referenced credible sources (14.0% vs. 19.5%), were formatted as headlines (39.2% vs. 77.0%), and mentioned specific vaccines (11.3% vs. 36.1%, all p < 0.01) than non-misinformation tweets. Personal-dangers misinformation had an 83% lower rate of retweets (95% CI 0.04-0.66). Civil-liberties misinformation had a significantly higher rate of replies (IRR: 7.65, 95% CI 1.06-55.46) but lower overall engagement (IRR: 0.38, 95% CI 0.16-0.88) than non-misinformation tweets. CONCLUSIONS The strategies used to promote vaccine misinformation provide insight into the nature of vaccine misinformation online and public responses to it. Our findings suggest a need to explore influences on whether users reject or entertain online vaccine misinformation.
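The headline statistic here, the incidence rate ratio, can be illustrated with a minimal unadjusted sketch. Note the paper itself fits multivariable negative binomial regressions; this is only the textbook IRR with a Wald-type CI on the log scale, and the counts below are hypothetical.

```python
import math

def irr(events_a, exposure_a, events_b, exposure_b):
    """Unadjusted incidence rate ratio of group A vs. group B,
    with a Wald-type 95% CI computed on the log scale
    (SE of log IRR = sqrt(1/x_a + 1/x_b))."""
    ratio = (events_a / exposure_a) / (events_b / exposure_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(ratio) - 1.96 * se)
    hi = math.exp(math.log(ratio) + 1.96 * se)
    return ratio, lo, hi

# Hypothetical counts: 10 retweets across 100 misinformation tweets
# vs. 40 retweets across 80 non-misinformation tweets.
ratio, lo, hi = irr(10, 100, 40, 80)
print(round(ratio, 2))  # → 0.2, i.e. an 80% lower retweet rate
```

An IRR of 0.17 with CI 0.04-0.66, as reported for personal-dangers misinformation, reads the same way: an 83% lower retweet rate, with a CI excluding 1.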
Affiliation(s)
- Echo L Warner
- University of Arizona Cancer Center, 1515 N Campbell Ave, Tucson, AZ 85724, USA; College of Nursing, University of Arizona, 1350 S Martin Ave. Tucson, AZ 85721, USA.
- Juliana L Barbati
- Department of Communication, College of Social & Behavioral Sciences, University of Arizona, 1103 E University Blvd, Tucson, AZ 85721, USA
- Kaylin L Duncan
- Department of Communication, College of Social & Behavioral Sciences, University of Arizona, 1103 E University Blvd, Tucson, AZ 85721, USA
- Kun Yan
- Department of Communication, College of Social & Behavioral Sciences, University of Arizona, 1103 E University Blvd, Tucson, AZ 85721, USA
- Stephen A Rains
- Department of Communication, College of Social & Behavioral Sciences, University of Arizona, 1103 E University Blvd, Tucson, AZ 85721, USA
44
Reed G, Hendlin Y, Desikan A, MacKinney T, Berman E, Goldman GT. The disinformation playbook: how industry manipulates the science-policy process-and how to restore scientific integrity. J Public Health Policy 2021; 42:622-634. [PMID: 34811464 PMCID: PMC8651604 DOI: 10.1057/s41271-021-00318-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/09/2021] [Indexed: 11/21/2022]
Abstract
For decades, corporate undermining of scientific consensus has eroded the scientific process worldwide. Guardrails for protecting science-informed processes, from peer review to regulatory decision making, have suffered sustained attacks, damaging public trust in the scientific enterprise and its aim to serve the public good. Government efforts to address corporate attacks have been inadequate. Researchers have cataloged corporate malfeasance that harms people's health across diverse industries. Well-known cases, like the tobacco industry's efforts to downplay the dangers of smoking, are representative of transnational industries rather than unique. This contribution schematizes the industry tactics used to distort, delay, or distract the public from instituting measures that improve health: tactics that comprise the "disinformation playbook." Using a United States policy lens, we outline steps the scientific community should take to shield science from corporate interference, through individual actions (by scientists, peer reviewers, and editors) and collective initiatives (by research institutions, grant organizations, professional associations, and regulatory agencies).
Affiliation(s)
- Genna Reed
- Union of Concerned Scientists, 1825 K Street NW, Ste 800, Washington, DC, 20006, USA.
- Yogi Hendlin
- Dynamics of Inclusive Prosperity Initiative, Erasmus School of Philosophy, Erasmus University, Rotterdam, The Netherlands
- Environmental Health Initiative, University of California San Francisco, San Francisco, CA, USA
- Anita Desikan
- Union of Concerned Scientists, 1825 K Street NW, Ste 800, Washington, DC, 20006, USA
- Taryn MacKinney
- Union of Concerned Scientists, 1825 K Street NW, Ste 800, Washington, DC, 20006, USA
- Emily Berman
- Union of Concerned Scientists, 1825 K Street NW, Ste 800, Washington, DC, 20006, USA
- Gretchen T Goldman
- Union of Concerned Scientists, 1825 K Street NW, Ste 800, Washington, DC, 20006, USA
45
Darius P, Urquhart M. Disinformed social movements: A large-scale mapping of conspiracy narratives as online harms during the COVID-19 pandemic. ACTA ACUST UNITED AC 2021; 26:100174. [PMID: 34642647 PMCID: PMC8495371 DOI: 10.1016/j.osnem.2021.100174] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 09/08/2021] [Accepted: 09/10/2021] [Indexed: 11/25/2022]
Abstract
The COVID-19 pandemic caused high uncertainty regarding appropriate treatments and public policy reactions. This uncertainty provided a perfect breeding ground for spreading conspiratorial anti-science narratives based on disinformation. Disinformation on public health may increase vaccine hesitancy, which is counted among the ten most severe threats to global public health by the United Nations. We understand conspiracy narratives as a combination of disinformation, misinformation, and rumour that is especially effective in drawing people to believe in post-factual claims and form disinformed social movements. Conspiracy narratives provide a pseudo-epistemic background for disinformed social movements that allows for self-identification and cognitive certainty in a rapidly changing information environment. This study monitors two established conspiracy narratives and their communities on Twitter, the anti-vaccination and anti-5G communities, before and during the first UK lockdown. The study finds that, despite content moderation efforts by Twitter, conspiracy groups were able to expand their networks and influence broader public discourses on Twitter, such as #Lockdown in the United Kingdom.
Affiliation(s)
- Philipp Darius
- Centre for Digital Governance, Hertie School, Berlin, Germany
46
Jemielniak D, Krempovych Y. An analysis of AstraZeneca COVID-19 vaccine misinformation and fear mongering on Twitter. Public Health 2021; 200:4-6. [PMID: 34628307 PMCID: PMC8494632 DOI: 10.1016/j.puhe.2021.08.019] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Revised: 07/27/2021] [Accepted: 08/23/2021] [Indexed: 02/07/2023]
Abstract
Objectives The objective of this study was to analyse the media discourse about the AstraZeneca COVID-19 vaccine on Twitter. Study design The study combines data scraping, media analysis, social network analysis, and Botometer-based bot detection. Methods We collected 221,922 tweets containing ‘#AstraZeneca’ from 1 January 2021 to 22 March 2021. From the 50,080 English-language tweets, we analysed the linked media sources and conducted a network detection study. Results We found that the most frequently retweeted tweets were full of negative information and in many cases came from media sources that are well known for misinformation. Our analysis identified large coordination networks involved in political astroturfing and vaccine diplomacy in South Asia, but also vaccine advocacy networks associated with European Commission employees. Conclusions The results of this study show that Twitter discourse about #AstraZeneca is filled with misinformation and bad press, which may be distributed not only organically by anti-vaxxer activists but also systematically by professional sources.
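The linked-media analysis starts from a simple tallying step that can be sketched as follows. This is not the authors' pipeline; the tweets and domain names are hypothetical, invented for the example.

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def linked_domains(tweets):
    """Tally which web domains are linked from a set of tweet texts -
    the first step toward asking whether the most-shared sources are
    outlets known for misinformation."""
    counts = Counter()
    for text in tweets:
        for url in URL_RE.findall(text):
            counts[urlparse(url).netloc.lower()] += 1
    return counts

# Hypothetical tweets mentioning #AstraZeneca
tweets = [
    "#AstraZeneca halted? https://example-tabloid.com/story1",
    "Read this #AstraZeneca thread https://example-tabloid.com/story2",
    "Official statement https://example-agency.eu/news",
]
print(linked_domains(tweets).most_common(1))
# → [('example-tabloid.com', 2)]
```

The resulting domain counts can then be cross-referenced against a media-credibility list to score the discourse, as the media analysis in the abstract does.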
Affiliation(s)
- D Jemielniak
- Kozminski University, Management in Networked and Digital Societies (MINDS), Warsaw, Poland; Harvard University, Berkman-Klein Center for Internet and Society, Cambridge, MA, USA.
- Y Krempovych
- Kozminski University, Management in Networked and Digital Societies (MINDS), Warsaw, Poland.
47
Mattei M, Caldarelli G, Squartini T, Saracco F. Italian Twitter semantic network during the Covid-19 epidemic. EPJ Data Sci 2021; 10:47. [PMID: 34518792 PMCID: PMC8427161 DOI: 10.1140/epjds/s13688-021-00301-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 08/19/2021] [Indexed: 05/16/2023]
Abstract
The Covid-19 pandemic has had a deep impact on the lives of the entire world population, prompting a broad societal debate. As in other contexts, the debate has been the subject of several d/misinformation campaigns; in a quite unprecedented fashion, however, the presence of false information has put public health seriously at risk. In this sense, detecting the presence of malicious narratives and identifying the kinds of users that are more prone to spread them is the first step toward limiting their persistence. In the present paper we analyse the semantic network observed on Twitter during the first Italian lockdown (induced by the hashtags contained in approximately 1.5 million tweets published between the 23rd of March 2020 and the 23rd of April 2020) and study the extent to which various discursive communities are exposed to d/misinformation arguments. As observed in other studies, the recovered discursive communities largely overlap with traditional political parties, even if the debated topics concern different facets of the management of the pandemic. Although the themes directly related to d/misinformation are a minority of those discussed within our semantic networks, their popularity is unevenly distributed among the various discursive communities.
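A hashtag-induced semantic network of the kind analysed above can be built minimally as a co-occurrence graph: hashtags appearing in the same tweet share an edge, weighted by how often they co-occur. This sketch is illustrative only; the tweets are invented and the authors' actual construction is more elaborate.

```python
from collections import defaultdict
from itertools import combinations

def semantic_network(tweets):
    """Weighted hashtag co-occurrence network: two hashtags get an
    edge each time they appear together in the same tweet."""
    edges = defaultdict(int)
    for text in tweets:
        tags = sorted({w.lower() for w in text.split() if w.startswith("#")})
        for a, b in combinations(tags, 2):
            edges[(a, b)] += 1
    return dict(edges)

tweets = [  # hypothetical lockdown-era tweets
    "#lockdown misure del governo #covid19",
    "#covid19 aggiornamento #lockdown",
    "#covid19 dati di oggi",
]
print(semantic_network(tweets))
# → {('#covid19', '#lockdown'): 2}
```

Community detection on such a graph is what lets discursive communities, and their uneven exposure to d/misinformation themes, be identified.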
Affiliation(s)
- Mattia Mattei
- University of Salento, P.zza Tancredi 7, 73100 Lecce, Italy
- IMT School for Advanced Studies, P.zza S. Ponziano 6, 55100 Lucca, Italy
- Guido Caldarelli
- “Ca’ Foscari” University of Venice, Dorsoduro 3246, 30123 Venice, Italy
- Tiziano Squartini
- IMT School for Advanced Studies, P.zza S. Ponziano 6, 55100 Lucca, Italy
- Fabio Saracco
- IMT School for Advanced Studies, P.zza S. Ponziano 6, 55100 Lucca, Italy
- Institute for Applied Computing “Mauro Picone” (IAC), National Research Council, via dei Taurini 19, 00185 Rome, Italy
48
Soskolne CL, Kramer S, Ramos-Bonilla JP, Mandrioli D, Sass J, Gochfeld M, Cranor CF, Advani S, Bero LA. Toolkit for detecting misused epidemiological methods. Environ Health 2021; 20:90. [PMID: 34412643 PMCID: PMC8375462 DOI: 10.1186/s12940-021-00771-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 07/09/2021] [Indexed: 05/08/2023]
Abstract
BACKGROUND Critical knowledge of what we know about health and disease, risk factors, causation, prevention, and treatment derives from epidemiology. Unfortunately, its methods and language can be misused and improperly applied. A repertoire of methods, techniques, arguments, and tactics is used by some people to manipulate science, usually in the service of powerful interests, and particularly those with a financial stake related to toxic agents. Such interests work to foment uncertainty, cast doubt, and mislead decision makers by seeding confusion about cause and effect relating to population health. We have compiled a toolkit of the methods used by those whose interests are not aligned with the public health sciences. Professional epidemiologists, as well as those who rely on their work, will thereby be more readily equipped to detect bias and flaws resulting from financial conflict of interest or improper study design, data collection, analysis, or interpretation, bringing greater clarity not only to the advancement of knowledge but, more immediately, to policy debates. METHODS The summary of techniques used to manipulate epidemiological findings, compiled as part of the 2020 Position Statement of the International Network for Epidemiology in Policy (INEP) entitled Conflict-of-Interest and Disclosure in Epidemiology, has been expanded and further elucidated in this commentary. RESULTS Some level of uncertainty is inherent in science. However, corrupted and incomplete literature contributes to confusion, foments further uncertainty, and casts doubt about the evidence under consideration. Confusion delays scientific advancement and leaves policymakers unable to make changes that, if enacted and supported by the body of valid evidence, would protect, maintain, and improve public health. An accessible toolkit is provided that brings attention to the misuse of the methods of epidemiology. Its usefulness is as a compendium of what those trained in epidemiology, as well as those reviewing epidemiological studies, should identify methodologically when assessing the transparency and validity of any epidemiological inquiry, evaluation, or argument. The problems resulting from conflicting financial interests and the misuse of scientific methods, together with the strategies that can be used to safeguard public health against them, apply not only to epidemiologists but also to other public health professionals. CONCLUSIONS This novel toolkit is for use in protecting the public. It is provided to assist public health professionals as gatekeepers of their respective specialty and subspecialty disciplines, whose mission includes protecting, maintaining, and improving the public's health. It is intended to serve our roles as educators, reviewers, and researchers.
Affiliation(s)
- Colin L Soskolne
- School of Public Health, University of Alberta, Edmonton, AB, Canada.
- Shira Kramer
- Epidemiology International, Hunt Valley, MD, USA
- Daniele Mandrioli
- Cesare Maltoni Cancer Research Centre, Ramazzini Institute, Bologna, Italy
- Jennifer Sass
- Natural Resources Defense Council, Washington, DC, USA
- George Washington University, Washington, DC, USA
- Michael Gochfeld
- Environmental and Occupational Health Sciences Institute, Rutgers Biomedical and Health Sciences, Newark, NJ, USA
- Carl F Cranor
- Departments of Philosophy and Environmental Toxicology, University of California, Riverside, CA, USA
- Shailesh Advani
- Terasaki Institute of Biomedical Innovation, Los Angeles, CA, USA
- Georgetown University School of Medicine, Washington, DC, USA
- Lisa A Bero
- Center for Bioethics and Humanities, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
49
Leschzyk DK. Infodemic in Germany and Brazil: How the AfD and Jair Bolsonaro are Sowing Distrust During the Corona Pandemic. Z Literaturwissenschaft Linguist 2021; 51:477-503. [PMID: 38624953 PMCID: PMC8350913 DOI: 10.1007/s41244-021-00210-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Accepted: 06/29/2021] [Indexed: 11/24/2022]
Abstract
Even though the term infodemic - a blend of the words information and pandemic - emerged only in 2020, it addresses a question that has always been crucial in communication: how is credibility established, or undermined? This article deals with rhetorical techniques applied by the Brazilian President Jair Messias Bolsonaro (2019-) and high-ranking politicians of the German party AfD (Alternative für Deutschland) during the COVID-19 pandemic. The analysis is based on tweets published through their official accounts (@jairbolsonaro, @AfD) during the first year of the pandemic. While Bolsonaro, who was in charge during the crisis, attacks the media, claiming that they spread panic and false information, the AfD, an opposition party, concentrates its criticism on the federal and state governments. The key concept of ›credibility‹ as discussed by Ortwin Renn (2019) and basic claims that appear in Aristotle's »Rhetoric«, dating from the 4th century BC, form the theoretical basis of this study. Methodologically, the analysis follows the Discourse-Historical Approach (Reisigl/Wodak 2001), focusing on discursive strategies of negative other-representation, and a framework for studies on the language of legitimation and delegitimation developed by Theo van Leeuwen (1996).
Affiliation(s)
- Dinah K. Leschzyk
- Institute for Romance Studies, Justus Liebig University Giessen, Gießen, Germany
50
Abstract
Trust has become a first-order concept in AI, urging experts to call for measures ensuring that AI is ‘trustworthy’. The danger of untrustworthy AI often culminates with Deepfake, perceived as an unprecedented threat to democracies and online trust through its potential to back sophisticated disinformation campaigns. Little work has, however, been dedicated to examining the concept of trust itself, which undermines the arguments supporting such initiatives. By investigating the concept of trust and its evolution, this paper ultimately defends a non-intuitive position: Deepfake is not only incapable of contributing to such an end, but also offers a unique opportunity to transition towards a framework of social trust better suited to the challenges entailed by the digital age. Discussing the dilemmas traditional societies had to overcome to establish social trust, and the evolution of their solutions across modernity, I come to reject rational choice theories as models of trust and to distinguish between an ‘instrumental rationality’ and a ‘social rationality’. This allows me to refute the argument that holds Deepfake to be a threat to online trust. In contrast, I argue that Deepfake may even support a transition from instrumental to social rationality, better suited for making decisions in the digital age.
Affiliation(s)
- Hubert Etienne
- Facebook AI Research, Paris, France
- Department of Philosophy, Ecole Normale Supérieure, Paris, France
- Sorbonne Université, LIP6, Paris, France