51
Hutmacher F, Appel M, Schätzlein B, Mengelkamp C. Fluid intelligence but not need for cognition is associated with attitude change in response to the correction of misinformation. Cogn Res Princ Implic 2024; 9:64. PMID: 39292332; PMCID: PMC11411052; DOI: 10.1186/s41235-024-00595-1.
Abstract
Misinformation can profoundly impact an individual's attitudes, sometimes even after the misinformation has been corrected. In two preregistered experiments (N1 = 355, N2 = 725), we investigated whether individual differences in the ability and motivation to process information thoroughly influence the impact of misinformation in a news media context. More specifically, we tested whether fluid intelligence and need for cognition predicted the degree to which individuals who were exposed to misinformation changed their attitudes after receiving a correction message. We found consistent evidence that higher fluid intelligence is associated with a more pronounced correction effect, while need for cognition did not have a significant effect. This suggests that integrating a correction message with a previously encountered piece of misinformation can be challenging and that correction messages consequently need to be communicated in a way that is accessible to a broad audience.
Affiliation(s)
- Fabian Hutmacher
- Human-Computer-Media-Institute, Psychology of Communication and New Media, Julius-Maximilians-University Würzburg, Oswald-Külpe-Weg 82, 97074, Würzburg, Germany.
- Markus Appel
- Human-Computer-Media-Institute, Psychology of Communication and New Media, Julius-Maximilians-University Würzburg, Oswald-Külpe-Weg 82, 97074, Würzburg, Germany
- Benjamin Schätzlein
- Human-Computer-Media-Institute, Psychology of Communication and New Media, Julius-Maximilians-University Würzburg, Oswald-Külpe-Weg 82, 97074, Würzburg, Germany
- Christoph Mengelkamp
- Human-Computer-Media-Institute, Psychology of Communication and New Media, Julius-Maximilians-University Würzburg, Oswald-Külpe-Weg 82, 97074, Würzburg, Germany
52
Nguyen NTH, Willcock S, Hassan LM. Communications enhance sustainable intentions despite other ongoing crises. Sustainability Science 2024; 19:1997-2012. PMID: 39526227; PMCID: PMC11543749; DOI: 10.1007/s11625-024-01556-9.
Abstract
There is an ongoing trend toward more frequent and multiple crises. While there is a clear need for behaviors to become more sustainable to address the climate crisis, how to achieve this against the backdrop of other crises is unknown. Using a sample of 18,805 participants from the UK, we performed a survey experiment to investigate if communication messages provide a useful tool in nudging intentions toward improved sustainability in the context of the COVID-19 pandemic. We found that, despite the ongoing COVID-19 crisis, media messaging resulted in increases in sustainability-related intentions for all our communication messaging conditions. Specifically, after our communication was presented, (i) almost 80% of people who were not currently recycling their surgical masks reported their intention to do so; there was a > 70% increase in both (ii) the number of people likely to pick up face mask litter and (iii) the number of people willing to disinfect and reuse their filtering facepiece (FFP) masks 4-6 times, while (iv) there was a 165% increase in those who would wash cloth masks at 60 °C. Our results highlight that communication messaging can play a useful role in minimizing the trade-offs between multiple crises, as well as maximizing any synergies. To support this, decision-makers and practitioners should encourage the delivery of sustainability advice via multiple sources and across different types of media, while taking steps to address potential misinformation.
Affiliation(s)
- Ngoc T. H. Nguyen
- Bangor Business School, Bangor University, Bangor, UK
- Lincoln International Business School, University of Lincoln, Lincoln, UK
- School of Tourism, University of Economics Ho Chi Minh City, Ho Chi Minh City, Vietnam
- Simon Willcock
- School of Environmental and Natural Sciences, Bangor University, Bangor, UK
- Net-Zero and Resilient Farming, Rothamsted Research, Harpenden, UK
- Louise M. Hassan
- Birmingham Business School, University of Birmingham, Birmingham, UK
53
Bruns H, Dessart FJ, Krawczyk M, Lewandowsky S, Pantazi M, Pennycook G, Schmid P, Smillie L. Investigating the role of source and source trust in prebunks and debunks of misinformation in online experiments across four EU countries. Sci Rep 2024; 14:20723. PMID: 39237648; PMCID: PMC11377563; DOI: 10.1038/s41598-024-71599-6.
Abstract
Misinformation surrounding crises poses a significant challenge for public institutions. Understanding the relative effectiveness of different types of interventions to counter misinformation, and which segments of the population are most and least receptive to them, is crucial. We conducted a preregistered online experiment involving 5228 participants from Germany, Greece, Ireland, and Poland. Participants were exposed to misinformation on climate change or COVID-19. In addition, they were pre-emptively exposed to a prebunk, warning them of commonly used misleading strategies, before encountering the misinformation, or were exposed to a debunking intervention afterwards. The source of the intervention (i.e. the European Commission) was either revealed or not. The findings show that both interventions change four variables reflecting vulnerability to misinformation in the expected direction in almost all cases, with debunks being slightly more effective than prebunks. Revealing the source of the interventions did not significantly impact their overall effectiveness. One case of undesirable effect heterogeneity was observed: debunks with revealed sources were less effective in decreasing the credibility of misinformation for people with low levels of trust in the European Union (as elicited in a post-experimental questionnaire). While our results mostly suggest that the European Commission, and possibly other public institutions, can confidently debunk and prebunk misinformation regardless of the trust level of the recipients, further evidence on this is needed.
Affiliation(s)
- Hendrik Bruns
- European Commission, Joint Research Centre (JRC), Rue du Champ de Mars 21, 1050, Brussels, Belgium.
- Michał Krawczyk
- European Commission, Joint Research Centre (JRC), Rue du Champ de Mars 21, 1050, Brussels, Belgium
- Stephan Lewandowsky
- PRODEMINFO, University of Potsdam, Potsdam, Germany
- School of Psychological Science, University of Bristol, Bristol, UK
- Myrto Pantazi
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Philipp Schmid
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Laura Smillie
- European Commission, Joint Research Centre (JRC), Rue du Champ de Mars 21, 1050, Brussels, Belgium
54
Groh M, Sankaranarayanan A, Singh N, Kim DY, Lippman A, Picard R. Human detection of political speech deepfakes across transcripts, audio, and video. Nat Commun 2024; 15:7629. PMID: 39223110; PMCID: PMC11368926; DOI: 10.1038/s41467-024-51998-z.
Abstract
Recent advances in technology for hyper-realistic visual and audio effects provoke the concern that deepfake videos of political speeches will soon be indistinguishable from authentic video. We conduct 5 pre-registered randomized experiments with N = 2215 participants to evaluate how accurately humans distinguish real political speeches from fabrications across base rates of misinformation, audio sources, question framings with and without priming, and media modalities. We do not find that base rates of misinformation have statistically significant effects on discernment. We find that deepfakes with audio produced by state-of-the-art text-to-speech algorithms are harder to discern than the same deepfakes with voice actor audio. Moreover, across all experiments and question framings, we find that audio and visual information enables more accurate discernment than text alone: human discernment relies more on how something is said (the audio-visual cues) than on what is said (the speech content).
Affiliation(s)
- Matthew Groh
- Kellogg School of Management, Northwestern University, Evanston, IL, USA.
- Aruna Sankaranarayanan
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Nikhil Singh
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dong Young Kim
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Andrew Lippman
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rosalind Picard
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
55
Swirsky LT, Spaniol J. Consequences of curiosity for recognition memory in younger and older adults. Psychon Bull Rev 2024; 31:1527-1535. PMID: 38097888; DOI: 10.3758/s13423-023-02414-y.
Abstract
Older adults are more prone to false recognition than younger adults, particularly when new information is semantically related to old information. Curiosity, which guides information-seeking behavior and has beneficial effects on memory across the life span, may offer protection against false recognition, but this hypothesis has not been tested experimentally to date. The current study investigated the effect of curiosity on correct and false recognition in younger and older adults (total N = 102) using a trivia paradigm. On Day 1 of the study, participants encoded trivia questions and answers while rating their curiosity levels. On Day 2, participants completed a surprise old/new recognition test in which they saw the same trivia questions. Half of the questions were paired with old (correct) answers, and half were paired with new (incorrect) answers. New answers were either semantically related or unrelated to correct answers. For both age groups, curiosity at encoding was positively associated with correct recognition. For older adults, semantically related lures produced more false recognition than unrelated lures. However, this effect was mitigated by curiosity, such that older adults were less likely to endorse semantically related lures for high- versus low-curiosity questions. Overall, these results extend prior findings of curiosity-related memory benefits to the domain of recognition memory, and they provide novel evidence that curiosity may protect against false memory formation in older adults.
Affiliation(s)
- Liyana T Swirsky
- Department of Psychology, Toronto Metropolitan University, 350 Victoria Street, Toronto, ON, M5B 2K3, Canada.
- Julia Spaniol
- Department of Psychology, Toronto Metropolitan University, 350 Victoria Street, Toronto, ON, M5B 2K3, Canada
56
Bower AH, Han N, Soni A, Eckstein MP, Steyvers M. How experts and novices judge other people's knowledgeability from language use. Psychon Bull Rev 2024; 31:1627-1637. PMID: 38177890; PMCID: PMC11358192; DOI: 10.3758/s13423-023-02433-9.
Abstract
How accurate are people in judging someone else's knowledge based on their language use, and do more knowledgeable people use different cues to make these judgments? We address this by recruiting a group of participants ("informants") to answer general knowledge questions and describe various images belonging to different categories (e.g., cartoons, basketball). A second group of participants ("evaluators") also answer general knowledge questions and decide who is more knowledgeable within pairs of informants, based on these descriptions. Evaluators perform above chance at identifying the most knowledgeable informants (65% with only one description available). The less knowledgeable evaluators base their decisions on the number of specific statements, regardless of whether the statements are true or false. The more knowledgeable evaluators treat true and false statements differently and penalize the knowledge they attribute to informants who produce specific yet false statements. Our findings demonstrate the power of a few words when assessing others' knowledge and have implications for how misinformation is processed differently between experts and novices.
Affiliation(s)
- Alexander H Bower
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Nicole Han
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, CA, USA
- Ansh Soni
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
- Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA, USA
- Department of Computer Science, University of California, Santa Barbara, CA, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, CA, USA
- Mark Steyvers
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
57
Ghezae I, Jordan JJ, Gainsburg IB, Mosleh M, Pennycook G, Willer R, Rand DG. Partisans neither expect nor receive reputational rewards for sharing falsehoods over truth online. PNAS Nexus 2024; 3:pgae287. PMID: 39192847; PMCID: PMC11348091; DOI: 10.1093/pnasnexus/pgae287.
Abstract
A frequently invoked explanation for the sharing of false over true political information is that partisans are motivated by their reputations. In particular, it is often argued that by indiscriminately sharing news that is favorable to one's political party, regardless of whether it is true (or perhaps especially when it is not), partisans can signal loyalty to their group and improve their reputations in the eyes of their online networks. Across three survey studies (total N = 3,038), and an analysis of over 26,000 tweets, we explored these hypotheses by measuring the reputational benefits that people anticipate and receive from sharing different content online. In the survey studies, we showed participants actual news headlines that varied in (i) veracity and (ii) favorability to their preferred political party. Across all three studies, participants anticipated that sharing true news would bring more reputational benefits than sharing false news. Critically, while participants also expected greater reputational benefits for sharing news favorable to their party, the perceived reputational value of veracity was no smaller for more favorable headlines. We found a similar pattern when analyzing engagement on Twitter: among headlines that were politically favorable to a user's preferred party, true headlines elicited more approval than false headlines.
Affiliation(s)
- Isaias Ghezae
- Department of Sociology, Stanford University, Stanford, CA 94305, USA
- Jillian J Jordan
- Negotiation, Organizations and Markets Unit, Harvard Business School, Boston, MA 02163, USA
- Izzy B Gainsburg
- Department of Sociology, Stanford University, Stanford, CA 94305, USA
- Mohsen Mosleh
- Management Department, University of Exeter Business School, Exeter EX4 4PU, UK
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Gordon Pennycook
- Department of Psychology, Cornell University, Ithaca, NY 14850, USA
- Robb Willer
- Department of Sociology, Stanford University, Stanford, CA 94305, USA
- David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
58
Anderl C, Klein SH, Sarigül B, Schneider FM, Han J, Fiedler PL, Utz S. Conversational presentation mode increases credibility judgements during information search with ChatGPT. Sci Rep 2024; 14:17127. PMID: 39054335; PMCID: PMC11272919; DOI: 10.1038/s41598-024-67829-6.
Abstract
People increasingly use large language model (LLM)-based conversational agents to obtain information. However, the information these models provide is not always factually accurate. Thus, it is critical to understand what helps users adequately assess the credibility of the provided information. Here, we report the results of two preregistered experiments in which participants rated the credibility of accurate versus partially inaccurate information ostensibly provided by a dynamic text-based LLM-powered agent, a voice-based agent, or a static text-based online encyclopedia. We found that people were better at detecting inaccuracies when identical information was provided as static text compared to both types of conversational agents, regardless of whether information search applications were branded (ChatGPT, Alexa, and Wikipedia) or unbranded. Mediation analysis overall corroborated the interpretation that a conversational nature poses a threat to adequate credibility judgments. Our research highlights the importance of presentation mode when dealing with misinformation.
Affiliation(s)
- Christine Anderl
- Leibniz-Institut für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany
- Stefanie H Klein
- Leibniz-Institut für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany
- Büsra Sarigül
- Leibniz-Institut für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany
- Frank M Schneider
- Leibniz-Institut für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany
- University of Amsterdam, Amsterdam, The Netherlands
- Junyi Han
- Leibniz-Institut für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany
- Paul L Fiedler
- Leibniz-Institut für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany
- Eberhard Karls Universität Tübingen, Tübingen, Germany
- Sonja Utz
- Leibniz-Institut für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany
- Eberhard Karls Universität Tübingen, Tübingen, Germany
59
Albarracin D, Oyserman D, Schwarz N. Health Communication and Behavioral Change During the COVID-19 Pandemic. Perspectives on Psychological Science 2024; 19:612-623. PMID: 38319808; PMCID: PMC11295396; DOI: 10.1177/17456916231215272.
Abstract
The COVID-19 pandemic challenged the public health system to respond to an emerging, difficult-to-understand pathogen through demanding behaviors, including staying at home, masking for long periods, and vaccinating multiple times. We discuss key challenges of the pandemic health communication efforts deployed in the United States from 2020 to 2022 and identify research priorities. One priority is communicating about uncertainty in ways that prepare the public for disagreement and likely changes in recommendations as scientific understanding advances: How can changes in understanding and recommendations foster a sense that "science works as intended" rather than "the experts are clueless" and prevent creating a void to be filled by misinformation? A second priority concerns creating a culturally fluent framework for asking people to engage in difficult and novel actions: How can health messages foster the perception that difficulties of behavior change signal that the change is important rather than that the change "is not for people like me?" A third priority entails a shift from communication strategies that focus on knowledge and attitudes to interventions that focus on norms, policy, communication about policy, and channel factors that impair behavior change: How can we move beyond educating and correcting misinformation to achieving desired actions?
Affiliation(s)
- Dolores Albarracin
- Department of Psychology, School of Arts and Sciences, Annenberg Public Policy Center, Annenberg School for Communication, Department of Family and Community Health, Department of Health Care Management, University of Pennsylvania
60
Farooq A, Adlam A, Rutland A. Rejecting ingroup loyalty for the truth: Children's and adolescents' evaluations of deviant peers within a misinformation intergroup context. J Exp Child Psychol 2024; 243:105923. PMID: 38593709; DOI: 10.1016/j.jecp.2024.105923.
Abstract
Typically, children and adolescents dislike peers who deviate from their peer group's norm, preferring normative peers who are loyal to the peer ingroup. Yet children and adolescents also consider whether the behavior displayed by a deviant peer aligns with generic societally valued norms when evaluating peers within intergroup contexts. In an age where misinformation is rampant online, seeking the truth exemplifies a generic norm that is widely valued but not always upheld given that individuals often show loyalty to the ingroup. The current research explored the conflict between ingroup loyalty and seeking the truth. In this study, participants (N = 266; 8-15 years old) read about their school participating in an inter-school competition where their ingroup peer either accidentally or deliberately shared misinformation about their outgroup competitor. Participants with a peer group norm of ingroup loyalty positively morally evaluated a norm deviant seeking the truth, whereas those with a peer group norm of seeking the truth negatively morally evaluated a norm deviant showing ingroup loyalty. Participants also took into account the intentions of the misinformer in their evaluations of a deviant who was either loyal or questioning toward the misinformer. Overall, this study suggests that the norm of truth-seeking is welcomed and regarded as an important value to uphold both generically and at a peer group level, even when it violates the norm of ingroup loyalty. This research provides a novel contribution to understanding how factors like norms and intentionality interact with children's and adolescents' navigation of information in an age of misinformation.
Affiliation(s)
- Aqsa Farooq
- Department of Psychology, University of Exeter, Exeter EX4 4PY, UK.
- Anna Adlam
- Department of Psychology, University of Exeter, Exeter EX4 4PY, UK
- Adam Rutland
- Department of Psychology, University of Exeter, Exeter EX4 4PY, UK
61
Sallam M, Kareem N, Alkurtas M. The negative impact of misinformation and vaccine conspiracy on COVID-19 vaccine uptake and attitudes among the general public in Iraq. Prev Med Rep 2024; 43:102791. PMID: 38947232; PMCID: PMC11214192; DOI: 10.1016/j.pmedr.2024.102791.
Abstract
BACKGROUND: Vaccine hesitancy is a major barrier to infectious disease control. Previous studies showed high rates of COVID-19 vaccine hesitancy in the Middle East. The current study aimed to investigate attitudes towards COVID-19 vaccination and COVID-19 vaccine uptake among the adult population in Iraq.
METHODS: This self-administered survey-based study was conducted in August-September 2022. The survey instrument assessed participants' demographics, attitudes to COVID-19 vaccination, beliefs in COVID-19 misinformation, vaccine conspiracy beliefs, and sources of information regarding the vaccine.
RESULTS: The study sample comprised a total of 2544 individuals, with the majority reporting the uptake of at least one dose of COVID-19 vaccination (n = 2226, 87.5 %). Positive attitudes towards COVID-19 vaccination were expressed by the majority of participants (n = 1966, 77.3 %), while neutral and negative attitudes were expressed by 345 (13.6 %) and 233 (9.2 %) participants, respectively. Factors associated with positive attitudes towards COVID-19 vaccination in multivariate analysis included disbelief in COVID-19 misinformation and disagreement with vaccine conspiracies. Higher COVID-19 vaccine uptake was significantly associated with previous history of COVID-19 infection, higher income, residence outside the capital, disbelief in COVID-19 misinformation, disagreement with vaccine conspiracies, and reliance on reputable information sources.
CONCLUSION: COVID-19 vaccine coverage was high among the participants, with a majority having positive attitudes towards COVID-19 vaccination. Disbelief in COVID-19 misinformation and disagreement with vaccine conspiracies were correlated with positive vaccine attitudes and higher vaccine uptake. These insights can inform targeted interventions to enhance vaccination campaigns.
Affiliation(s)
- Malik Sallam
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, Jordan
- Nariman Kareem
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan
- Mohammed Alkurtas
- Department of Pathology, Al-Kindy College of Medicine, University of Baghdad, Baghdad, Iraq
62
Butterworth J, Smerdon D, Baumeister R, von Hippel W. Cooperation in the Time of COVID. Perspectives on Psychological Science 2024; 19:640-651. PMID: 37384624; PMCID: PMC10311366; DOI: 10.1177/17456916231178719.
Abstract
Humans evolved to be hyper-cooperative, particularly when among people who are well known to them, when relationships involve reciprocal helping opportunities, and when the costs to the helper are substantially less than the benefits to the recipient. Because humans' cooperative nature evolved over many millennia when they lived exclusively in small groups, factors that cause cooperation to break down tend to be those associated with life in large, impersonal, modern societies: when people are not identifiable, when interactions are one-off, when self-interest is not tied to the interests of others, and when people are concerned that others might free ride. From this perspective, it becomes clear that policies for managing pandemics will be most effective when they highlight superordinate goals and connect people or institutions to one another over multiple identifiable interactions. When forging such connections is not possible, policies should mimic critical components of ancestral conditions by providing reputational markers for cooperators and reducing the systemic damage caused by free riding. In this article, we review policies implemented during the pandemic, highlighting spontaneous community efforts that leveraged these aspects of people's evolved psychology, and consider implications for future decision makers.
63
Drolsbach CP, Solovev K, Pröllochs N. Community notes increase trust in fact-checking on social media. PNAS Nexus 2024; 3:pgae217. PMID: 38948016; PMCID: PMC11212665; DOI: 10.1093/pnasnexus/pgae217.
Abstract
Community-based fact-checking is a promising approach to fact-check social media content at scale. However, an understanding of whether users trust community fact-checks is missing. Here, we presented n = 1,810 Americans with 36 misleading and nonmisleading social media posts and assessed their trust in different types of fact-checking interventions. Participants were randomly assigned to treatments where misleading content was either accompanied by simple (i.e. context-free) misinformation flags in different formats (expert flags or community flags), or by textual "community notes" explaining why the fact-checked post was misleading. Across both sides of the political spectrum, community notes were perceived as significantly more trustworthy than simple misinformation flags. Our results further suggest that the higher trustworthiness primarily stemmed from the context provided in community notes (i.e. fact-checking explanations) rather than generally higher trust towards community fact-checkers. Community notes also improved the identification of misleading posts. In sum, our work implies that context matters in fact-checking and that community notes might be an effective approach to mitigate trust issues with simple misinformation flags.
64
Jakob S, Hamburger K. Active consideration in an emotional context: implications for information processing. Front Psychol 2024; 15:1367714. PMID: 38966741; PMCID: PMC11222334; DOI: 10.3389/fpsyg.2024.1367714.
Affiliation(s)
- Sophie Jakob: Experimental Psychology and Cognitive Science, Department of Psychology, Justus Liebig University, Gießen, Germany
- Kai Hamburger: Experimental Psychology and Cognitive Science, Department of Psychology, Justus Liebig University, Gießen, Germany
65. Babiker A, Alshakhsi S, Sindermann C, Montag C, Ali R. Examining the growth in willingness to pay for digital wellbeing services on social media: A comparative analysis. Heliyon 2024; 10:e32467. [PMID: 38961952 PMCID: PMC11219352 DOI: 10.1016/j.heliyon.2024.e32467]
Abstract
In recent years, there has been a growing need for social media platforms to offer services that preserve and promote users' digital wellbeing, including better protection of personal data and balanced technology usage. However, the current business model of social media is often seen as being in conflict with users' digital wellbeing. In 2020, a study investigated users' willingness to pay monetary fees for social media digital wellbeing services. In the present work, we replicated this study in Q4 of 2022, aiming to explore any changes in interest and willingness to pay for these services. In addition, we extended the replication by conducting a qualitative analysis of participants' comments to gain deeper insight and identify reasons for paying and reasons for refusing to pay. Data were collected from 262 participants through an online questionnaire. The survey focused on four services: better data protection, less use of data for marketing, aiding users in controlling their prolonged usage, and reducing fake news and radicalisation on social media. The results showed that the willingness to pay for these services was significantly higher in 2022 compared to the results published in 2020. Participants expressed concerns about the feasibility and fairness of the alternative business model, which requires users to pay for safety and support. Our findings suggest a growing interest in digital wellbeing services, emphasizing the need for social media platforms to assess the feasibility of alternative business models, identify user segments, and take measures to enhance consumers' trust accordingly.
Affiliation(s)
- Areej Babiker: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Sameha Alshakhsi: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Cornelia Sindermann: Computational Digital Psychology, Interchange Forum for Reflecting on Intelligent Systems, University of Stuttgart, Stuttgart, Germany
- Christian Montag: Department of Molecular Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Raian Ali: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
66. Howe PDL, Perfors A, Ransom KJ, Walker B, Fay N, Kashima Y, Saletta M, Dong S. Self-certification: A novel method for increasing sharing discernment on social media. PLoS One 2024; 19:e0303025. [PMID: 38861506 PMCID: PMC11166272 DOI: 10.1371/journal.pone.0303025]
Abstract
The proliferation of misinformation on social media platforms has given rise to growing demands for effective intervention strategies that increase sharing discernment (i.e. increase the difference in the probability of sharing true posts relative to the probability of sharing false posts). One suggested method is to encourage users to deliberate on the veracity of the information prior to sharing. However, this strategy is undermined by individuals' propensity to share posts they acknowledge as false. In our study, across three experiments in a simulated social media environment, participants were shown social media posts and asked whether they wished to share them and, sometimes, whether they believed the posts to be truthful. We observe that requiring users to verify their belief in a news post's truthfulness before sharing it markedly curtails the dissemination of false information. Thus, requiring self-certification increased sharing discernment. Importantly, requiring self-certification did not hinder users from sharing content they genuinely believed to be true, because participants were allowed to share any posts that they indicated were true. We propose self-certification as a method that substantially curbs the spread of misleading content on social media without infringing upon the principle of free speech.
Affiliation(s)
- Andrew Perfors: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Keith J. Ransom: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Bradley Walker: School of Psychological Science, University of Western Australia, Perth, WA, Australia; School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Perth, WA, Australia
- Nicolas Fay: School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Yoshi Kashima: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Morgan Saletta: Hunt Laboratory, University of Melbourne, Melbourne, VIC, Australia
- Sihan Dong: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
67. Pan W, Hu TY. More familiar, more credible? Distinguishing two types of familiarity on the truth effect using the drift-diffusion model. J Soc Psychol 2024; 165:402-420. [PMID: 38852171 DOI: 10.1080/00224545.2024.2363366]
Abstract
Familiar information is more likely to be accepted as true. This illusory truth effect has a tremendous negative impact on misinformation intervention. Previous studies focused on the familiarity from repeated exposure in the laboratory, ignoring preexisting familiarity with real-world misinformation. Over three studies (total N = 337), we investigated the cognitive mechanisms behind the truth biases from these two familiarity sources, and whether fact-checking can curb such biased truth perceptions. Studies 1 and 2 found robust truth effects induced by two sources of familiarity but with different cognitive processes. According to the cognitive process model, repetition-induced familiarity reduced decision prudence. Preexisting familiarity instead enhanced truth-congruent evidence accumulation. Study 3 showed that pre-exposing statements with warning flags eliminated the bias to truth induced by repetition but not that from preexisting familiarity. These repeated statements with warning labels also reduced decision caution. These findings furthered the understanding of how different sources of familiarity affect truth perceptions and undermine the intervention through different cognitive processes.
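The two cognitive routes this abstract distinguishes map onto standard drift-diffusion parameters: evidence accumulation (drift rate) and decision prudence (boundary separation). As a rough, hypothetical illustration, not the authors' fitted model, a toy simulation shows how a higher drift yields more "true" responses while a lower boundary yields faster, less cautious decisions:

```python
import random

def simulate_ddm(drift, boundary, dt=0.001, noise=1.0, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it crosses +boundary ("true") or -boundary
    ("false"), or time runs out. Returns (choice, decision time).
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # noise scales with the square root of dt
    while abs(x) < boundary and t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    choice = "true" if x >= boundary else "false" if x <= -boundary else "timeout"
    return choice, t

# A lower boundary (reduced prudence, as claimed for repetition-induced
# familiarity) produces faster decisions; a positive drift (truth-congruent
# evidence accumulation, as claimed for preexisting familiarity) produces
# mostly "true" responses.
rng = random.Random(42)
fast = [simulate_ddm(1.5, 0.8, rng=rng) for _ in range(200)]  # low boundary
slow = [simulate_ddm(1.5, 1.6, rng=rng) for _ in range(200)]  # high boundary
```

All parameter values here are illustrative; fitting a drift-diffusion model to real response data requires dedicated estimation software.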
Affiliation(s)
- Wanke Pan: Shanghai Normal University; Nanjing Normal University
68. Mousoulidou M, Taxitari L, Christodoulou A. Social Media News Headlines and Their Influence on Well-Being: Emotional States, Emotion Regulation, and Resilience. Eur J Investig Health Psychol Educ 2024; 14:1647-1665. [PMID: 38921075 PMCID: PMC11202588 DOI: 10.3390/ejihpe14060109]
Abstract
Today, many individuals read the daily news on social media platforms. Research has shown that news with negative valence might influence the well-being of individuals. Existing research on the impact of headlines on individuals' well-being has primarily focused on the positive or negative polarity of the words used in the headlines. In the present study, we adopted a different approach: we asked participants to categorize the headlines themselves based on the emotions they experienced while reading them, and examined how this affects their well-being. A total of 306 participants were presented with 40 headlines from main news sites that were considered popular based on the number of public reactions. Participants rated their emotional experience of the headlines along five emotional states (i.e., happiness, anger, sadness, fear, and interest). Emotion regulation strategies and resilience were also measured. In line with our hypotheses, we found that participants reported experiencing negative emotions more intensely while reading the headlines. Emotion regulation was not found to influence the emotional states of individuals, whereas resilience did. These findings highlight that individuals can experience heightened emotions without reading the entire news story. This effect was observed regardless of the headline's emotional valence (i.e., positive, negative, or neutral). Furthermore, our study highlights the critical role of interest as a factor in news consumption: interest significantly affects individuals' engagement with and reactions to headlines, regardless of valence. The findings underscore the complex interplay between headline content and reader engagement and stress the need for further research into how headlines are presented to protect individuals from potential emotional costs.
Affiliation(s)
- Marilena Mousoulidou: Department of Psychology, Neapolis University Pafos, Paphos 8042, Cyprus; (L.T.); (A.C.)
69. Machová K, Mach M, Balara V. Federated Learning in the Detection of Fake News Using Deep Learning as a Basic Method. Sensors (Basel) 2024; 24:3590. [PMID: 38894381 PMCID: PMC11175327 DOI: 10.3390/s24113590]
Abstract
This article explores the use of federated learning, with deep learning as the basic method, to train detection models for fake news recognition. Federated learning is the key issue in this research because it makes machine learning more secure: models are trained on decentralized data at decentralized places, for example, at different IoT edges. The data are not transferred between the decentralized places, which means that personally identifiable data are not shared. This could increase the security of data from sensors in intelligent houses and medical devices, or of data from various resources in online spaces. Each edge station can train a model separately on data obtained from its sensors and on data extracted from different sources. The models trained on local data on local clients are then aggregated at a central endpoint. We designed three different deep learning architectures as a basis for use within federated learning. The detection models were based on embeddings, CNNs (convolutional neural networks), and LSTM (long short-term memory). The best results were achieved using more LSTM layers (F1 = 0.92), although all three architectures achieved similar results. We also compared results obtained with and without federated learning. The analysis found that the use of federated learning, in which data were decomposed and divided into smaller local datasets, does not significantly reduce the accuracy of the models.
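The aggregation step this entry describes, combining locally trained client models at a central endpoint, is typically a FedAvg-style weighted average of parameters. A minimal sketch, assuming model parameters are flattened into plain vectors (the paper's actual CNN/LSTM models and any specific framework are not reproduced here):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: weight each client's parameter vector by
    its share of the total training data, then sum elementwise.

    client_weights: one flat parameter vector per client edge.
    client_sizes: number of local training examples per client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        share = n / total
        for i in range(dim):
            global_w[i] += w[i] * share
    return global_w

# Two simulated edge clients that trained on private local data; only
# their parameters (never the raw data) reach the central endpoint.
global_model = federated_average([[1.0, 1.0], [3.0, 3.0]], [100, 300])
# → [2.5, 2.5]: the client with more data contributes proportionally more.
```

In a full federated round, the server would broadcast `global_model` back to the clients for further local training before the next aggregation.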
Affiliation(s)
- Kristína Machová: Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 04200 Košice, Slovakia; (M.M.); (V.B.)
70. Robertson CE, Shariff A, Van Bavel JJ. Morality in the anthropocene: The perversion of compassion and punishment in the online world. PNAS Nexus 2024; 3:pgae193. [PMID: 38864008 PMCID: PMC11165651 DOI: 10.1093/pnasnexus/pgae193]
Abstract
Although much of human morality evolved in an environment of small group living, almost 6 billion people use the internet in the modern era. We argue that the technological transformation has created an entirely new ecosystem that is often mismatched with our evolved adaptations for social living. We discuss how evolved responses to moral transgressions, such as compassion for victims of transgressions and punishment of transgressors, are disrupted by two main features of the online context. First, the scale of the internet exposes us to an unnaturally large quantity of extreme moral content, causing compassion fatigue and increasing public shaming. Second, the physical and psychological distance between moral actors online can lead to ineffective collective action and virtue signaling. We discuss practical implications of these mismatches and suggest directions for future research on morality in the internet era.
Affiliation(s)
- Azim Shariff: Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Jay J Van Bavel: Department of Psychology, New York University, New York, NY 10003, USA; Department of Neural Science, New York University, New York, NY 10003, USA; Department of Strategy & Management, Norwegian School of Economics, Bergen 5045, Norway
71. Speckmann F, Unkelbach C. Illusions of knowledge due to mere repetition. Cognition 2024; 247:105791. [PMID: 38593568 DOI: 10.1016/j.cognition.2024.105791]
Abstract
Repeating information increases people's belief that the repeated information is true. This truth effect has been widely researched and is relevant for topics such as fake news and misinformation. Another effect of repetition, also relevant to those topics, has not been extensively studied so far: do people believe they knew something before it was repeated? We used a standard truth effect paradigm in four pre-registered experiments (total N = 773), including a presentation and a judgment phase. However, instead of "true"/"false" judgments, participants indicated whether they knew a given trivia statement before participating in the experiment. Across all experiments, participants judged repeated information as "known" more often than novel information. Participants even claimed to have known repeated false information to be false. In addition, participants also generated sources of their knowledge. The inability to distinguish recent information from well-established knowledge in memory adds an explanation for the persistence and strength of repetition effects on truth. The truth effect might be so robust because people believe they know the repeatedly presented information as a matter of fact.
72. Kozyreva A, Lorenz-Spreen P, Herzog SM, Ecker UKH, Lewandowsky S, Hertwig R, Ali A, Bak-Coleman J, Barzilai S, Basol M, Berinsky AJ, Betsch C, Cook J, Fazio LK, Geers M, Guess AM, Huang H, Larreguy H, Maertens R, Panizza F, Pennycook G, Rand DG, Rathje S, Reifler J, Schmid P, Smith M, Swire-Thompson B, Szewach P, van der Linden S, Wineburg S. Toolbox of individual-level interventions against online misinformation. Nat Hum Behav 2024; 8:1044-1052. [PMID: 38740990 DOI: 10.1038/s41562-024-01881-0]
Abstract
The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.
Affiliation(s)
- Anastasia Kozyreva: Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Philipp Lorenz-Spreen: Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Stefan M Herzog: Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Ullrich K H Ecker: School of Psychological Science & Public Policy Institute, University of Western Australia, Perth, Western Australia, Australia
- Stephan Lewandowsky: School of Psychological Science, University of Bristol, Bristol, UK; Department of Psychology, University of Potsdam, Potsdam, Germany
- Ralph Hertwig: Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Ayesha Ali: Department of Economics, Lahore University of Management Sciences, Lahore, Pakistan
- Joe Bak-Coleman: Craig Newmark Center, School of Journalism, Columbia University, New York, NY, USA
- Sarit Barzilai: Department of Learning and Instructional Sciences, University of Haifa, Haifa, Israel
- Melisa Basol: Department of Psychology, University of Cambridge, Cambridge, UK
- Adam J Berinsky: Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Cornelia Betsch: Institute for Planetary Health Behaviour, University of Erfurt, Erfurt, Germany; Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany
- John Cook: Melbourne Centre for Behaviour Change, University of Melbourne, Melbourne, Victoria, Australia
- Lisa K Fazio: Department of Psychology and Human Development, Vanderbilt University, Nashville, TN, USA
- Michael Geers: Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany; Department of Psychology, Humboldt University of Berlin, Berlin, Germany
- Andrew M Guess: Department of Politics and School of Public and International Affairs, Princeton University, Princeton, NJ, USA
- Haifeng Huang: Department of Political Science, Ohio State University, Columbus, OH, USA
- Horacio Larreguy: Departments of Economics and Political Science, Instituto Tecnológico Autónomo de México, Mexico City, Mexico
- Rakoen Maertens: Department of Experimental Psychology, University of Oxford, Oxford, UK
- Gordon Pennycook: Department of Psychology, Cornell University, Ithaca, NY, USA; Department of Psychology, University of Regina, Regina, Saskatchewan, Canada
- David G Rand: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Steve Rathje: Department of Psychology, New York University, New York, NY, USA
- Jason Reifler: Department of Politics, University of Exeter, Exeter, UK
- Philipp Schmid: Institute for Planetary Health Behaviour, University of Erfurt, Erfurt, Germany; Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany; Centre for Language Studies, Radboud University Nijmegen, Nijmegen, the Netherlands
- Mark Smith: Graduate School of Education, Stanford University, Stanford, CA, USA
- Paula Szewach: Department of Politics, University of Exeter, Exeter, UK; Barcelona Supercomputing Center, Barcelona, Spain
- Sam Wineburg: Graduate School of Education, Stanford University, Stanford, CA, USA
73. Newton C, Feeney J, Pennycook G. On the Disposition to Think Analytically: Four Distinct Intuitive-Analytic Thinking Styles. Pers Soc Psychol Bull 2024; 50:906-923. [PMID: 36861421 PMCID: PMC11080384 DOI: 10.1177/01461672231154886]
Abstract
Many measures have been developed to index intuitive versus analytic thinking. Yet it remains an open question whether people primarily vary along a single dimension or if there are genuinely different types of thinking styles. We distinguish between four distinct types of thinking styles: Actively Open-minded Thinking, Close-Minded Thinking, Preference for Intuitive Thinking, and Preference for Effortful Thinking. We discovered strong predictive validity across several outcome measures (e.g., epistemically suspect beliefs, bullshit receptivity, empathy, moral judgments), with some subscales having stronger predictive validity for some outcomes but not others. Furthermore, Actively Open-minded Thinking, in particular, strongly outperformed the Cognitive Reflection Test in predicting misperceptions about COVID-19 and the ability to discern between vaccination-related true and false news. Our results indicate that people do, in fact, differ along multiple dimensions of intuitive-analytic thinking styles and that these dimensions have consequences for understanding a wide range of beliefs and behaviors.
74. Allen J, Watts DJ, Rand DG. Quantifying the impact of misinformation and vaccine-skeptical content on Facebook. Science 2024; 384:eadk3451. [PMID: 38815040 DOI: 10.1126/science.adk3451]
Abstract
Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (N ≈ 233 million). We estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although misinformation reduced predicted vaccination intentions significantly more than unflagged vaccine content when viewed, Facebook users' exposure to flagged content was limited. In contrast, unflagged stories highlighting rare deaths after vaccination were among Facebook's most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
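The core logic of this finding, that aggregate impact is per-view persuasive effect multiplied by reach, can be made concrete with a toy calculation. The numbers below are hypothetical placeholders, not the paper's estimates:

```python
def total_impact(per_view_effect, views):
    """Aggregate impact of a content class on vaccination intentions:
    average per-view effect multiplied by the number of views."""
    return per_view_effect * views

# Hypothetical values: flagged misinformation is more persuasive per view,
# but unflagged vaccine-skeptical content is viewed vastly more often.
flagged = total_impact(-0.05, 1_000_000)        # strong effect, little reach
unflagged = total_impact(-0.005, 500_000_000)   # weak effect, huge reach

# Despite a 10x weaker per-view effect, the unflagged class dominates.
ratio = abs(unflagged) / abs(flagged)
```

This is the arithmetic behind the abstract's conclusion: content that is only mildly misleading can outweigh outright falsehoods once exposure is taken into account.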
Affiliation(s)
- Jennifer Allen: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Duncan J Watts: Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA; Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, USA; Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA, USA
- David G Rand: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA; Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
75. Geels J, Graßl P, Schraffenberger H, Tanis M, Kleemans M. Virtual lab coats: The effects of verified source information on social media post credibility. PLoS One 2024; 19:e0302323. [PMID: 38809822 PMCID: PMC11135712 DOI: 10.1371/journal.pone.0302323]
Abstract
Social media platforms' lack of control over their content has given rise to the fundamental problem of misinformation. As users struggle to determine the truth, social media platforms should strive to empower users to make more accurate credibility judgements. A good starting point is a more accurate perception of the credibility of a message's source. Two pre-registered online experiments (N = 525; N = 590) were conducted to investigate how verified source information affects perceptions of Tweets (study 1) and generic social media posts (study 2). In both studies, participants reviewed posts by an unknown author and rated source and message credibility, as well as likelihood of sharing. Posts varied by the information provided about the account holder: (1) none, (2) the popular method of verified source identity, or (3) a verified credential of the account holder (e.g., employer, role), a novel approach. The credential was either relevant to the content of the post or not. Study 1 presented the credential as a badge, whereas study 2 included the credential as both a badge and a signature. During an initial intuitive response, the effects of these cues were generally unpredictable. Yet, after an explanation of how to interpret the different source cues, two prevalent reasoning errors surfaced. First, participants conflated source authenticity and message credibility. Second, messages from sources with a verified credential were perceived as more credible, regardless of whether this credential was context-relevant (i.e., the virtual lab coat effect). These reasoning errors are particularly concerning in the context of misinformation. In sum, credential verification as tested in this paper seems ineffective in empowering users to make more accurate credibility judgements. Yet, future research could investigate alternative implementations of this promising technology.
Affiliation(s)
- Jorrit Geels: Interdisciplinary Hub on Digitisation and Society, Radboud University, Nijmegen, The Netherlands; Institute of Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Paul Graßl: Interdisciplinary Hub on Digitisation and Society, Radboud University, Nijmegen, The Netherlands
- Hanna Schraffenberger: Interdisciplinary Hub on Digitisation and Society, Radboud University, Nijmegen, The Netherlands; Institute of Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Martin Tanis: Department of Communication Science, Vrije Universiteit, Amsterdam, The Netherlands
- Mariska Kleemans: Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
76. Orosz G, Faragó L, Paskuj B, Rakovics Z, Sam-Mine D, Audemard G, Modeliar MS, Krekó P. Softly empowering a prosocial expert in the family: lasting effects of a counter-misinformation intervention in an informational autocracy. Sci Rep 2024; 14:11763. [PMID: 38782940 PMCID: PMC11116454 DOI: 10.1038/s41598-024-61232-x]
Abstract
The present work is the first to comprehensively analyze the gravity of the misinformation problem in Hungary, where misinformation appears regularly in the pro-governmental, populist, and socially conservative mainstream media. In line with international data, using a Hungarian representative sample (Study 1, N = 991), we found that voters of the reigning populist, conservative party could hardly distinguish fake from real news. In Study 2, we demonstrated that a prosocial intervention of ~10 min (N = 801) helped young adult participants discern misinformation four weeks later, compared to a control group, without implementing any boosters. This effect was most salient for pro-governmental conservative fake news content, leaving real news evaluations intact. Although the hypotheses of the present work were not preregistered, it appears that prosocial misinformation interventions might be promising attempts to counter misinformation in an informational autocracy in which the media is highly centralized. Although the intervention relies on social motivations, this does not mean that long-term cognitive changes cannot occur. Future studies might explore exactly how these interventions affect the long-term cognitive processing of news content, as well as its underlying neural structures.
Affiliation(s)
- Gábor Orosz: ULR 7369 - URePSSS - Unité de Recherche Pluridisciplinaire Sport Santé Société, Sherpas, Univ. Lille, Univ. Littoral Côte d'Opale, Univ. Artois, Arras, France
- Laura Faragó: Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Benedek Paskuj: Department of Psychology, University College London, London, UK
- Zsófia Rakovics: Faculty of Social Sciences, Research Center for Computational Social Science, ELTE Eötvös Loránd University, Budapest, Hungary; MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, HUN-REN Centre for Social Sciences, Budapest, Hungary
- Diane Sam-Mine: ULR 7369 - URePSSS - Unité de Recherche Pluridisciplinaire Sport Santé Société, Sherpas, Univ. Lille, Univ. Littoral Côte d'Opale, Univ. Artois, Arras, France
- Péter Krekó: Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary; Political Capital Institute, Budapest, Hungary
77. Chuey A, Luo Y, Markman EM. Epistemic language in news headlines shapes readers' perceptions of objectivity. Proc Natl Acad Sci U S A 2024; 121:e2314091121. [PMID: 38709916 PMCID: PMC11098081 DOI: 10.1073/pnas.2314091121]
Abstract
How we reason about objectivity-whether an assertion has a ground truth-has implications for belief formation on wide-ranging topics. For example, if someone perceives climate change to be a matter of subjective opinion similar to the best movie genre, they may consider empirical claims about climate change as mere opinion and irrelevant to their beliefs. Here, we investigate whether the language employed by journalists might influence the perceived objectivity of news claims. Specifically, we ask whether factive verb framing (e.g., "Scientists know climate change is happening") increases perceived objectivity compared to nonfactive framing (e.g., "Scientists believe [...]"). Across eight studies (N = 2,785), participants read news headlines about unique, noncontroversial topics (studies 1a-b, 2a-b) or a familiar, controversial topic (climate change; studies 3a-b, 4a-b) and rated the truth and objectivity of the headlines' claims. Across all eight studies, when claims were presented as beliefs (e.g., "Tortoise breeders believe tortoises are becoming more popular pets"), people consistently judged those claims as more subjective than claims presented as knowledge (e.g., "Tortoise breeders know…"), as well as claims presented as unattributed generics (e.g., "Tortoises are becoming more popular pets"). Surprisingly, verb framing had relatively little, inconsistent influence over participants' judgments of the truth of claims. These results demonstrate how, apart from shaping whether we believe a claim is true or false, epistemic language in media can influence whether we believe a claim has an objective answer at all.
Affiliation(s)
- Aaron Chuey, Department of Psychology, Stanford University, Stanford, CA 94305
- Yiwei Luo, Department of Linguistics, Stanford University, Stanford, CA 94305
- Ellen M. Markman, Department of Psychology, Stanford University, Stanford, CA 94305
78
Kemp PL, Sinclair AH, Adcock RA, Wahlheim CN. Memory and belief updating following complete and partial reminders of fake news. Cogn Res Princ Implic 2024; 9:28. [PMID: 38713308 PMCID: PMC11076432 DOI: 10.1186/s41235-024-00546-w]
Abstract
Fake news can have enduring effects on memory and beliefs. An ongoing theoretical debate has investigated whether corrections (fact-checks) should include reminders of fake news. The familiarity backfire account proposes that reminders hinder correction (increasing interference), whereas integration-based accounts argue that reminders facilitate correction (promoting memory integration). In three experiments, we examined how different types of corrections influenced memory for and belief in news headlines. In the exposure phase, participants viewed real and fake news headlines. In the correction phase, participants viewed reminders of fake news that either reiterated the false details (complete) or prompted recall of missing false details (partial); reminders were followed by fact-checked headlines correcting the false details. Both reminder types led to proactive interference in memory for corrected details, but complete reminders produced less interference than partial reminders (Experiment 1). However, when participants had fewer initial exposures to fake news and experienced a delay between exposure and correction, this effect was reversed; partial reminders led to proactive facilitation, enhancing correction (Experiment 2). This effect occurred regardless of the delay before correction (Experiment 3), suggesting that the effects of partial reminders depend on the number of prior fake news exposures. In all experiments, memory and perceived accuracy were better when fake news and corrections were recollected, implicating a critical role for integrative encoding. Overall, we show that when memories of fake news are weak or less accessible, partial reminders are more effective for correction; when memories of fake news are stronger or more accessible, complete reminders are preferable.
Affiliation(s)
- Paige L Kemp, Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P. O. Box 26170, Greensboro, NC, 27402-6170, USA
- Alyssa H Sinclair, Department of Psychology and Neuroscience, Duke University, Durham, NC, 27708, USA; Center for Science, Sustainability, and the Media, University of Pennsylvania, Philadelphia, USA
- R Alison Adcock, Department of Psychology and Neuroscience, Duke University, Durham, NC, 27708, USA; Department of Psychiatry and Behavioral Sciences, Duke University, Durham, USA
- Christopher N Wahlheim, Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P. O. Box 26170, Greensboro, NC, 27402-6170, USA
79
Zhang Z, Cheng Z. Users' unverified information-sharing behavior on social media: The role of reasoned and social reactive pathways. Acta Psychol (Amst) 2024; 245:104215. [PMID: 38490132 DOI: 10.1016/j.actpsy.2024.104215]
Abstract
Unverified or false information spread by irresponsible users can amplify the dissemination of fake news or misinformation. This phenomenon may not only undermine the credibility of social media platforms but also pose severe consequences for individuals and society. This study applies and extends the prototype willingness model with the aim of comprehending the reasons and decision-making processes driving users' unverified information-sharing behavior: a reasoned and intended pathway or an impulsive and unconscious one. Data from a sample of 646 users were analyzed using Structural Equation Modeling to assess the determinative effect of both the reasoned pathway (attitude toward unverified information-sharing, subjective norm, and perceived behavioral control) and the social-reaction pathway (prototype favorability and similarity). Findings highlight the substantial role of the social-reaction pathway in forecasting users' unverified information-sharing behavior, with prototype similarity and attitude being the dominant predictors. Meanwhile, components of the reasoned pathway, specifically perceived behavioral control and attitude, also contributed significantly toward predicting the behavior. In summary, while a deliberate, reasoned process has some influence, the sharing of unverified information on social media is primarily an intuitive, spontaneous response to specific online circumstances. This study therefore offers valuable insights that could aid relevant stakeholders in effectively regulating the spread of misinformation. Against this backdrop, highlighting potential risks associated with sharing unverified information and the negative portrayal of users propagating misinformation may contribute to the development of a more critical perspective toward online information sharing by users themselves.
Affiliation(s)
- Zeqian Zhang, School of Economics & Management, Beihang University, Beijing 100191, China
- Zhichao Cheng, School of Economics & Management, Beihang University, Beijing 100191, China
80
Hwang Y, Jeong SH. Gist Knowledge and Misinformation Acceptance: An Application of Fuzzy Trace Theory. Health Commun 2024; 39:937-944. [PMID: 37038244 DOI: 10.1080/10410236.2023.2197306]
Abstract
Applying fuzzy trace theory to misinformation related to COVID-19, the present study (a) examines the roles of gist knowledge in predicting misinformation acceptance, and (b) further examines whether a gist cue in fact checking scales affects the level of gist knowledge. Study 1 (a survey) showed that categorical gist knowledge was negatively related to misinformation acceptance, whereas ordinal gist knowledge was not, when both types of knowledge were included in the model. In addition, Study 2 (an experiment) showed that fact checking scales containing a categorical gist cue resulted in greater categorical gist knowledge.
Affiliation(s)
- Yoori Hwang, Department of Digital Media, Myongji University
81
Vu HT, Chen Y. What Influences Audience Susceptibility to Fake Health News: An Experimental Study Using a Dual Model of Information Processing in Credibility Assessment. Health Commun 2024; 39:1113-1126. [PMID: 37095061 DOI: 10.1080/10410236.2023.2206177]
Abstract
This experimental study investigates the effects of several heuristic cues and systematic factors on users' misinformation susceptibility in the context of health news. Specifically, it examines whether author credentials, writing style, and verification check flagging influence participants' intent to follow the behavioral recommendations provided by the article, perceived article credibility, and sharing intent. Findings suggest that users rely only on verification checks (passing/failing) in assessing information credibility. Of the two antecedents to systematic processing, social media self-efficacy moderates the links between verification and participants' susceptibility. Theoretical and practical implications are discussed.
Affiliation(s)
- Hong Tien Vu, Clyde & Betty Reed Professor of Journalism, University of Kansas
- Yvonnes Chen, Clyde & Betty Reed Professor of Journalism, University of Kansas
82
Liu Q, Su F, Mu A, Wu X. Understanding Social Media Information Sharing in Individuals with Depression: Insights from the Elaboration Likelihood Model and Schema Activation Theory. Psychol Res Behav Manag 2024; 17:1587-1609. [PMID: 38628982 PMCID: PMC11020237 DOI: 10.2147/prbm.s450934]
Abstract
Purpose: How individuals engage with social media can significantly impact their psychological well-being. This study examines the impact of social media interactions on mental health, grounded in the frameworks of the Elaboration Likelihood Model and Schema Activation Theory. It aims to uncover behavioral differences in information sharing between the general population and individuals with depression, while also elucidating the psychological mechanisms underlying these disparities.
Methods: A pre-experiment (N=30) and three experiments (Experiment 1a N=200, Experiment 1b N=180, Experiment 2 N=128) were executed online. These experiments investigated the joint effects of information quality, content valence, self-referential processing, and depression level on the intention to share information. The research design incorporated within-subject and between-subject methods, utilizing SPSS and SPSS Process to conduct independent sample t-tests, two-factor ANOVA analyses, mediation analyses, and moderated mediation analyses to test our hypotheses.
Results: Information quality and content valence significantly influence sharing intention. In scenarios involving low-quality information, individuals with depression are more inclined to share negative emotional content compared to the general population, and this tendency intensifies with the severity of depression. Moreover, self-referential processing acts as a mediator between emotional content and intention to share, yet this mediation effect weakens as the severity of depression rises.
Conclusion: Our study highlights the importance of promoting viewpoint diversity and breaking the echo chamber effect in social media to improve the mental health of individuals with depression. To achieve this goal, tailoring emotional content on social media could be a practical starting point for practice.
Affiliation(s)
- Qiang Liu, School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, People’s Republic of China
- FeiFei Su, School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, People’s Republic of China
- Aruhan Mu, School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, People’s Republic of China
- Xiang Wu, School of Medicine and Health Management, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, People’s Republic of China; Yunnan Key Laboratory of Service Computing, Yunnan University of Finance and Economics, Kunming, 650221, People’s Republic of China
83
Unfried K, Priebe J. Who shares fake news on social media? Evidence from vaccines and infertility claims in sub-Saharan Africa. PLoS One 2024; 19:e0301818. [PMID: 38593132 PMCID: PMC11003631 DOI: 10.1371/journal.pone.0301818]
Abstract
The widespread dissemination of misinformation on social media is a serious threat to global health. To a large extent, it is still unclear who actually shares health-related misinformation deliberately and accidentally. We conducted a large-scale online survey among 5,307 Facebook users in six sub-Saharan African countries, in which we collected information on sharing of fake news and truth discernment. We estimate the magnitude and determinants of deliberate and accidental sharing of misinformation related to three vaccines (HPV, polio, and COVID-19). In an OLS framework we relate the actual sharing of fake news to several socioeconomic characteristics (age, gender, employment status, education), social media consumption, personality factors and vaccine-related characteristics while controlling for country and vaccine-specific effects. We first show that actual sharing rates of fake news articles are substantially higher than those reported from developed countries and that most of the sharing occurs accidentally. Second, we reveal that the determinants of deliberate vs. accidental sharing differ. While deliberate sharing is related to being older and risk-loving, accidental sharing is associated with being older, male, and high levels of trust in institutions. Lastly, we demonstrate that the determinants of sharing differ by the adopted measure (intentions vs. actual sharing) which underscores the limitations of commonly used intention-based measures to derive insights about actual fake news sharing behaviour.
Affiliation(s)
- Kerstin Unfried, Health Economics Research Group, Bernhard Nocht Institute for Tropical Medicine (BNITM), Hamburg, Germany
- Jan Priebe, BNITM & Hamburg Center for Health Economics (HCHE), Hamburg, Germany
84
Gronchi G, Perini A. Limits of functional illiteracy in explaining human misinformation: the knowledge illusion, values, and the dual process theory of thought. Front Psychol 2024; 15:1381865. [PMID: 38650898 PMCID: PMC11033400 DOI: 10.3389/fpsyg.2024.1381865]
Affiliation(s)
- Giorgio Gronchi, Section of Psychology, Department of Neuroscience, Psychology, Drug Research and Child's Health, University of Florence, Florence, Italy
85
Piksa M, Noworyta K, Piasecki J, Gundersen A, Kunst J, Morzy M, Rygula R. Research Report: A Link between Sertraline Treatment and Susceptibility to (Mis)information. ACS Chem Neurosci 2024; 15:1515-1522. [PMID: 38484276 DOI: 10.1021/acschemneuro.3c00825]
Abstract
Recent research revealed that several psycho-cognitive processes, such as insensitivity to positive and negative feedback, cognitive rigidity, pessimistic judgment bias, and anxiety, are involved in susceptibility to fake news. All of these processes have been previously associated with depressive disorder and are sensitive to serotoninergic manipulations. In the current study, a link between chronic treatment with the selective serotonin reuptake inhibitor (SSRI) sertraline and susceptibility to true and fake news was examined. Herein, a sample of 1162 participants was recruited via Prolific Academic for an online study. Half of the sample reported taking sertraline (Zoloft) for at least 8 weeks (sertraline group), and the other half confirmed not taking any psychiatric medication (control group). The sertraline group was further divided according to their daily dosage (50, 100, 150, and 200 mg/day). All participants completed a susceptibility to misinformation scale, wherein they were asked to determine the veracity of the presented true and fake news and their willingness to behaviorally engage with the news. The results were compared between those of the sertraline groups and the control group. The results showed that sertraline groups did not differ significantly in the assessment of the truthfulness of information or their ability to discern the truth. However, those taking sertraline appeared to have a significantly increased likelihood of behavioral engagement with the information, and this effect was observed for both true and fake news. The research presented here represents the initial endeavor to comprehend the neurochemical foundation of the susceptibility to misinformation. 
The association between sertraline treatment and increased behavioral engagement with information observed in this study can be explained in light of previous studies showing positive correlations between serotonin (5-HT) system activity and the inclination to engage in social behaviors. It can also be attributed to the anxiolytic effects of sertraline treatment, which mitigate the fear of social judgment. The heightened behavioral engagement with information in people taking sertraline may, as part of a general phenomenon, also shape their interactions with fake news. Future longitudinal studies should reveal the specificity and exact causality of these interactions.
Affiliation(s)
- Michal Piksa, Department of Pharmacology, Affective Cognitive Neuroscience Laboratory, Maj Institute of Pharmacology Polish Academy of Sciences, 12 Smetna Street, 31-343 Krakow, Poland
- Karolina Noworyta, Department of Pharmacology, Affective Cognitive Neuroscience Laboratory, Maj Institute of Pharmacology Polish Academy of Sciences, 12 Smetna Street, 31-343 Krakow, Poland
- Jan Piasecki, Department of Philosophy and Bioethics, Jagiellonian University Medical College, Faculty of Health Sciences, Kopernika 40, 31-501 Krakow, Poland
- Aleksander Gundersen, Department of Psychology, University of Oslo, Postboks 1094, Blindern, 0317 Oslo, Norway
- Jonas Kunst, Department of Psychology, University of Oslo, Postboks 1094, Blindern, 0317 Oslo, Norway
- Mikolaj Morzy, Faculty of Computing and Telecommunications, Poznan University of Technology, Piotrowo 2, 60-965 Poznan, Poland
- Rafal Rygula, Department of Pharmacology, Affective Cognitive Neuroscience Laboratory, Maj Institute of Pharmacology Polish Academy of Sciences, 12 Smetna Street, 31-343 Krakow, Poland
86
Lu C, Hu B, Bao MM, Wang C, Bi C, Ju XD. Can Media Literacy Intervention Improve Fake News Credibility Assessment? A Meta-Analysis. Cyberpsychol Behav Soc Netw 2024; 27:240-252. [PMID: 38484319 DOI: 10.1089/cyber.2023.0324]
Abstract
Fake news impacts individuals' behavior and decision-making while also disrupting political processes, perceptions of medical advice, and societal trends. Improving individuals' ability to accurately assess fake news can reduce its harmful effects. However, previous research on media literacy interventions designed for improving fake news credibility assessments has yielded inconsistent results. We systematically collected 33 independent studies and performed a meta-analysis to examine the effects of media literacy interventions on assessing fake news credibility (n = 36,256). The results showed that media literacy interventions significantly improved fake news credibility assessments (Hedges' g = 0.53, 95% confidence interval [0.29-0.78], p < 0.001). Gaming interventions were the most effective intervention form. Conversely, the intervention channel, outcome measurement, and subject characteristics (age, gender, and country development level) did not influence the intervention effects.
Affiliation(s)
- Chang Lu, School of Psychology, Northeast Normal University, Changchun, China; Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Changchun, China
- Bo Hu, School of Psychology, Northeast Normal University, Changchun, China
- Meng-Meng Bao, School of Educational Sciences, BaiCheng Normal University, Baicheng, China
- Chi Wang, School of Psychology, Northeast Normal University, Changchun, China
- Chao Bi, School of Psychology, Northeast Normal University, Changchun, China; Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Changchun, China
- Xing-Da Ju, School of Psychology, Northeast Normal University, Changchun, China; Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Changchun, China
87
Martel C, Rathje S, Clark CJ, Pennycook G, Van Bavel JJ, Rand DG, van der Linden S. On the Efficacy of Accuracy Prompts Across Partisan Lines: An Adversarial Collaboration. Psychol Sci 2024; 35:435-450. [PMID: 38506937 DOI: 10.1177/09567976241232905]
Abstract
The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people's news-sharing decisions. However, researchers disagree on whether accuracy-prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this preregistered adversarial collaboration, we tested this question using a multiverse meta-analysis (k = 21; N = 27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline "evaluation" treatments (a critical test for one research team) such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, we observed significant partisan moderation in 50% of specifications (all of which were considered critical for the other team). We discuss the conditions under which moderation is observed and offer interpretations.
Affiliation(s)
- Cameron Martel, Sloan School of Management, Massachusetts Institute of Technology
- Cory J Clark, The Wharton School, University of Pennsylvania; School of Arts and Sciences, University of Pennsylvania
- David G Rand, Sloan School of Management, Massachusetts Institute of Technology; Institute for Data, Systems, and Society, Massachusetts Institute of Technology; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
88
Fernbach PM, Bogard JE. Conspiracy Theory as Individual and Group Behavior: Observations from the Flat Earth International Conference. Top Cogn Sci 2024; 16:187-205. [PMID: 37202921 DOI: 10.1111/tops.12662]
Abstract
Conspiratorial thinking has been with humanity for a long time but has recently grown as a source of societal concern and as a subject of research in the cognitive and social sciences. We propose a three-tiered framework for the study of conspiracy theories: (1) cognitive processes, (2) the individual, and (3) social processes and communities of knowledge. At the level of cognitive processes, we identify explanatory coherence and faulty belief updating as critical ideas. At the level of the community of knowledge, we explore how conspiracy communities facilitate false belief by promoting a contagious sense of understanding, and how community norms catalyze the biased assimilation of evidence. We review recent research on conspiracy theories and explain how conspiratorial thinking emerges from the interaction of individual and group processes. As a case study, we describe observations the first author made while attending the Flat Earth International Conference, a meeting of conspiracy theorists who believe the Earth is flat. Rather than treating conspiracy belief as pathological, we take the perspective that it is an extreme outcome of common cognitive processes.
89
Mayo R. Trust or distrust? Neither! The right mindset for confronting disinformation. Curr Opin Psychol 2024; 56:101779. [PMID: 38134524 DOI: 10.1016/j.copsyc.2023.101779]
Abstract
A primary explanation for why individuals believe disinformation is the truth bias, a predisposition to accept information as true. However, this bias is context-dependent, as research shows that rejection becomes the predominant process in a distrust mindset. Consequently, trust and distrust emerge as pivotal factors in addressing disinformation. The current review offers a more nuanced perspective by illustrating that whereas distrust may act as an antidote to the truth bias, it can also paradoxically serve as a catalyst for belief in disinformation. The review concludes that mindsets other than those rooted solely in trust (or distrust), such as an evaluative mindset, may prove to be more effective in detecting and refuting disinformation.
Affiliation(s)
- Ruth Mayo, The Hebrew University of Jerusalem, Israel
90
Seitz RJ, Paloutzian RF, Angel H. Manifestations, social impact, and decay of conceptual beliefs: A cultural perspective. Brain Behav 2024; 14:e3470. [PMID: 38558538 PMCID: PMC10983810 DOI: 10.1002/brb3.3470]
Abstract
Introduction: Believing comprises multifaceted processes that integrate information from the outside world through meaning-making processes with personal relevance.
Methods: Qualitative review of the current literature in social cognitive neuroscience.
Results: Although believing develops rapidly outside an individual's conscious awareness, it results in the formation of beliefs that are stored in memory and play an important role in determining an individual's behavior. Primal beliefs reflect an individual's experience of objects and events, whereas conceptual beliefs are based on narratives that are held in social groups. Conceptual beliefs can be about autobiographical, political, religious, and other aspects of life and may be encouraged by participation in group rituals. We hypothesize that assertions of future gains and rewards that transcend but are inherent in these codices provide incentives to follow the norms and rules of social groups.
Conclusion: The power of conceptual beliefs to provide cultural orientation is likely to fade when circumstances and evidence make it clear that what was asserted no longer applies.
Affiliation(s)
- Rüdiger J. Seitz, Department of Neurology, Centre of Neurology and Neuropsychiatry, LVR-Klinikum Düsseldorf, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Hans-Ferdinand Angel, Institute of Catechetic and Pedagogic of Religion, Karl Franzens University Graz, Graz, Austria
91
Prike T, Butler LH, Ecker UKH. Source-credibility information and social norms improve truth discernment and reduce engagement with misinformation online. Sci Rep 2024; 14:6900. [PMID: 38519569 PMCID: PMC10960008 DOI: 10.1038/s41598-024-57560-7]
Abstract
Misinformation on social media is a pervasive challenge. In this study (N = 415) a social-media simulation was used to test two potential interventions for countering misinformation: a credibility badge and a social norm. The credibility badge was implemented by associating accounts, including participants', with a credibility score. Participants' credibility score was dynamically updated depending on their engagement with true and false posts. To implement the social-norm intervention, participants were provided with both a descriptive norm (i.e., most people do not share misinformation) and an injunctive norm (i.e., sharing misinformation is the wrong thing to do). Both interventions were effective. The social-norm intervention led to reduced belief in false claims and improved discrimination between true and false claims. It also had some positive impact on social-media engagement, although some effects were not robust to alternative analysis specifications. The presence of credibility badges led to greater belief in true claims, lower belief in false claims, and improved discrimination. The credibility-badge intervention also had robust positive impacts on social-media engagement, leading to increased flagging and decreased liking and sharing of false posts. Cumulatively, the results suggest that both interventions have potential to combat misinformation and improve the social-media information landscape.
Affiliation(s)
- Toby Prike, School of Psychological Science, University of Western Australia, Perth, Australia; School of Psychology, University of Adelaide, Adelaide, Australia
- Lucy H Butler, School of Psychological Science, University of Western Australia, Perth, Australia
- Ullrich K H Ecker, School of Psychological Science, University of Western Australia, Perth, Australia
92
Ahmed S, Jaidka K, Chen VHH, Cai M, Chen A, Emes CS, Yu V, Chib A. Social media and anti-immigrant prejudice: a multi-method analysis of the role of social media use, threat perceptions, and cognitive ability. Front Psychol 2024; 15:1280366. [PMID: 38544515 PMCID: PMC10967952 DOI: 10.3389/fpsyg.2024.1280366]
Abstract
Introduction: The discourse on immigration and immigrants is central to contemporary political and public discussions. Analyzing online conversations about immigrants provides valuable insights into public opinion, complemented by data from questionnaires on how attitudes are formed.
Methods: The research includes two studies examining the expressive and informational use of social media. Study 1 conducted a computational text analysis of comments on Singaporean Facebook pages and forums, focusing on how social media is used to discuss immigrants. Study 2 utilized survey data to examine the use of social media at the individual level, testing the relationships between cognitive ability, perceptions of threat, negative emotions towards immigrants, and social media usage within the Integrated Threat Theory framework.
Results: Study 1 found that discussions about immigrants on social media often involved negative emotions and concerns about economic impact, such as competition for jobs and crime. Complementing these findings about perceived economic threats, Study 2 showed that individuals with higher social media usage and greater perceptions of threat were more likely to have negative emotions towards immigrants. These relationships were mediated by perceptions of threat and were stronger in individuals with lower cognitive abilities.
Discussion: The findings from both studies demonstrate the role of social media in shaping public attitudes towards immigrants, highlighting how perceived threats influence these attitudes. This research suggests the importance of considering how digital platforms contribute to public opinion on immigration, with implications for understanding the dynamics of attitude formation in the digital age.
Collapse
Affiliation(s)
- Saifuddin Ahmed
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
| | - Kokil Jaidka
- Department of Communications and New Media, National University of Singapore, Singapore, Singapore
| | - Vivian Hsueh Hua Chen
- Department of Media and Communication, Erasmus University Rotterdam, Rotterdam, Netherlands
| | - Mengxuan Cai
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
| | - Anfan Chen
- Department of Communication Studies, Hong Kong Baptist University, Kowloon, Hong Kong SAR, China
| | - Claire Stravato Emes
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
| | - Valerie Yu
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
| | - Arul Chib
- International Institute of Social Studies, Erasmus University Rotterdam, Rotterdam, Netherlands
| |
Collapse
|
93
|
Schmitt JB, Baake J, Kero S. What means civic education in a digitalized world? Front Psychol 2024; 15:1257247. [PMID: 38529090 PMCID: PMC10961436 DOI: 10.3389/fpsyg.2024.1257247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Accepted: 02/28/2024] [Indexed: 03/27/2024] Open
Abstract
The hope of reaching diverse and large target groups has motivated civic education practitioners to offer their content on social media. The question has therefore long ceased to be whether civic education should take place on the internet, but rather how civic education goals can be implemented digitally to foster civic literacy. At first glance, the possibility of reaching a broad audience in a short time seems tempting. On closer inspection, however, social media reveals several challenges that can impair educational processes. The present paper discusses the following questions: What are the opportunities and pitfalls of civic education in social media? How can we ensure successful civic education in a digitalized world? We provide an interdisciplinary perspective on the topic, drawing on literature from media psychology, communication studies, and education science, among other fields. By integrating insights from various disciplines, our paper seeks to enrich the academic dialogue and to promote a nuanced understanding of the evolving dynamics of civic education in the digital realm. With its practical focus, our paper further aims to underscore the applicability of scientific research.
Collapse
|
94
|
Fang X, Che S, Mao M, Zhang H, Zhao M, Zhao X. Bias of AI-generated content: an examination of news produced by large language models. Sci Rep 2024; 14:5224. [PMID: 38433238 PMCID: PMC10909834 DOI: 10.1038/s41598-024-55686-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Accepted: 02/26/2024] [Indexed: 03/05/2024] Open
Abstract
Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC). To harness this transformation, we need to understand the limitations of LLMs. Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA. We collect news articles from The New York Times and Reuters, both known for their dedication to providing unbiased news. We then apply each examined LLM to generate news content with the headlines of these articles as prompts, and evaluate the gender and racial biases of the AIGC by comparing it with the original news articles. We further analyze the gender bias of each LLM under biased prompts by adding gender-biased messages to prompts constructed from these headlines. Our study reveals that the AIGC produced by each examined LLM demonstrates substantial gender and racial biases. Moreover, the AIGC generated by each LLM exhibits notable discrimination against women and Black individuals. Among the LLMs, ChatGPT produces the least biased AIGC and is the sole model capable of declining content generation when provided with biased prompts.
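The comparison step described in this abstract can be sketched in a few lines. The following is a hypothetical illustration only: the word lists and the ratio metric are assumptions for demonstration, not the paper's actual bias measures. It compares the share of female-coded tokens in a generated article against the original:

```python
from collections import Counter

# Hypothetical gendered-term lists; the paper's actual measures differ.
FEMALE = {"she", "her", "hers", "woman", "women"}
MALE = {"he", "him", "his", "man", "men"}

def gender_ratio(text):
    """Share of gendered tokens that are female-coded (0.5 = balanced or none)."""
    words = Counter(w.strip(".,;:!?\"'") for w in text.lower().split())
    f = sum(words[w] for w in FEMALE)
    m = sum(words[w] for w in MALE)
    total = f + m
    return f / total if total else 0.5

def bias_shift(original, generated):
    """Negative values mean the generated text skews more male-coded
    than the original article; positive values, more female-coded."""
    return gender_ratio(generated) - gender_ratio(original)
```

Applying `bias_shift` to each (original article, LLM output) pair and averaging over the corpus would yield one crude per-model bias score of the kind the study's comparison design calls for.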
Collapse
Affiliation(s)
- Xiao Fang
- University of Delaware, Newark, USA.
| | | | | | | | | | - Xiaohang Zhao
- Shanghai University of Finance and Economics, Shanghai, China
| |
Collapse
|
95
|
Martel C, Allen J, Pennycook G, Rand DG. Crowds Can Effectively Identify Misinformation at Scale. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2024; 19:477-488. [PMID: 37594056 DOI: 10.1177/17456916231190388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/19/2023]
Abstract
Identifying successful approaches for reducing the belief and spread of online misinformation is of great importance. Social media companies currently rely largely on professional fact-checking as their primary mechanism for identifying falsehoods. However, professional fact-checking has notable limitations regarding coverage and speed. In this article, we summarize research suggesting that the "wisdom of crowds" can be harnessed successfully to help identify misinformation at scale. Despite potential concerns about the abilities of laypeople to assess information quality, recent evidence demonstrates that aggregating judgments of groups of laypeople, or crowds, can effectively identify low-quality news sources and inaccurate news posts: Crowd ratings are strongly correlated with fact-checker ratings across a variety of studies using different designs, stimulus sets, and subject pools. We connect these experimental findings with recent attempts to deploy crowdsourced fact-checking in the field, and we close with recommendations and future directions for translating crowdsourced ratings into effective interventions.
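The aggregation logic behind this "wisdom of crowds" result is simple to sketch. In the toy illustration below, the data and the 1-7 accuracy scale are invented for demonstration: several noisy layperson ratings are averaged per headline, and the crowd means are then correlated with fact-checker ratings.

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Plain Pearson correlation (avoids depending on Python 3.10+ statistics.correlation)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example data: one fact-checker rating and three lay ratings
# per headline, on a 1-7 accuracy scale.
fact_checker = [1.0, 2.0, 6.0, 7.0]
lay_ratings = [[2, 1, 3], [3, 2, 2], [5, 6, 7], [6, 7, 6]]

crowd = [mean(r) for r in lay_ratings]  # aggregate the crowd per headline
r = pearson(fact_checker, crowd)        # high r: the crowd tracks the experts
```

Individual lay ratings here are noisy, but averaging cancels much of that noise, which is why the aggregate can correlate strongly with expert judgments even when no single rater does.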
Collapse
Affiliation(s)
- Cameron Martel
- Sloan School of Management, Massachusetts Institute of Technology
| | - Jennifer Allen
- Sloan School of Management, Massachusetts Institute of Technology
| | | | - David G Rand
- Sloan School of Management, Massachusetts Institute of Technology
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology
| |
Collapse
|
96
|
Maertens R, Götz FM, Golino HF, Roozenbeek J, Schneider CR, Kyrychenko Y, Kerr JR, Stieger S, McClanahan WP, Drabot K, He J, van der Linden S. The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment. Behav Res Methods 2024; 56:1863-1899. [PMID: 37382812 PMCID: PMC10991074 DOI: 10.3758/s13428-023-02124-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/05/2023] [Indexed: 06/30/2023]
Abstract
Interest in the psychology of misinformation has exploded in recent years. Despite ample research, to date there is no validated framework to measure misinformation susceptibility. Therefore, we introduce Verification done, a nuanced interpretation schema and assessment tool that simultaneously considers Veracity discernment and its distinct, measurable abilities (real/fake news detection) and biases (distrust/naïvité, negative/positive judgment bias). We then conduct three studies with seven independent samples (Ntotal = 8504) to show how to develop, validate, and apply the Misinformation Susceptibility Test (MIST). In Study 1 (N = 409), we use a neural network language model to generate items and apply three psychometric methods (factor analysis, item response theory, and exploratory graph analysis) to create the MIST-20 (20 items; completion time < 2 minutes), the MIST-16 (16 items; < 2 minutes), and the MIST-8 (8 items; < 1 minute). In Study 2 (N = 7674), we confirm the internal and predictive validity of the MIST in five national quota samples (US, UK), across 2 years, from three different sampling platforms: Respondi, CloudResearch, and Prolific. We also explore the MIST's nomological net and generate age-, region-, and country-specific norm tables. In Study 3 (N = 421), we demonstrate how the MIST, in conjunction with Verification done, can provide novel insights on existing psychological interventions, thereby advancing theory development. Finally, we outline the versatile implementations of the MIST as a screening tool, covariate, and intervention evaluation framework. As all methods are transparently reported and detailed, this work will allow other researchers to create similar scales or adapt them for any population of interest.
Collapse
Affiliation(s)
- Rakoen Maertens
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK.
| | - Friedrich M Götz
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
| | | | - Jon Roozenbeek
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
| | - Claudia R Schneider
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
| | - Yara Kyrychenko
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
| | - John R Kerr
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
| | - Stefan Stieger
- Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
| | - William P McClanahan
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- Max Planck Institute for the Study of Crime, Security and Law, Freiburg im Breisgau, Germany
| | - Karly Drabot
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
| | - James He
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
| | - Sander van der Linden
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
| |
Collapse
|
97
|
Schüz B, Jones C. [Mis- and disinformation in social media: mitigating risks in digital health communication]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024; 67:300-307. [PMID: 38332143 PMCID: PMC10927781 DOI: 10.1007/s00103-024-03836-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 01/15/2024] [Indexed: 02/10/2024]
Abstract
Misinformation and disinformation in social media have become a challenge for effective public health measures. Here, we examine factors that influence believing and sharing false information, both misinformation and disinformation, at the individual, social, and contextual levels, and discuss possibilities for intervention.
At the individual level, knowledge deficits, lack of skills, and emotional motivation have been associated with believing in false information. Lower health literacy, a conspiracy mindset, and certain beliefs increase susceptibility to false information. At the social level, the credibility of information sources and social norms influence the sharing of false information. At the contextual level, emotions and the repetition of messages affect belief in and sharing of false information.
Interventions at the individual level involve measures to improve knowledge and skills. At the social level, addressing social processes and social norms can reduce the sharing of false information. At the contextual level, regulatory approaches involving social networks are considered an important point of intervention.
Social inequalities play an important role in exposure to and processing of misinformation. It remains unclear to what degree the susceptibility to believing in and sharing misinformation is an individual characteristic and/or context dependent. Complex interventions that take multiple influencing factors into account are required.
Collapse
Affiliation(s)
- Benjamin Schüz
- Institut für Public Health und Pflegeforschung, Universität Bremen, Grazer Straße 4, 28359, Bremen, Deutschland.
- Leibniz-WissenschaftsCampus Digital Public Health, Bremen, Deutschland.
| | - Christopher Jones
- Institut für Public Health und Pflegeforschung, Universität Bremen, Grazer Straße 4, 28359, Bremen, Deutschland
- Leibniz-WissenschaftsCampus Digital Public Health, Bremen, Deutschland
- Zentrum für Präventivmedizin und Digitale Gesundheit (CPD), Medizinische Fakultät Mannheim der Universität Heidelberg, Mannheim, Deutschland
| |
Collapse
|
98
|
Butler LH, Lamont P, Wan DLY, Prike T, Nasim M, Walker B, Fay N, Ecker UKH. The (Mis)Information Game: A social media simulator. Behav Res Methods 2024; 56:2376-2397. [PMID: 37433974 PMCID: PMC10991066 DOI: 10.3758/s13428-023-02153-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/25/2023] [Indexed: 07/13/2023]
Abstract
Given the potential negative impact reliance on misinformation can have, substantial effort has gone into understanding the factors that influence misinformation belief and propagation. However, despite the rise of social media often being cited as a fundamental driver of misinformation exposure and false beliefs, how people process misinformation on social media platforms has been under-investigated. This is partially due to a lack of adaptable and ecologically valid social media testing paradigms, resulting in an over-reliance on survey software and questionnaire-based measures. To provide researchers with a flexible tool to investigate the processing and sharing of misinformation on social media, this paper presents The Misinformation Game, an easily adaptable, open-source online testing platform that simulates key characteristics of social media. Researchers can customize posts (e.g., headlines, images), source information (e.g., handles, avatars, credibility), and engagement information (e.g., a post's number of likes and dislikes). The platform allows a range of response options for participants (like, share, dislike, flag) and supports comments. The simulator can also present posts on individual pages or in a scrollable feed, and can provide customized dynamic feedback to participants via changes to their follower count and credibility score, based on how they interact with each post. Notably, no specific programming skills are required to create studies using the simulator. Here, we outline the key features of the simulator and provide a non-technical guide for use by researchers. We also present results from two validation studies. All source code and instructions are freely available online at https://misinfogame.com.
Collapse
Affiliation(s)
- Lucy H Butler
- School of Psychological Science, University of Western Australia, Crawley, WA, Australia
| | - Padraig Lamont
- School of Engineering, University of Western Australia, Crawley, WA, Australia
| | - Dean Law Yim Wan
- School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, Australia
| | - Toby Prike
- School of Psychological Science, University of Western Australia, Crawley, WA, Australia
| | - Mehwish Nasim
- School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, Australia
| | - Bradley Walker
- School of Psychological Science, University of Western Australia, Crawley, WA, Australia
| | - Nicolas Fay
- School of Psychological Science, University of Western Australia, Crawley, WA, Australia
| | - Ullrich K H Ecker
- School of Psychological Science, University of Western Australia, Crawley, WA, Australia.
- Public Policy Institute, University of Western Australia, Crawley, WA, Australia.
| |
Collapse
|
99
|
Fiedler S, Habibnia H, Fahrenwaldt A, Rahal RM. Motivated Cognition in Cooperation. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2024; 19:385-403. [PMID: 37883800 PMCID: PMC10913374 DOI: 10.1177/17456916231193990] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2023]
Abstract
Successful cooperation is tightly linked to individuals' beliefs about their interaction partners, the decision setting, and existing norms, perceptions, and values. This article reviews and integrates findings from judgment and decision-making, social and cognitive psychology, political science, and economics, developing a systematic overview of the mechanisms underlying motivated cognition in cooperation. We elaborate on how theories and concepts related to motivated cognition developed in various disciplines define the concept and describe its functionality. We explain why beliefs play such an essential role in cooperation, how they can be distorted, and how this fosters or harms cooperation. We also highlight how individual differences and situational factors change the propensity to engage in motivated cognition. In the form of a construct map, we provide a visualization of the theoretical and empirical knowledge structure regarding the role of motivated cognition, including its many interdependencies, feedback loops, and moderating influences. We conclude with a brief suggestion for a future research agenda based on this compiled evidence.
Collapse
Affiliation(s)
- Susann Fiedler
- Vienna University of Economics and Business, Austria
- Max Planck Institute for Research on Collective Goods, Bonn, Germany
| | | | - Alina Fahrenwaldt
- Max Planck Institute for Research on Collective Goods, Bonn, Germany
- Faculty of Human Sciences, University of Cologne, Germany
| | - Rima-Maria Rahal
- Max Planck Institute for Research on Collective Goods, Bonn, Germany
| |
Collapse
|
100
|
Goldstone RL, Dubova M, Aiyappa R, Edinger A. The Spread of Beliefs in Partially Modularized Communities. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2024; 19:404-417. [PMID: 38019565 DOI: 10.1177/17456916231198238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2023]
Abstract
Many life-influencing social networks are characterized by considerable informational isolation. People within a community are far more likely to share beliefs than people who are part of different communities. The spread of useful information across communities is impeded by echo chambers (far greater connectivity within than between communities) and filter bubbles (more influence of beliefs by connected neighbors within than between communities). We apply the tools of network analysis to organize our understanding of the spread of beliefs across modularized communities and to predict the effect of individual and group parameters on the dynamics and distribution of beliefs. In our Spread of Beliefs in Modularized Communities (SBMC) framework, a stochastic block model generates social networks with variable degrees of modularity, beliefs have different observable utilities, individuals change their beliefs on the basis of summed or average evidence (or intermediate decision rules), and parameterized stochasticity introduces randomness into decisions. SBMC simulations show surprising patterns; for example, increasing out-group connectivity does not always improve group performance, adding randomness to decisions can promote performance, and decision rules that sum rather than average evidence can improve group performance, as measured by the average utility of beliefs that the agents adopt. Overall, the results suggest that intermediate degrees of belief exploration are beneficial for the spread of useful beliefs in a community, and so parameters that pull in opposite directions on an explore-exploit continuum are usefully paired.
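The simulation loop this abstract describes can be sketched compactly. The following is a minimal sketch under stated assumptions (two groups, illustrative parameters, Gaussian observation noise on belief payoffs), not the authors' SBMC implementation:

```python
import random

def make_modular_network(n_groups=2, group_size=20, p_in=0.3, p_out=0.02, seed=0):
    """Stochastic block model: dense ties within groups, sparse ties between them."""
    rng = random.Random(seed)
    n = n_groups * group_size
    group = [i // group_size for i in range(n)]
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < (p_in if group[i] == group[j] else p_out):
                edges[i].add(j)
                edges[j].add(i)
    return edges

def step(edges, beliefs, utility, rule="sum", noise=0.05, seed=1):
    """One update: each agent adopts the neighboring belief whose noisily
    observed payoffs score best under a summed or averaged evidence rule,
    with a small chance of adopting a random belief instead (exploration)."""
    rng = random.Random(seed)
    new_beliefs = dict(beliefs)
    for i, nbrs in edges.items():
        if not nbrs:
            continue  # isolated agents keep their current belief
        evidence = {}
        for j in nbrs:
            b = beliefs[j]
            # Each neighbor provides a noisy observation of its belief's payoff.
            evidence.setdefault(b, []).append(utility[b] + rng.gauss(0, 0.5))
        def score(b):
            vals = evidence[b]
            return sum(vals) if rule == "sum" else sum(vals) / len(vals)
        best = max(evidence, key=score)
        if rng.random() < noise:
            best = rng.choice(list(utility))
        new_beliefs[i] = best
    return new_beliefs
```

Sweeping `p_out`, `noise`, and `rule` while tracking the mean utility of adopted beliefs reproduces, in miniature, the explore-exploit trade-offs the abstract reports (e.g., summing evidence favors beliefs that are both popular and useful, while decision noise sustains exploration).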
Collapse
Affiliation(s)
- Robert L Goldstone
- Department of Psychological and Brain Sciences, Indiana University
- Program in Cognitive Science, Indiana University
| | | | - Rachith Aiyappa
- Center for Complex Networks and Systems, Luddy School of Informatics, Computing, and Engineering, Indiana University
| | - Andy Edinger
- Program in Cognitive Science, Indiana University
- Center for Complex Networks and Systems, Luddy School of Informatics, Computing, and Engineering, Indiana University
| |
Collapse
|