1
Mayo R. Trust or distrust? Neither! The right mindset for confronting disinformation. Curr Opin Psychol 2024; 56:101779. PMID: 38134524. DOI: 10.1016/j.copsyc.2023.101779.
Abstract
A primary explanation for why individuals believe disinformation is the truth bias, a predisposition to accept information as true. However, this bias is context-dependent, as research shows that rejection becomes the predominant process in a distrust mindset. Consequently, trust and distrust emerge as pivotal factors in addressing disinformation. The current review offers a more nuanced perspective by illustrating that whereas distrust may act as an antidote to the truth bias, it can also paradoxically serve as a catalyst for belief in disinformation. The review concludes that mindsets other than those rooted solely in trust (or distrust), such as an evaluative mindset, may prove to be more effective in detecting and refuting disinformation.
Affiliation(s)
- Ruth Mayo, The Hebrew University of Jerusalem, Israel
2
Bernhard RM, Frankland SM, Plunkett D, Sievers B, Greene JD. Evidence for Spinozan "Unbelieving" in the Right Inferior Prefrontal Cortex. J Cogn Neurosci 2023; 35:659-680. PMID: 36638227. DOI: 10.1162/jocn_a_01964.
Abstract
Humans can think about possible states of the world without believing in them, an important capacity for high-level cognition. Here, we use fMRI and a novel "shell game" task to test two competing theories about the nature of belief and its neural basis. According to the Cartesian theory, information is first understood, then assessed for veracity, and ultimately encoded as either believed or not believed. According to the Spinozan theory, comprehension entails belief by default, such that understanding without believing requires an additional process of "unbelieving." Participants (n = 70) were experimentally induced to have beliefs, desires, or mere thoughts about hidden states of the shell game (e.g., believing that the dog is hidden in the upper right corner). That is, participants were induced to have specific "propositional attitudes" toward specific "propositions" in a controlled way. Consistent with the Spinozan theory, we found that thinking about a proposition without believing it is associated with increased activation of the right inferior frontal gyrus. This was true whether the hidden state was desired by the participant (because of reward) or merely thought about. These findings are consistent with a version of the Spinozan theory whereby unbelieving is an inhibitory control process. We consider potential implications of these results for the phenomena of delusional belief and wishful thinking.
3
Li X, Li S, Li J, Yao J, Xiao X. Detection of fake-video uploaders on social media using Naive Bayesian model with social cues. Sci Rep 2021; 11:16068. PMID: 34373531. PMCID: PMC8352884. DOI: 10.1038/s41598-021-95514-5.
Abstract
With the rapid development of the Internet, the wide circulation of disinformation has considerably disrupted the search for and recognition of information. Despite intensive research devoted to detecting fake text, studies on the fake short videos that inundate the Internet are rare. Because of their quick transmission and broad reach, fake videos can increase misunderstanding, distort decision-making, and lead to irrevocable losses. It is therefore important to detect fake videos that mislead users on the Internet. Since detecting fake videos directly is difficult, this study instead examined the detection of fake-video uploaders, with the aim of providing a basis for fake-video detection. Specifically, a dataset of 450 uploaders of videos on diabetes and traditional Chinese medicine was constructed, five features of fake-video uploaders were proposed, and a Naive Bayesian model was built. Experiments identified the optimal feature combination, and the proposed model reached a maximum accuracy of 70.7%.
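The abstract names the general recipe (uploader-level social-cue features fed to a Naive Bayesian classifier) but not the five features themselves or the model configuration. A minimal sketch of that recipe follows; the feature names and the synthetic data are illustrative placeholders, not the authors' dataset or code.

```python
# Minimal sketch: flag fake-video uploaders from profile-level "social cues"
# with a Naive Bayes model (hypothetical features, randomly generated data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder uploader-level features (one row per uploader), e.g.
# follower_count, upload_frequency, comment_disagreement_ratio,
# account_age_days, verified_flag -- NOT the paper's actual five features.
X = rng.random((450, 5))
y = rng.integers(0, 2, size=450)  # 1 = fake-video uploader, 0 = genuine

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Naive Bayes treats features as conditionally independent given the class.
model = GaussianNB()
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice, different feature subsets would be compared on held-out data to find the best-performing combination, mirroring the feature-selection step described in the abstract.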
Affiliation(s)
- Xiaojun Li, Xi'an Research Institute of High-Tech, Xi'an 710025, China
- Shaochen Li, Xi'an Research Institute of High-Tech, Xi'an 710025, China
- Jia Li, School of Business, East China University of Science and Technology, Shanghai 200237, China
- Junping Yao, Xi'an Research Institute of High-Tech, Xi'an 710025, China
- Xvhao Xiao, Xi'an Research Institute of High-Tech, Xi'an 710025, China
4
Nadarevic L, Symeonidou N, Kias A. In Colore Veritas? Color effects on the speed and accuracy of true/false responses. Psychol Res 2021; 86:919-936. PMID: 34050785. PMCID: PMC8942928. DOI: 10.1007/s00426-021-01528-z.
Abstract
In addition to their perceptual or aesthetic function, colors often carry conceptual meaning. In quizzes, for instance, true and false answers are typically marked in green and red. In three experiments, we used a Stroop task to investigate automatic green-true and red-false associations. In Experiments 1 and 2, stimuli were true statements (e.g., “tables are furniture”) and false statements (e.g., “bananas are buildings”) that were displayed in different combinations of green, red, and gray depending on the experimental condition. In Experiment 3, we used true-related and false-related words shown in green, red, or gray. Participants had to indicate the validity (or semantic meaning) of each statement (or word) as quickly and as accurately as possible. We expected that participants would perform best when they had to categorize green stimuli as “true” and red stimuli as “false”. This prediction was confirmed only when green and red stimuli were presented within the same context (i.e., the same experimental condition). This finding supports the dimension-specificity hypothesis, which states that cross-modal associations (here, associations between color and validity) depend on the context (here, the color context). Moreover, the observed color-validity effects were stronger when participants had to categorize single words instead of sentences and when they had to provide speeded responses. Taken together, these results suggest that controlled processing counteracts the influence of automatic color associations on true/false responses.
Affiliation(s)
- Lena Nadarevic, Department of Psychology, School of Social Sciences, University of Mannheim, 68131 Mannheim, Germany
- Nikoletta Symeonidou, Department of Psychology, School of Social Sciences, University of Mannheim, 68131 Mannheim, Germany
- Alina Kias, Department of Psychology, School of Social Sciences, University of Mannheim, 68131 Mannheim, Germany
5
Is justice blind or myopic? An examination of the effects of meta-cognitive myopia and truth bias on mock jurors and judges. Judgment and Decision Making 2020. DOI: 10.1017/s1930297500007361.
Abstract
Previous studies have shown that people are truth-biased: they tend to believe the information they receive, even when it is clearly flagged as false. The truth bias has recently been proposed to be an instance of meta-cognitive myopia, that is, of a generalized human insensitivity to the quality and correctness of the information available in the environment. In two studies we tested whether meta-cognitive myopia and the ensuing truth bias operate in a courtroom setting. Based on a well-established paradigm in the truth-bias literature, we asked mock jurors (Study 1) and professional judges (Study 2) to read two crime reports containing aggravating or mitigating information that was explicitly flagged as false. Our findings suggest that jurors and judges are truth-biased, as their decisions and memory of the cases were affected by the false information. We discuss the implications of the potential operation of the truth bias in the courtroom in light of the literature on inadmissible and discredited evidence, and make some policy suggestions.
6
Weil R, Mudrik L. Detecting falsehood relies on mismatch detection between sentence components. Cognition 2019; 195:104121. PMID: 31733397. DOI: 10.1016/j.cognition.2019.104121.
Abstract
How do people process and evaluate the falsehood of sentences? Do people need to compare presented information with the correct answer to determine that a sentence is false, or do they rely on a mismatch between presented sentence components? To illustrate, when confronted with the false sentence 'trains run on highways', does one need to know that trains do not run on highways, or does one need to know that trains run on tracks, to reject the sentence as false? To investigate these questions, participants were asked to validate sentences that were preceded by images (Experiments 1-3) conveying a truth-congruent or a falsehood-congruent component of the sentence (e.g., an image of tracks/a highway preceding the sentence 'trains run on tracks/highways') or by words (Experiment 4) that were either sentence-congruent, truth-congruent, or both (e.g., the word 'train/tracks' preceding the sentence 'trains run on tracks/highways'). Results from four experiments showed that activating sentence-congruent concepts facilitated validation of both false and true sentences, but activating truth-congruent concepts did not aid the validation of false sentences. The present findings suggest that detecting falsehood relies on detecting a mismatch between sentence components, rather than on activating true content in the context of a particular sentence.
Affiliation(s)
- Rebecca Weil, Department of Psychology, Faculty of Health Sciences, University of Hull, HU6 7RX, United Kingdom
- Liad Mudrik, School of Psychological Sciences, Faculty of Social Sciences, Tel Aviv University, Tel Aviv 69978, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel
7
More evidence against the Spinozan model: Cognitive load diminishes memory for "true" feedback. Mem Cognit 2019; 47:1386-1400. PMID: 31215012. DOI: 10.3758/s13421-019-00940-6.
Abstract
We tested two competing models on the memory representation of truth-value information: the Spinozan model and the Cartesian model. Both models assume that truth-value information is represented with memory "tags," but the models differ in their coding scheme. According to the Cartesian model, true information is stored with a "true" tag, and false information is stored with a "false" tag. In contrast, the Spinozan model proposes that only false information receives "false" tags. All other (i.e., untagged) information is considered as true by default. Hence, in case of cognitive load during feedback encoding, the latter model predicts a load effect on memory for "false" feedback, but not on memory for "true" feedback. To test this prediction, participants studied trivia statements (Experiment 1) or nonsense statements that allegedly represented foreign-language translations (Experiment 2). After each statement, participants received feedback on the (alleged) truth value of the statement. Importantly, half of the participants experienced cognitive load during feedback processing. For the trivia statements of Experiment 1, we observed a load effect on memory for both "false" and "true" feedback. In contrast, for the nonsense statements of Experiment 2, we found a load effect on memory for "true" feedback only. Both findings clearly contradict the Spinozan model. However, our results are also only partially in line with the predictions of the Cartesian model. For this reason, we suggest a more flexible model that allows for an optional and context-dependent encoding of "true" tags and "false" tags.
8
Nera K, Pantazi M, Klein O. "These Are Just Stories, Mulder": Exposure to Conspiracist Fiction Does Not Produce Narrative Persuasion. Front Psychol 2018; 9:684. PMID: 29875710. PMCID: PMC5974536. DOI: 10.3389/fpsyg.2018.00684.
Abstract
Narrative persuasion, i.e., the impact of narratives on beliefs, behaviors, and attitudes, and the mechanisms underpinning endorsement of conspiracy theories have both drawn substantial attention from social scientists. Yet, to date, these two fields have evolved separately, and to our knowledge no study has empirically examined the impact of conspiracy narratives on real-world conspiracy beliefs. In a first study, we exposed a group of participants (n = 37) to an X-Files episode before asking them to fill in a questionnaire about their narrative experience and conspiracy beliefs. A control group (n = 41) answered the conspiracy-beliefs items before watching the episode. Based on past findings in both of the aforementioned fields of research, we hypothesized that the experimental group would show greater endorsement of conspiracy beliefs, an effect expected to be mediated by identification with the episode's characters. We furthermore hypothesized that identification would be associated with cognitive elaboration of the topics developed in the narrative. The first two hypotheses were not supported, as no narrative persuasion effect was observed. In a second study, we sought to replicate these results in a larger sample (n = 166). No persuasive effect was found in the new data, and a Bayesian meta-analysis of the two studies strongly supports the absence of a positive effect of exposure to narrative material on endorsement of conspiracy theories. In both studies, a significant relation between conspiracy mentality and enjoyment was observed. In the second study, this relation was fully mediated by two dimensions of perceived realism, i.e., plausibility and narrative consistency. We discuss our results based on theoretical models of narrative persuasion and compare our studies with previous narrative persuasion studies. Implications of these results for future research are also discussed.
Affiliation(s)
- Kenzo Nera, Center for Social and Cultural Psychology, Université Libre de Bruxelles, Brussels, Belgium
- Myrto Pantazi, Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Olivier Klein, Center for Social and Cultural Psychology, Université Libre de Bruxelles, Brussels, Belgium