1. Bondielli A, Dell'Oglio P, Lenci A, Marcelloni F, Passaro L. Dataset for multimodal fake news detection and verification tasks. Data Brief 2024;54:110440. PMID: 38711737; PMCID: PMC11070666; DOI: 10.1016/j.dib.2024.110440.
Abstract
The proliferation of online disinformation and fake news, particularly around breaking news events, demands effective detection mechanisms. While text remains the predominant medium for disseminating misleading information, other modalities play a growing role in online outlets and on social media platforms. However, multimodal datasets, which combine modalities such as text and images, are still uncommon, especially for low-resource languages. This study addresses that gap by releasing a dataset tailored for multimodal fake news detection in Italian, originally employed in a shared task on the Italian language. The dataset is divided into two subsets, each corresponding to a distinct sub-task. Sub-task 1 assesses the effectiveness of multimodal fake news detection systems. Sub-task 2 delves into the interplay between text and images, specifically how the two modalities mutually influence the interpretation of content when distinguishing between fake and real news. Both sub-tasks are framed as classification problems. The dataset consists of social media posts and news articles; after collection, the data were labeled via crowdsourcing. Annotators were provided with external knowledge about the topic of each news item, enhancing their ability to discriminate between fake and real news. The subsets for sub-task 1 and sub-task 2 contain 913 and 1350 items, respectively, encompassing newspaper articles and tweets.
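Both sub-tasks are supervised classification over text-image pairs. As an illustration only (not the dataset authors' system), a minimal late-fusion baseline for such a task could combine per-modality fake-news probabilities; the 0.6 text weight and 0.5 threshold below are assumptions:

```python
def late_fusion(text_prob, image_prob, w_text=0.6):
    """Weighted average of per-modality probabilities that an item is fake.

    text_prob / image_prob would come from separate text and image
    classifiers; the 0.6 text weight is an illustrative assumption.
    """
    return w_text * text_prob + (1.0 - w_text) * image_prob

# Hypothetical scored items from a multimodal corpus of tweets/articles.
items = [
    {"text_prob": 0.9, "image_prob": 0.7},   # both modalities look fake
    {"text_prob": 0.2, "image_prob": 0.4},   # both modalities look real
]
predictions = ["fake" if late_fusion(i["text_prob"], i["image_prob"]) > 0.5
               else "real" for i in items]
print(predictions)  # ['fake', 'real']
```

A real system would learn the fusion weight jointly with the classifiers; a fixed weighted average is merely the simplest way to make the two modalities interact.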
Affiliation(s)
- Alessandro Bondielli
- Department of Computer Science, University of Pisa, Largo Bruno Pontecorvo, 3, 56127, Pisa, Italy
- Pietro Dell'Oglio
- Department of Information Engineering, University of Pisa, Largo Lucio Lazzarino, 1, 56122, Pisa, Italy
- Alessandro Lenci
- Department of Philology, Literature and Linguistics, University of Pisa, Via S. Maria 36, 56127, Pisa, Italy
- Francesco Marcelloni
- Department of Information Engineering, University of Pisa, Largo Lucio Lazzarino, 1, 56122, Pisa, Italy
- Lucia Passaro
- Department of Computer Science, University of Pisa, Largo Bruno Pontecorvo, 3, 56127, Pisa, Italy
2. Speckmann F, Unkelbach C. Illusions of knowledge due to mere repetition. Cognition 2024;247:105791. PMID: 38593568; DOI: 10.1016/j.cognition.2024.105791.
Abstract
Repeating information increases people's belief that it is true. This truth effect has been widely researched and is relevant to topics such as fake news and misinformation. Another effect of repetition, also relevant to these topics, has received little attention so far: do people believe they knew something before it was repeated? We used a standard truth-effect paradigm, with a presentation phase and a judgment phase, in four pre-registered experiments (total N = 773). However, instead of "true"/"false" judgments, participants indicated whether they had known a given trivia statement before participating in the experiment. Across all experiments, participants judged repeated information as "known" more often than novel information, and they did so even for repeated false information. In addition, participants generated sources for this presumed knowledge. The inability to distinguish recently encountered information from well-established knowledge in memory adds an explanation for the persistence and strength of repetition effects on truth: the truth effect may be so robust because people believe they know the repeatedly presented information as a matter of fact.
3. Kemp PL, Sinclair AH, Adcock RA, Wahlheim CN. Memory and belief updating following complete and partial reminders of fake news. Cogn Res Princ Implic 2024;9:28. PMID: 38713308; DOI: 10.1186/s41235-024-00546-w.
Abstract
Fake news can have enduring effects on memory and beliefs. An ongoing theoretical debate concerns whether corrections (fact-checks) should include reminders of the fake news. The familiarity backfire account proposes that reminders hinder correction (increasing interference), whereas integration-based accounts argue that reminders facilitate correction (promoting memory integration). In three experiments, we examined how different types of corrections influenced memory for, and belief in, news headlines. In the exposure phase, participants viewed real and fake news headlines. In the correction phase, participants viewed reminders of fake news that either reiterated the false details (complete) or prompted recall of missing false details (partial); reminders were followed by fact-checked headlines correcting the false details. Both reminder types led to proactive interference in memory for corrected details, but complete reminders produced less interference than partial reminders (Experiment 1). However, when participants had fewer initial exposures to fake news and experienced a delay between exposure and correction, this effect reversed: partial reminders led to proactive facilitation, enhancing correction (Experiment 2). This effect occurred regardless of the delay before correction (Experiment 3), suggesting that the effects of partial reminders depend on the number of prior fake news exposures. In all experiments, memory and perceived accuracy were better when fake news and corrections were recollected, implicating a critical role for integrative encoding. Overall, we show that when memories of fake news are weak or less accessible, partial reminders are more effective for correction; when memories of fake news are stronger or more accessible, complete reminders are preferable.
Affiliation(s)
- Paige L Kemp
- Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P.O. Box 26170, Greensboro, NC 27402-6170, USA
- Alyssa H Sinclair
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
- Center for Science, Sustainability, and the Media, University of Pennsylvania, Philadelphia, USA
- R Alison Adcock
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, USA
- Christopher N Wahlheim
- Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P.O. Box 26170, Greensboro, NC 27402-6170, USA
4. Maham S, Tariq A, Khan MUG, Alamri FS, Rehman A, Saba T. ANN: adversarial news net for robust fake news classification. Sci Rep 2024;14:7897. PMID: 38570535; PMCID: PMC10991274; DOI: 10.1038/s41598-024-56567-4.
Abstract
With easy access to social media platforms, the spread of fake news has become a growing concern. Classifying fake news is essential, as it can help prevent its negative impact on individuals and society. To this end, an end-to-end framework for fake news detection, named Adversarial News Net (ANN), is developed, using adversarial training to make the model more robust and resilient. Emoticons are extracted from the datasets to capture their meaning in relation to fake news, and this information is fed into the model to improve its classification performance. The ANN framework is evaluated on four publicly available datasets and outperforms baseline methods and previous studies after adversarial training. Experiments show that adversarial training improved accuracy by 2.1% over a Random Forest baseline and by 2.4% over a BERT baseline. The proposed framework can detect fake news in real time, thereby mitigating its harmful effects on society.
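The abstract does not detail the adversarial training procedure; a common choice for such classifiers is an FGSM-style perturbation of input features. The sketch below is a generic illustration under that assumption, using a toy logistic-regression scorer (all weights and values are hypothetical, not taken from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps=0.1):
    """One FGSM step: nudge each input feature in the direction that
    increases the log-loss, yielding a worst-case neighbour that the
    model can then be trained on (the core of adversarial training)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]            # d(log-loss)/dx
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [1.0, -2.0], 0.0           # toy classifier weights
x, y = [0.5, 0.5], 1.0            # one training example labelled "fake" = 1
x_adv = fgsm_perturb(x, w, b, y)  # ≈ [0.4, 0.6]: a harder version of x
```

Training then minimizes the loss on `x_adv` alongside (or instead of) `x`, which is what makes the resulting model "robust" to small input perturbations.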
Affiliation(s)
- Shiza Maham
- National Center of Artificial Intelligence, Al-Khawarizmi Institute of Computer Science, UET, Lahore, Pakistan
- Abdullah Tariq
- National Center of Artificial Intelligence, Al-Khawarizmi Institute of Computer Science, UET, Lahore, Pakistan
- Muhammad Usman Ghani Khan
- National Center of Artificial Intelligence, Al-Khawarizmi Institute of Computer Science, UET, Lahore, Pakistan
- Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, 11586, Riyadh, Saudi Arabia
- Faten S Alamri
- Department of Mathematical Sciences, College of Science, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, 11586, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, 11586, Riyadh, Saudi Arabia
5. Brashier NM. Fighting misinformation among the most vulnerable users. Curr Opin Psychol 2024;57:101813. PMID: 38670015; DOI: 10.1016/j.copsyc.2024.101813.
Abstract
Misinformation undermines trust in the integrity of democratic elections, the safety of vaccines, and the authenticity of footage from war zones. Social scientists have proposed many solutions to reduce individuals' demand for fake news, but it is unclear how to evaluate them. Efficacy can mean that an intervention increases discernment (the ability to distinguish true from false content), works over a delay, scales up, and engages users. I argue that experts should also consider differences in exposure prevalence before declaring success. Misleading content makes up a small fraction of the average person's news diet, but some groups are at increased risk: conservatives and older adults see and share the most fake news. Targeting the whole population (universal prevention) could concentrate benefits among the users who already see the least misinformation. To complement these approaches, we should design interventions for the people who need them most: conservatives and older adults (selective prevention), as well as users who have already shared low-quality news (indicated prevention).
Affiliation(s)
- Nadia M Brashier
- Department of Psychology, University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
6. Maertens R, Götz FM, Golino HF, Roozenbeek J, Schneider CR, Kyrychenko Y, Kerr JR, Stieger S, McClanahan WP, Drabot K, He J, van der Linden S. The Misinformation Susceptibility Test (MIST): a psychometrically validated measure of news veracity discernment. Behav Res Methods 2024;56:1863-1899. PMID: 37382812; PMCID: PMC10991074; DOI: 10.3758/s13428-023-02124-2.
Abstract
Interest in the psychology of misinformation has exploded in recent years. Despite ample research, there is to date no validated framework for measuring misinformation susceptibility. We therefore introduce Verification done, a nuanced interpretation schema and assessment tool that simultaneously considers Veracity discernment and its distinct, measurable abilities (real/fake news detection) and biases (distrust/naïvité: negative/positive judgment bias). We then conduct three studies with seven independent samples (Ntotal = 8504) to show how to develop, validate, and apply the Misinformation Susceptibility Test (MIST). In Study 1 (N = 409), we use a neural network language model to generate items, and three psychometric methods (factor analysis, item response theory, and exploratory graph analysis) to create the MIST-20 (20 items; completion time < 2 minutes), the MIST-16 (16 items; < 2 minutes), and the MIST-8 (8 items; < 1 minute). In Study 2 (N = 7674), we confirm the internal and predictive validity of the MIST in five national quota samples (US, UK), across two years, from three sampling platforms: Respondi, CloudResearch, and Prolific. We also explore the MIST's nomological net and generate age-, region-, and country-specific norm tables. In Study 3 (N = 421), we demonstrate how the MIST, in conjunction with Verification done, can provide novel insights into existing psychological interventions, thereby advancing theory development. Finally, we outline versatile implementations of the MIST as a screening tool, covariate, and intervention evaluation framework. As all methods are transparently reported and detailed, this work will allow other researchers to create similar scales or adapt them for any population of interest.
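The distinction between real-news detection, fake-news detection, and overall veracity discernment can be sketched as simple scoring over a binary judgment task. The function below is an illustration, not the MIST's exact scoring rule: summarizing discernment as overall proportion correct is a simplifying assumption.

```python
def mist_style_scores(judged_real, actually_real):
    """Score a binary real/fake news judgment task.

    judged_real: participant's 'real' (True) / 'fake' (False) calls.
    actually_real: ground-truth labels for the same items.
    NOTE: treating veracity discernment as overall proportion correct
    is an assumption for illustration, not the published scoring rule.
    """
    real_hits = sum(j and a for j, a in zip(judged_real, actually_real))
    fake_hits = sum((not j) and (not a)
                    for j, a in zip(judged_real, actually_real))
    n_real = sum(actually_real)
    n_fake = len(actually_real) - n_real
    return {
        "real_news_detection": real_hits / n_real,
        "fake_news_detection": fake_hits / n_fake,
        "veracity_discernment": (real_hits + fake_hits) / len(actually_real),
    }

# A hypothetical participant judging four items (two real, two fake).
scores = mist_style_scores(
    judged_real=[True, False, False, True],
    actually_real=[True, True, False, False],
)
```

Separating the two detection rates matters because a participant who calls everything "fake" scores perfectly on fake-news detection while showing a strong distrust bias; only the combination reveals genuine discernment.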
Affiliation(s)
- Rakoen Maertens
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- Friedrich M Götz
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Jon Roozenbeek
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- Claudia R Schneider
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- Yara Kyrychenko
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- John R Kerr
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- Stefan Stieger
- Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
- William P McClanahan
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- Max Planck Institute for the Study of Crime, Security and Law, Freiburg im Breisgau, Germany
- Karly Drabot
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- James He
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
- Sander van der Linden
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, Cambridgeshire, UK
7. Raman R, Kumar Nair V, Nedungadi P, Kumar Sahu A, Kowalski R, Ramanathan S, Achuthan K. Fake news research trends, linkages to generative artificial intelligence and sustainable development goals. Heliyon 2024;10:e24727. PMID: 38322879; PMCID: PMC10844021; DOI: 10.1016/j.heliyon.2024.e24727.
Abstract
In the digital age, where information is a cornerstone of decision-making, social media's loosely regulated environment has intensified the prevalence of fake news, with significant implications for individuals and societies. This study employs a bibliometric analysis of a large corpus of 9678 publications spanning 2013-2022 to trace the evolution of fake news research, identifying leading authors, institutions, and nations. Three thematic clusters emerge: disinformation on social media, COVID-19-induced infodemics, and techno-scientific advances in automatic detection. The work makes three novel contributions: 1) a pioneering mapping of fake news research to the Sustainable Development Goals (SDGs), indicating its influence on areas such as health (SDG 3), peace (SDG 16), and industry (SDG 9); 2) the use of Prominence percentile metrics to discern critical and economically prioritized research areas, such as misinformation and object detection in deep learning; and 3) an evaluation of generative AI's role in the propagation and realism of fake news, raising pressing ethical concerns. Together, these contributions provide a comprehensive overview of the current state and future trajectories of fake news research, offering valuable insights for academia, policymakers, and industry.
Affiliation(s)
- Raghu Raman
- Amrita School of Business, Amrita Vishwa Vidyapeetham, Amritapuri, Kerala, 690525, India
- Vinith Kumar Nair
- Amrita School of Business, Amrita Vishwa Vidyapeetham, Amritapuri, Kerala, 690525, India
- Prema Nedungadi
- Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amritapuri, Kerala, 690525, India
- Aditya Kumar Sahu
- Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amaravati, Andhra Pradesh, 522503, India
- Robin Kowalski
- College of Behavioral, Social and Health Sciences, Clemson University, Clemson, SC, 29634, USA
- Sasangan Ramanathan
- Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore, Tamil Nadu, 641112, India
- Krishnashree Achuthan
- Center for Cybersecurity Systems and Networks, Amrita Vishwa Vidyapeetham, Amritapuri, Kerala, 690525, India
8. Ståhl T, Cusimano C. Lay standards for reasoning predict people's acceptance of suspect claims. Curr Opin Psychol 2024;55:101727. PMID: 38035657; DOI: 10.1016/j.copsyc.2023.101727.
Abstract
People differ from one another and across contexts in how important it is to them to think in logical, impartial, and evidence-based ways. Recent studies demonstrate that this variation in people's personal standards for thinking predicts the nature and quality of their beliefs. Strong commitments to epistemic virtues motivate careful thinking and protect people from suspicious claims. At the same time, people are more likely to knowingly hold biased or evidentially unsupported beliefs when they think they are justified in thinking in biased or evidentially poor ways. People's personal standards for reasoning thus likely play an important role in shaping how suspect or unreasonable information is received.
9. Langdon JA, Helgason BA, Qiu J, Effron DA. "It's not literally true, but you get the gist": how nuanced understandings of truth encourage people to condone and spread misinformation. Curr Opin Psychol 2024;57:101788. PMID: 38306926; DOI: 10.1016/j.copsyc.2024.101788.
Abstract
People have a more nuanced view of misinformation than the binary distinction between "fake news" and "real news" implies. We distinguish between the truth of a statement's verbatim details (i.e., the specific, literal information) and of its gist (i.e., the general, overarching meaning), and suggest that people tolerate and intentionally spread misinformation in part because they believe its gist. That is, even when they recognize a claim as literally false, they may judge it morally acceptable to spread because they believe it is true "in spirit." Prior knowledge, partisanship, and imagination increase belief in the gist. We argue that partisan conflict over the morality of spreading misinformation hinges on disagreements not only about facts but also about gists.
Affiliation(s)
- Julia A Langdon
- Organisational Behaviour Subject Area, London Business School, Regent's Park, UK
- Beth Anne Helgason
- Organisational Behaviour Subject Area, London Business School, Regent's Park, UK
- Judy Qiu
- Organisational Behaviour Subject Area, London Business School, Regent's Park, UK
- Daniel A Effron
- Organisational Behaviour Subject Area, London Business School, Regent's Park, UK
10. Micallef J, Maisonneuve H, Muller S, Molimard M, Bégaud B, Cabut S, Daban M, Drici MD, Gatignol C, Grumblat A, Guaspare-Cartron C, Lasserre B, Mebarki A, Pons C, Prabonnaud F, Raynaud C, Saint-Lary O. What should be done to combat misinformation about health products? Therapie 2024;79:87-98. PMID: 38114387; DOI: 10.1016/j.therap.2023.11.001.
Abstract
The growing role of digital technology and social media, the wide range of channels and sheer volume of information, medicine's place as a societal subject, and public information that is insufficient and ill-suited to situations of uncertainty are the observations that motivated the theme of this round table. After discussing the definition of disinformation, which is not limited to fake news, and the contributors who misinform, whether intentionally or not, the participants made nine recommendations (R) to combat disinformation about health products:
- R1: create a collaborative information/training platform on health products with five major characteristics (accessibility, flexibility, objectivity, transparency, and independence) and with media suited to the different target audiences;
- R2: promote basic knowledge of health products through education and training, to restore the particularly poor image of medication and teach the public how to use basic concepts appropriately;
- R3: improve communication to the public, since information is the main weapon against misinformation; in particular, coordinate communication from the different institutions to make public information more audible, and make institutional messages clearer, more factual, and better prioritized;
- R4: know how to communicate using the right codes and tools, because substance and form are inseparable if a message is to be understood;
- R5: develop research on communication in the field of health products;
- R6: acquire tools to identify and regulate disinformation as early as possible;
- R7: keep check on content by developing critical thinking;
- R8: define quality criteria for information sources;
- R9: identify, assess, and reference initiatives for the public that could be hosted on the platform.
Affiliation(s)
- Joëlle Micallef
- Service de pharmacologie clinique et pharmacosurveillance, centre régional de pharmacovigilance, AP-HM, 13005 Marseille, France
- Sophie Muller
- GSK, direction médicale, 92150 Rueil-Malmaison, France
- Mathieu Molimard
- Service de pharmacologie médicale, CHU de Bordeaux, 33000 Bordeaux, France
- Bernard Bégaud
- Collège santé, université de Bordeaux, 33076 Bordeaux cedex, France
- Milou-Daniel Drici
- Département de pharmacologie clinique, hôpital Pasteur, université Côte d'Azur, 06001 Nice, France
- Anne Grumblat
- Pôle pharmaceutique, CHRU Jean-Minjoz, 25030 Besançon, France
- Bruno Lasserre
- Conseil d'État, 1, place du Palais-Royal, 75001 Paris, France
- Catherine Pons
- Roche, unité pharmacovigilance, 92100 Boulogne-Billancourt, France
- Olivier Saint-Lary
- Primary Care and Prevention Team, CESP, University Paris-Saclay, UVSQ, Inserm U1018, 94800 Villejuif, France
11. Micallef J, Maisonneuve H, Muller S, Molimard M, Bégaud B, Cabut S, Daban M, Drici MD, Gatignol C, Grumblat A, Guaspare-Cartron C, Lasserre B, Mebarki A, Pons C, Prabonnaud F, Raynaud C, Saint-Lary O. Quelles actions pour lutter contre la désinformation sur les produits de santé ? [What actions should be taken to combat misinformation about health products?]. Therapie 2024;79:75-86. PMID: 37985308; DOI: 10.1016/j.therap.2023.10.004.
Affiliation(s)
- Joëlle Micallef
- Service de pharmacologie clinique et pharmacosurveillance, centre régional de pharmacovigilance, AP-HM, 13005 Marseille, France
- Sophie Muller
- GSK, direction médicale, 92150 Rueil-Malmaison, France
- Mathieu Molimard
- Service de pharmacologie médicale, CHU de Bordeaux, 33000 Bordeaux, France
- Bernard Bégaud
- Collège santé, université de Bordeaux, 33076 Bordeaux cedex, France
- Milou-Daniel Drici
- Département de pharmacologie clinique, hôpital Pasteur, université Côte d'Azur, 06001 Nice, France
- Anne Grumblat
- Pôle pharmaceutique, CHRU Jean-Minjoz, 25030 Besançon, France
- Catherine Pons
- Roche, unité pharmacovigilance, 92100 Boulogne-Billancourt, France
- Olivier Saint-Lary
- Primary Care and Prevention Team, CESP, University Paris-Saclay, UVSQ, Inserm U1018, 94800 Villejuif, France
12. Venkatachalam K, Al-onazi BB, Simic V, Tirkolaee EB, Jana C. DeepFND: an ensemble-based deep learning approach for the optimization and improvement of fake news detection in digital platform. PeerJ Comput Sci 2023;9:e1666. PMID: 38192452; PMCID: PMC10773750; DOI: 10.7717/peerj-cs.1666.
Abstract
Early identification of false news is now essential to limit the dangers posed by its spread: people keep sharing false information even after it has been debunked, and understanding how misinformation travels and how to stop it is a pressing need for society and government. Responsibility should fall on those who spread misleading information in the first place, not on the victims of their actions. With the rise of social media platforms, the need to distinguish false news from genuine stories has emerged, and this remains a difficult problem for conventional methodologies. In recent years, neural network models have surpassed classic machine learning approaches thanks to their superior feature extraction. This research presents deep-learning-based Fake News Detection (DeepFND), which ensembles Visual Geometry Group 19 (VGG-19) and Bidirectional Long Short-Term Memory (Bi-LSTM) models to identify misinformation spread through social media. The system uses an ensemble deep learning (DL) strategy to extract characteristics from an article's text and photos: a joint feature extractor and attention modules are combined in an ensemble approach with pre-training and fine-tuning phases, together with a customized loss function. Methods for detecting bogus news on the internet without human intervention are examined on the Weibo, LIAR, PHEME, Fake and Real News, and BuzzFeed datasets, and multiple fake news detection methods are compared and contrasted. The proposed model reaches 99.88% accuracy, exceeding expectations.
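The abstract does not specify exactly how the VGG-19 and Bi-LSTM branches are fused; the core idea of an ensemble can be illustrated generically with hard voting across independent detectors (the detectors and labels below are hypothetical, and DeepFND itself fuses features rather than votes):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Hard-voting ensemble: return the label most detectors agree on."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-article labels from three independent detectors,
# e.g. a text branch, an image branch, and a metadata model.
article_votes = [
    ["fake", "fake", "real"],   # majority says fake
    ["real", "real", "real"],   # unanimous real
]
final_labels = [ensemble_vote(v) for v in article_votes]
print(final_labels)  # ['fake', 'real']
```

Feature-level fusion, as described for DeepFND, replaces the discrete vote with concatenated (often attention-weighted) hidden representations fed to a single classifier head, which lets the modalities correct each other before a decision is made.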
Affiliation(s)
- Venkatachalam K
- Department of Applied Cybernetics, University of Hradec Králové, Hradec Kralove, Czech Republic
- Badriyya B. Al-onazi
- Department of Language Preparation, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Vladimir Simic
- Faculty of Transport and Traffic Engineering, University of Belgrade, Belgrade, Serbia
- Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan City, Taiwan
- Erfan Babaee Tirkolaee
- Department of Industrial Engineering, Istinye University, Istanbul, Turkey
- MEU Research Unit, Middle East University, Amman, Jordan
- Chiranjibe Jana
- Department of Applied Mathematics with Oceanology and Computer Programming, Vidyasagar University, Midnapore, India
13. Capocasa M, Venier D. Opening scientific knowledge to debunk myths and lies in human nutrition. Eur J Nutr 2023;62:3447-3449. PMID: 37532889; DOI: 10.1007/s00394-023-03228-3.
Affiliation(s)
- Marco Capocasa
- Istituto Italiano di Antropologia, Piazzale Aldo Moro 5, 00185, Rome, Italy.
14. Leon CS, Bonilla M, Brusco LI, Forcato C, Urreta Benítez F. Fake news and false memory formation in the psychology debate. IBRO Neurosci Rep 2023;15:24-30. PMID: 37359499; PMCID: PMC10285207; DOI: 10.1016/j.ibneur.2023.06.002.
Abstract
Fake news can generate memory distortions and influence people's behavior. In the context of major public debates, the tendency to form false memories from fake news appears to be modulated by each individual's ideological alignment. This effect has been observed mainly for issues involving large sectors of society, but little is known about its impact on smaller-scale discussions within more specific populations. In this work we examine the formation of false memories from fake news in the debate between psychological currents in Argentina. To this end, 326 individuals aligned with psychoanalysis (PSA) or evidence-based practices (EBP) viewed a series of news items (12 true and 8 fabricated). The EBP group remembered or believed more of the fake news that damaged PSA. They also remembered the statements of news items that harmed their own school more precisely than those referring to others. These results can be understood as the product of an imbalance in commitment between the parties: the group proposing a paradigm shift (EBP) exhibited a congruence effect, while the group whose orientation is hegemonic in this field (PSA) showed no effect of ideological alignment. That the congruence effect manifests in a setting as consequential as the education of mental health professionals highlights the need for more careful practices in the consumption and production of media.
Affiliation(s)
- Candela S. Leon
- Laboratorio de Sueño y Memoria, Departamento de Ciencias de la Vida, Instituto Tecnológico de Buenos Aires (ITBA), Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Buenos Aires, Argentina
- Matías Bonilla
- Laboratorio de Sueño y Memoria, Departamento de Ciencias de la Vida, Instituto Tecnológico de Buenos Aires (ITBA), Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Buenos Aires, Argentina
- Luis I. Brusco
- Centro de Neuropsiquiatría y Neurología de la Conducta (CENECON), Buenos Aires, Argentina
- Cecilia Forcato
- Laboratorio de Sueño y Memoria, Departamento de Ciencias de la Vida, Instituto Tecnológico de Buenos Aires (ITBA), Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Buenos Aires, Argentina
- Facundo Urreta Benítez
- Laboratorio de Sueño y Memoria, Departamento de Ciencias de la Vida, Instituto Tecnológico de Buenos Aires (ITBA), Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Buenos Aires, Argentina
- Innocence Project Argentina, Buenos Aires, Argentina
15
Prike T, Ecker UKH. Effective correction of misinformation. Curr Opin Psychol 2023; 54:101712. [PMID: 37944323; DOI: 10.1016/j.copsyc.2023.101712]
Abstract
This paper reviews correction effectiveness, highlighting which factors matter, which do not, and where further research is needed. To boost effectiveness, we recommend using detailed corrections and providing an alternative explanation wherever possible. We also recommend providing a reminder of the initial misinformation and repeating the correction. Presenting corrections pre-emptively (i.e., prebunking) or after misinformation exposure is unlikely to greatly impact correction effectiveness. There is also limited risk of repeating misinformation within a correction or that a correction will inadvertently spread misinformation to new audiences. Further research is needed into which correction formats are most effective, whether boosting correction memorability can enhance effectiveness, the effectiveness of discrediting a misinformation source, and whether distrusted correction sources can contribute to corrections backfiring.
Affiliation(s)
- Toby Prike
- School of Psychological Science, University of Western Australia, Perth, Australia.
- Ullrich K H Ecker
- School of Psychological Science, University of Western Australia, Perth, Australia
16
Martire KA, Robson SG, Drew M, Nicholls K, Faasse K. Thinking false and slow: Implausible beliefs and the Cognitive Reflection Test. Psychon Bull Rev 2023; 30:2387-2396. [PMID: 37369977; PMCID: PMC10728225; DOI: 10.3758/s13423-023-02321-2]
Abstract
Why do people believe implausible claims like conspiracy theories, pseudoscience, and fake news? Past studies using the Cognitive Reflection Test (CRT) suggest that implausible beliefs may result from an unwillingness to effortfully process information (i.e., cognitive miserliness). Our analysis (N = 664) tests this account by comparing CRT performance (total score, number and proportion of incorrect intuitive responses, and completion time) for endorsers and non-endorsers of implausible claims. Our results show that endorsers performed worse than non-endorsers on the CRT, but they took significantly longer to answer the questions and did not make proportionally more intuitive mistakes. Endorsers therefore appear to process information effortfully but nonetheless score lower on the CRT. Poorer overall CRT performance may not necessarily indicate that those who endorse implausible beliefs have a more reflexive, intuitive, or non-analytical cognitive style than non-endorsers.
Affiliation(s)
- Kristy A Martire
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Samuel G Robson
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Manisara Drew
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Kate Nicholls
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Kate Faasse
- The University of New South Wales, Kensington, NSW, 2052, Australia
17
Gong C, Ren Y. PTSD, FOMO and fake news beliefs: a cross-sectional study of Wenchuan earthquake survivors. BMC Public Health 2023; 23:2213. [PMID: 37946134; PMCID: PMC10636930; DOI: 10.1186/s12889-023-17151-z]
Abstract
BACKGROUND Post-traumatic stress disorder (PTSD) sufferers show problematic patterns of Internet use such as fear of missing out (FOMO) and sharing misinformation and fake news. This study aimed to investigate these associations in survivors of the 2008 earthquake in Wenchuan, China. METHODS A self-reported survey was completed by 356 survivors of the Wenchuan earthquake. A mediated structural equation model was constructed to test a proposed pattern of associations with FOMO as a mediator of the relationship between PTSD symptoms and belief in fake news, as well as moderators of this pathway. RESULTS PTSD was directly associated with believing fake news (β = 0.444, p < .001) and with FOMO (β = 0.347, p < .001). FOMO mediated the association between PTSD and fake news belief (β = 0.373, p < .001). Age moderated the direct (β = 0.148, t = 3.097, p = .002) and indirect (β = 0.145, t = 3.122, p = .002) pathways, with effects more pronounced with increasing age. Gender was also a moderator, with the indirect effect present in females but not in males (β = 0.281, t = 6.737, p < .001). CONCLUSION Those with higher PTSD symptoms are more likely to believe fake news and this is partly explained by FOMO. This effect is present in females and not males and is stronger in older people. Findings extend knowledge of the role of psychological variables in problematic Internet use among those with PTSD.
Affiliation(s)
- Chen Gong
- School of Journalism, Fudan University, No. 440, Handan Road, Shanghai, 200433, China.
- Yijin Ren
- Mianyang College of Administration, Sichuan, China
18
Graham ME, Skov B, Gilson Z, Heise C, Fallow KM, Mah EY, Lindsay DS. Mixed News about the Bad News Game. J Cogn 2023; 6:58. [PMID: 37841671; PMCID: PMC10573624; DOI: 10.5334/joc.324]
Abstract
Basol et al. (2020) tested the "Bad News Game" (BNG), an app designed to improve people's ability to spot false claims on social media. Participants rated simulated Tweets, then played either the BNG or an unrelated game, then re-rated the Tweets. Playing the BNG lowered rated belief in false Tweets. Here, four teams of undergraduate psychology students each attempted an extended replication of Basol et al. using updated versions of the original Bad News game. The most important extension was that the replications included a larger number of true Tweets than the original study, with planned analyses of responses to true Tweets. The four replications were loosely coordinated, with each team independently working out how to implement the agreed plan. Despite many departures from the Basol et al. method, all four teams replicated the key finding: playing the BNG reduced belief in false Tweets. But playing the BNG also reduced belief in true Tweets to the same or almost the same extent. Exploratory signal detection theory analyses indicated that the BNG increased response bias but did not improve discrimination. This converges with findings reported by Modirrousta-Galian and Higham (2023).
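The signal-detection logic behind this finding can be illustrated with a minimal sketch. The hit and false-alarm rates below are hypothetical, not the study's data: when doubting rises for both false and true Tweets, the criterion shifts while discrimination (d′) stays roughly flat.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float):
    """Signal detection measures from a hit rate (correctly doubting
    false Tweets) and a false-alarm rate (wrongly doubting true Tweets)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)              # discrimination
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias
    return d_prime, criterion

# Hypothetical pre- vs post-game rates: doubting increases for BOTH
# false and true Tweets, so bias shifts but d-prime barely moves.
print(sdt_measures(0.60, 0.40))   # pre-game
print(sdt_measures(0.75, 0.55))   # post-game
```

With these illustrative numbers, d′ changes little (≈0.51 → ≈0.55) while the criterion moves from 0 to about −0.40, i.e. a more liberal "call it false" bias without better true/false discrimination.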
Affiliation(s)
- Zoë Gilson
- Psychology, University of Victoria, Canada
19
Alghamdi J, Lin Y, Luo S. Towards COVID-19 fake news detection using transformer-based models. Knowl Based Syst 2023; 274:110642. [PMID: 37250528; PMCID: PMC10197436; DOI: 10.1016/j.knosys.2023.110642]
Abstract
The COVID-19 pandemic has resulted in a surge of fake news, creating public health risks. However, developing an effective way to detect such news is challenging, especially when published news involves mixing true and false information. Detecting COVID-19 fake news has become a critical task in the field of natural language processing (NLP). This paper explores the effectiveness of several machine learning algorithms and fine-tuning pre-trained transformer-based models, including Bidirectional Encoder Representations from Transformers (BERT) and COVID-Twitter-BERT (CT-BERT), for COVID-19 fake news detection. We evaluate the performance of different downstream neural network structures, such as CNN and BiGRU layers, added on top of BERT and CT-BERT with frozen or unfrozen parameters. Our experiments on a real-world COVID-19 fake news dataset demonstrate that incorporating BiGRU on top of the CT-BERT model achieves outstanding performance, with a state-of-the-art F1 score of 98%. These results have significant implications for mitigating the spread of COVID-19 misinformation and highlight the potential of advanced machine learning models for fake news detection.
Affiliation(s)
- Jawaher Alghamdi
- School of Information and Physical Sciences, The University of Newcastle, Newcastle, Australia
- Department of Computer Science, King Khalid University, Abha, Saudi Arabia
- Yuqing Lin
- School of Information and Physical Sciences, The University of Newcastle, Newcastle, Australia
- Suhuai Luo
- School of Information and Physical Sciences, The University of Newcastle, Newcastle, Australia
20
Wellner G, Mykhailov D. Caring in an Algorithmic World: Ethical Perspectives for Designers and Developers in Building AI Algorithms to Fight Fake News. Sci Eng Ethics 2023; 29:30. [PMID: 37555995; DOI: 10.1007/s11948-023-00450-4]
Abstract
This article suggests several design principles intended to assist in the development of ethical algorithms, exemplified by the task of fighting fake news. Although numerous algorithmic solutions have been proposed, fake news remains a wicked socio-technical problem that demands not only engineering but also ethical considerations. We suggest employing insights from the ethics of care, maintaining its speculative stance, to ask how algorithms and design processes would differ if they generated care and fought fake news. After reviewing the major characteristics of the ethics of care and the phases of care, we offer four algorithmic design principles. The first principle highlights the need for software designers to develop a strategy for dealing with fake news. The second calls for involving various stakeholders in the design process to increase the chances of successfully fighting fake news. The third suggests allowing end-users to report fake news. Finally, the last principle proposes keeping the end-user updated on the treatment of suspected news items. Implementing these principles as care practices can make the development process more ethically oriented and improve the ability to fight fake news.
Affiliation(s)
- Galit Wellner
- The Interdisciplinary Program in Humanities, Tel Aviv University, Tel Aviv, Israel.
- School of Multi-Disciplinary Studies, Holon Institute of Technology (HIT), Holon, Israel.
21
Kotiyal B, Pathak H, Singh N. Debunking multi-lingual social media posts using deep learning. Int J Inf Technol 2023:1-13. [PMID: 37360313; PMCID: PMC10239612; DOI: 10.1007/s41870-023-01288-6]
Abstract
Fake news on social media has become a growing concern due to its potential impact on shaping public opinion. The proposed Debunking Multi-Lingual Social Media Posts using Deep Learning (DSMPD) approach offers a promising solution for detecting fake news. The DSMPD approach involves creating a dataset of English and Hindi social media posts using web scraping and Natural Language Processing (NLP) techniques. This dataset is then used to train, test, and validate a deep learning-based model that extracts various features, including Embeddings from Language Models (ELMo), word and n-gram counts, Term Frequency-Inverse Document Frequency (TF-IDF), sentiments, polarity, and Named Entity Recognition (NER). Based on these features, the model classifies news items into five categories: real, could be real, could be fabricated, fabricated, or dangerously fabricated. To evaluate the performance of the classifiers, the researchers used two datasets comprising over 45,000 articles. Machine learning (ML) algorithms and a deep learning (DL) model are compared to choose the best option for classification and prediction.
Affiliation(s)
- Bina Kotiyal
- Department of Computer Science, Gurukula Kangri (Deemed to be University), Haridwar, Uttarakhand, India
- Heman Pathak
- Department of Computer Science, Gurukula Kangri (Deemed to be University), Haridwar, Uttarakhand, India
- Nipur Singh
- Department of Computer Science, Gurukula Kangri (Deemed to be University), Haridwar, Uttarakhand, India
22
Forti LR, Travassos MLDO, Coronel-Bejarano D, Miranda DF, Souza D, Sabino J, Szabo JK. Posts Supporting Anti-Environmental Policy in Brazil are Shared More on Social Media. Environ Manage 2023; 71:1188-1198. [PMID: 36443526; DOI: 10.1007/s00267-022-01757-x]
Abstract
Weakening environmental laws supported by disinformation is currently of concern in Brazil. An example of such disinformation is the case of the "firefighter cattle". Supporters of this idea believe that by consuming organic mass, cattle decrease the risk of fire in natural ecosystems. This claim was cited by a member of the Bolsonaro government in response to the unprecedented 2020 fires in the Pantanal, as well as in support of a new law that enables extensive livestock farming in protected areas of this biome. By suggesting that grazing benefits the ecosystem, the "firefighter cattle" argument supports the interests of agribusiness while ignoring the real costs of livestock production for biodiversity. We assessed the social repercussions of the "firefighter cattle" claim by analysing public reactions to YouTube, Facebook, and Google News posts. These videos and articles, and the responses to them, either agreed or disagreed with the "firefighter cattle" idea. Supportive posts were shared more on social media and triggered more interactions than critical posts. Even though many netizens disagreed with the idea, it went viral and was used as a tool to strengthen anti-environmental policies. We advocate that government institutions use resources and guidelines provided by the scientific community to raise awareness, including the international reports produced by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) and the Intergovernmental Panel on Climate Change (IPCC). We need to curb pseudoscience and misinformation in political discourse, avoiding misconceptions that threaten natural resources and confuse global society.
Affiliation(s)
- Lucas Rodriguez Forti
- Instituto de Biologia, Universidade Federal da Bahia, Rua Barão de Jeremoabo, 668 - Campus de Ondina, CEP: 40170-115, Salvador, Bahia, Brazil.
- Programa de Pós-Graduação em Ecologia: Teoria, Aplicações e Valores, Instituto de Biologia, Universidade Federal da Bahia, Rua Barão de Jeremoabo, 668 - Campus de Ondina, CEP: 40170-115, Salvador, Bahia, Brazil.
- Departamento de Biociências, Universidade Federal Rural do Semi-Árido, Av. Francisco Mota, 572 - Costa e Silva, 59625-900, Mossoró, Rio Grande do Norte, Brazil.
- Magno Lima de Oliveira Travassos
- Programa de Pós-Graduação em Ecologia: Teoria, Aplicações e Valores, Instituto de Biologia, Universidade Federal da Bahia, Rua Barão de Jeremoabo, 668 - Campus de Ondina, CEP: 40170-115, Salvador, Bahia, Brazil
- Pós-Graduação em Conservação e Manejo da Biodiversidade, Universidade Católica do Salvador, Av. Prof. Pinto de Aguiar, 2589 - Pituaçu, CEP: 41740-090, Salvador, Bahia, Brazil
- Diana Coronel-Bejarano
- Programa de Pós-Graduação em Ecologia: Teoria, Aplicações e Valores, Instituto de Biologia, Universidade Federal da Bahia, Rua Barão de Jeremoabo, 668 - Campus de Ondina, CEP: 40170-115, Salvador, Bahia, Brazil
- Diego Fernandes Miranda
- Programa de Pós-Graduação em Ecologia: Teoria, Aplicações e Valores, Instituto de Biologia, Universidade Federal da Bahia, Rua Barão de Jeremoabo, 668 - Campus de Ondina, CEP: 40170-115, Salvador, Bahia, Brazil
- David Souza
- Programa de Pós-Graduação em Ecologia: Teoria, Aplicações e Valores, Instituto de Biologia, Universidade Federal da Bahia, Rua Barão de Jeremoabo, 668 - Campus de Ondina, CEP: 40170-115, Salvador, Bahia, Brazil
- José Sabino
- Brazilian Platform for Biodiversity and Ecosystem Services - BPBES, Campinas, São Paulo, Brazil
- Judit K Szabo
- Instituto de Biologia, Universidade Federal da Bahia, Rua Barão de Jeremoabo, 668 - Campus de Ondina, CEP: 40170-115, Salvador, Bahia, Brazil
- College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT, 0909, Australia
23
Mohawesh R, Maqsood S, Althebyan Q. Multilingual deep learning framework for fake news detection using capsule neural network. J Intell Inf Syst 2023; 60:1-17. [PMID: 37363074; PMCID: PMC10169214; DOI: 10.1007/s10844-023-00788-y]
Abstract
Fake news detection is an essential task, but the complexity of many languages makes it challenging: comprehending the logic behind some fake stories requires drawing many conclusions about the numerous people involved. Existing works cannot capture sufficient semantic and contextual characteristics from documents in a multilingual text corpus. To bridge these challenges and address multilingual fake news detection, we present a semantic approach to identifying fake news based on relational variables, such as sentiment, entities, or facts, that can be derived directly from the text. Our model outperformed the state-of-the-art methods by approximately 3.97% for English to English, 1.41% for English to Hindi, 5.47% for English to Indonesian, 2.18% for English to Swahili, and 2.88% for English to Vietnamese reviews on the TALLIP fake news dataset. To the best of our knowledge, this paper is the first study to use a capsule neural network for multilingual fake news detection.
Affiliation(s)
- Rami Mohawesh
- Cybersecurity Department, College of Engineering, Al Ain University, Al Ain - Abu Dhabi, UAE
- Sumbal Maqsood
- School of Information Technology, University of Tasmania, Hobart, Tasmania, Australia
- Qutaibah Althebyan
- Cybersecurity Department, College of Engineering, Al Ain University, Al Ain - Abu Dhabi, UAE
- Software Engineering Department, Jordan University of Science and Technology, Irbid, Jordan
24
Phan HT, Nguyen NT, Hwang D. Fake news detection: A survey of graph neural network methods. Appl Soft Comput 2023; 139:110235. [PMID: 36999094; PMCID: PMC10036155; DOI: 10.1016/j.asoc.2023.110235]
Abstract
The emergence of various social networks has generated vast volumes of data. Efficient methods for capturing, distinguishing, and filtering real and fake news are becoming increasingly important, especially after the outbreak of the COVID-19 pandemic. This study conducts a multiaspect and systematic review of the current state and challenges of graph neural networks (GNNs) for fake news detection systems and outlines a comprehensive approach to implementing such systems using GNNs. Furthermore, advanced GNN-based techniques for implementing pragmatic fake news detection systems are discussed from multiple perspectives. First, we introduce the background of fake news, fake news detection, and GNNs. Second, we organize fake news detection models into a taxonomy based on the GNN taxonomy, and review and highlight the models in each category. Subsequently, we compare the critical ideas, advantages, and disadvantages of the methods in each category. Next, we discuss the open challenges of fake news detection with GNNs. Finally, we present several open issues in this area and discuss potential directions for future research. We believe that this review can help practitioners and newcomers surmount current impediments and navigate future situations when deploying a fake news detection system using GNNs.
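The core operation shared by the GNN methods this survey covers can be sketched in a few lines: each node updates its representation by aggregating its neighbors' features over the news-propagation graph. This is a toy illustration with plain mean aggregation and a hypothetical mini graph (one news item, two sharing users); real detectors use learned weights and many layers.

```python
# Toy message-passing step over a hypothetical news-propagation graph.
# Nodes: a news item and the users who shared it; features are
# illustrative 2-d vectors, not real data.
graph = {                      # adjacency list: node -> neighbors
    "news": ["u1", "u2"],
    "u1": ["news", "u2"],
    "u2": ["news", "u1"],
}
feats = {"news": [1.0, 0.0], "u1": [0.0, 1.0], "u2": [0.5, 0.5]}

def propagate(graph, feats):
    """One round of mean aggregation: each node's new feature vector is
    the element-wise mean of its own and its neighbors' features."""
    new = {}
    for node, nbrs in graph.items():
        msgs = [feats[node]] + [feats[n] for n in nbrs]
        new[node] = [sum(vals) / len(msgs) for vals in zip(*msgs)]
    return new

print(propagate(graph, feats))
```

Stacking such rounds (with trainable transformations between them) is what lets a GNN-based detector mix content features with propagation structure before classifying the news node.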
Affiliation(s)
- Huyen Trang Phan
- Department of Computer Engineering, Yeungnam University, Gyeongsan, South Korea
- Faculty of Information Technology, Nguyen Tat Thanh University, Ho Chi Minh, Vietnam
- Ngoc Thanh Nguyen
- Department of Applied Informatics, Wroclaw University of Science and Technology, Wroclaw, Poland
- Dosam Hwang
- Department of Computer Engineering, Yeungnam University, Gyeongsan, South Korea
25
Rędzio AM, Izydorczak K, Muniak P, Kulesza W, Doliński D. Is the COVID-19 bad news game good news? Testing whether creating and disseminating fake news about vaccines in a computer game reduces people's belief in anti-vaccine arguments. Acta Psychol (Amst) 2023; 236:103930. [PMID: 37146384; PMCID: PMC10150198; DOI: 10.1016/j.actpsy.2023.103930]
Abstract
Improving vaccination eagerness is crucial, especially during the ongoing COVID-19 pandemic, and establishing new procedures to achieve that goal is highly important. Previous research (Roozenbeek & van der Linden, 2019a, 2019b) indicated that playing the "Bad News" game, in which a player spreads fake news to gain followers, reduces people's belief in fake news. The goal of the present paper was to test an analogous new game, "COVID-19 Bad News" (CBN), intended to improve eagerness to vaccinate against coronavirus. CBN was constructed to examine whether creating and disseminating fake news focused on vaccinations and the COVID-19 pandemic has a similar effect and improves people's attitudes toward vaccination. In two experiments, participants played CBN or Tetris, then evaluated the credibility of statements about vaccines against COVID-19, and finally filled out a questionnaire concerning their attitudes toward vaccination. The results show that playing CBN does not reduce the rated credibility of all statements unfavorable to vaccines (false as well as true), nor does it enhance readiness to vaccinate. Future research and limitations are discussed.
Affiliation(s)
- Anna Magdalena Rędzio
- Department of Psychology Warsaw, SWPS University of Social Sciences and Humanities, Poland.
- Kamil Izydorczak
- Department of Psychology Wrocław, SWPS University of Social Sciences and Humanities, Poland.
- Paweł Muniak
- Department of Psychology Warsaw, SWPS University of Social Sciences and Humanities, Poland.
- Wojciech Kulesza
- Department of Psychology Warsaw, SWPS University of Social Sciences and Humanities, Poland.
- Dariusz Doliński
- Department of Psychology Wrocław, SWPS University of Social Sciences and Humanities, Poland.
26
Vasist PN, Chatterjee D, Krishnan S. The Polarizing Impact of Political Disinformation and Hate Speech: A Cross-country Configural Narrative. Inf Syst Front 2023:1-26. [PMID: 37361884; PMCID: PMC10106894; DOI: 10.1007/s10796-023-10390-w]
Abstract
Information and communication technologies hold immense potential to enhance our lives and societal well-being. However, digital spaces have also emerged as a fertile ground for fake news campaigns and hate speech, aggravating polarization and posing a threat to societal harmony. Despite the fact that this dark side is acknowledged in the literature, the complexity of polarization as a phenomenon coupled with the socio-technical nature of fake news necessitates a novel approach to unravel its intricacies. In light of this sophistication, the current study employs complexity theory and a configurational approach to investigate the impact of diverse disinformation campaigns and hate speech in polarizing societies across 177 countries through a cross-country investigation. The results demonstrate the definitive role of disinformation and hate speech in polarizing societies. The findings also offer a balanced perspective on internet censorship and social media monitoring as necessary evils to combat the disinformation menace and control polarization, but suggest that such efforts may lend support to a milieu of hate speech that fuels polarization. Implications for theory and practice are discussed.
Affiliation(s)
| | - Debashis Chatterjee
- Organizational Behavior and Human Resources Area, Indian Institute of Management Kozhikode, Kozhikode, Kerala India
| | - Satish Krishnan
- Information Systems Area, Indian Institute of Management Kozhikode, Kozhikode, Kerala India
27
Rizk R, Rizk D, Rizk F, Hsu S. 280 characters to the White House: predicting 2020 U.S. presidential elections from twitter data. Comput Math Organ Theory 2023:1-28. [PMID: 37360912; PMCID: PMC10042672; DOI: 10.1007/s10588-023-09376-5]
Abstract
The 2020 U.S. presidential election played a vital role in shaping the future of the U.S. and the wider world. With the growing importance of social media, the public uses platforms such as Twitter to express thoughts, communicate with others, and engage with political campaigns and election activities. This study predicts presidential election results by analyzing the public stance toward the candidates using Twitter data; previous work has not succeeded in building a model that faithfully simulates the U.S. presidential election system. This manuscript proposes an efficient model that predicts the 2020 U.S. presidential election from geo-located tweets by leveraging sentiment analysis, a multinomial naive Bayes classifier, and machine learning. An extensive study covers all 50 states, predicting electoral votes from the state-level public stance and popular votes from the general public stance. The true public stance is preserved by eliminating outliers and removing suspicious tweets generated by bots and by agents recruited to manipulate the election. Pre-election and post-election public stances are also studied, along with their variation over time and space, and the effect of influencers on the public stance is discussed. Network analysis and community detection techniques were applied to detect hidden patterns. An algorithm-defined stance-meter decision rule predicted Joe Biden as the President-elect. The model's effectiveness was validated by comparing its predictions with the actual election results for each state; with a score of 89.9%, the proposed model showed that Joe Biden dominated the electoral college and won the 2020 U.S. presidential election.
Affiliation(s)
- Rodrigue Rizk
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069 USA
- Dominick Rizk
- Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504 USA
- Frederic Rizk
- Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504 USA
- Sonya Hsu
- Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504 USA
28
Balshetwar SV, RS A, R DJ. Fake news detection in social media based on sentiment analysis using classifier techniques. Multimed Tools Appl 2023; 82:1-31. [PMID: 37362674; PMCID: PMC10006567; DOI: 10.1007/s11042-023-14883-3]
Abstract
Fake news on social media has spread for personal or societal gain. Detecting fake news is a multi-step procedure that entails analysing the content of the news to assess its trustworthiness. This article proposes a new solution for fake news detection that incorporates sentiment as an important feature to improve accuracy, evaluated on two different datasets, ISOT and LIAR. Key feature words, with propensity scores for the opinions in the content, are derived from sentiment analysis using a lexicon-based scoring algorithm. The study further proposes a multiple imputation strategy, integrating Multiple Imputation by Chained Equations (MICE), to handle multivariate missing variables in the collected social media and news data. To extract effective features from the text, Term Frequency-Inverse Document Frequency (TF-IDF) is used to determine long-term features in a weighted matrix. The correlations of missing data variables and useful data features are classified with Naïve Bayes, passive-aggressive and Deep Neural Network (DNN) classifiers. The findings show that the proposed method achieves an accuracy of 99.8% for fake news detection, evaluated over statements labelled barely true, half true, true, mostly true and false in the dataset. Finally, the proposed method is compared with existing methods and proves more efficient.
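The TF-IDF weighting named in the abstract is a standard technique; as a rough illustration (a plain-Python sketch, not the authors' implementation), each term is weighted by its in-document frequency scaled down by how many documents contain it:

```python
import math
from collections import Counter

def tf_idf(corpus):
    """corpus: list of token lists. Returns one {term: weight} dict per document,
    with weight = (term frequency in doc) * log(N / document frequency)."""
    n_docs = len(corpus)
    # document frequency: in how many documents each term appears at least once
    df = Counter(term for doc in corpus for term in set(doc))
    weighted = []
    for doc in corpus:
        tf = Counter(doc)
        weighted.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weighted
```

A term appearing in every document gets weight zero, while rare, frequent-within-a-document terms score highest; the resulting weight vectors could then feed a Naive Bayes or passive-aggressive classifier as the abstract describes.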
Affiliation(s)
- Sarita V Balshetwar
- YSPM’S, YTC, Faculty of Engineering, Satara, Maharashtra, India
- DBATU, Lonere, Raigad, Maharashtra, India
- Abilash RS
- Bethlahem Institute of Engineering, Ulaganvillai, Tamil Nadu, India
- Dani Jermisha R
- Arunachala College of Engineering For Women, Manavilai, Tamil Nadu, India

29
Petratos PN, Faccia A. Fake news, misinformation, disinformation and supply chain risks and disruptions: risk management and resilience using blockchain. Ann Oper Res 2023; 327:1-28. [PMID: 37361081] [PMCID: PMC9994786] [DOI: 10.1007/s10479-023-05242-4]
Abstract
Fake news, misinformation and disinformation have increased significantly over the past years, and they have a profound effect on societies and supply chains. This paper examines the relationship of information risks to supply chain disruptions and proposes blockchain applications and strategies to mitigate and manage them. We critically review the literature on supply chain risk management (SCRM) and supply chain resilience (SCRES) and find that information flows and risks attract relatively little attention. We contribute by suggesting that information integrates the other flows, processes and operations, and is an overarching theme essential in every part of the supply chain. Based on related studies, we create a theoretical framework that incorporates fake news, misinformation and disinformation. To our knowledge, this is a first attempt to combine types of misleading information with SCRM/SCRES. We find that fake news, misinformation and disinformation can be amplified and cause larger supply chain disruptions, especially when they are exogenous and intentional. Finally, we present both theoretical and practical applications of blockchain technology to the supply chain and find support that blockchain can indeed advance the risk management and resilience of supply chains. Cooperation and information sharing are effective strategies.
30
Mallick C, Mishra S, Senapati MR. A cooperative deep learning model for fake news detection in online social networks. J Ambient Intell Humaniz Comput 2023; 14:4451-4460. [PMID: 36992904] [PMCID: PMC9971668] [DOI: 10.1007/s12652-023-04562-4]
Abstract
Fake news, which distorts facts for virality, causes a lot of havoc on social media. It spreads faster than real news and produces a slew of issues, including disinformation, misunderstanding, and misdirection in the minds of readers. To combat the spread of fake news, detection algorithms examine news articles through temporal language processing. The main problem with these systems is the lack of human engagement during fake news detection. To address this, this paper presents a cooperative deep learning-based fake news detection model. The suggested technique uses user feedback to estimate news trust levels, and news is ranked based on these values. Lower-ranked news is passed to language processing to verify its validity, while higher-ranked content is recognized as genuine news. In the deep learning layer, a convolutional neural network (CNN) is utilized to turn user feedback into rankings, and negatively rated news is fed back into the system to train the CNN model. The suggested model achieves a 98% accuracy rate for detecting fake news, which is higher than most existing language-processing-based models. The suggested cooperative deep learning model is also compared to state-of-the-art methods in terms of precision, recall, F-measure, and area under the curve (AUC), and based on this analysis it is found to be highly efficient.
Affiliation(s)
- Chandrakant Mallick
- Department of Computer Science and Engineering, Biju Patnaik University of Technology, Rourkela, Odisha 769015, India
- Sarojananda Mishra
- Department of Computer Science and Engineering, Indira Gandhi Institute of Technology, Sarang, Odisha 759146, India
- Manas Ranjan Senapati
- Department of Information Technology, Veer Surendra Sai University of Technology, Burla, Odisha 768018, India

31
Omar B, Apuke OD, Nor ZM. The intrinsic and extrinsic factors predicting fake news sharing among social media users: the moderating role of fake news awareness. Curr Psychol 2023; 43:1-13. [PMID: 36845207] [PMCID: PMC9942062] [DOI: 10.1007/s12144-023-04343-4]
Abstract
Research on fake news is growing, yet the relative influence of different factors on fake news sharing, and how sharing can be reduced, remain understudied. To fill this gap, this study treats user motivation and the online environment as intrinsic and extrinsic factors and examines the role of fake news awareness in preventing the spread of fake news. It reports the results of a Malaysian sample (N = 451) used to determine the effects of the intrinsic factor (altruism, information sharing, socialization and status seeking) and the extrinsic factor (trust in network, homophily, norm of reciprocity and tie strength) on fake news sharing using Partial Least Squares (PLS). Unlike past research, we treated the two main factors as higher-order constructs. Our findings revealed a stronger appeal of the online environment than of user motivation in determining fake news sharing among social media users in Malaysia. We also found that high fake news awareness predicted low fake news sharing, suggesting the importance of fake news awareness as an intervention strategy to curtail the spread of fake news. Future research should build upon our findings by testing them in cross-cultural settings and employing time-series analysis to better understand the effect of increasing awareness of fake news over time.
Affiliation(s)
- Bahiyah Omar
- School of Communication, Universiti Sains Malaysia, Penang, 11800 USM, Malaysia
- Oberiri Destiny Apuke
- Department of Mass Communication, Taraba State University, PMB 1167, Jalingo, Nigeria
- Zarina Md Nor
- School of Distance Education, Universiti Sains Malaysia, Penang, 11800 USM, Malaysia

32
Balcaen P, Buts C, Du Bois C, Tkacheva O. The effect of disinformation about COVID-19 on consumer confidence: Insights from a survey experiment. J Behav Exp Econ 2023; 102:101968. [PMID: 36531665] [PMCID: PMC9733969] [DOI: 10.1016/j.socec.2022.101968]
Abstract
Although the COVID-19 pandemic was accompanied by an infodemic about the origin of the virus and the effectiveness of vaccines, little is known about the causal effect of this disinformation on the economy. This article fills this void by examining the effects of disinformation about COVID-19 vaccines on consumer confidence, by means of an original survey experiment in the Dutch-speaking communities of Belgium. Our findings show that the information set that impacts consumer confidence is much broader than previously assumed. We show that disinformation changes the perception of the effectiveness of vaccines, which in turn indirectly impacts the future economic outlook as measured by consumer confidence. Moreover, we find that these effects are larger for respondents exposed to disinformation framed as containing 'scientific evidence' than for those exposed to 'conspiracy frames'.
Affiliation(s)
- Pieter Balcaen
- The Royal Military Academy, Hobbemastraat 184, 1000 Brussels, Belgium
- Caroline Buts
- Vrije Universiteit Brussel, Pleinlaan 2, 1050 Elsene, Belgium
- Cind Du Bois
- The Royal Military Academy, Hobbemastraat 184, 1000 Brussels, Belgium
- Olesya Tkacheva
- Brussels School of Governance, Pleinlaan 2, 1050 Elsene, Belgium

33
Ahuja N, Kumar S. Mul-FaD: attention based detection of multiLingual fake news. J Ambient Intell Humaniz Comput 2023; 14:2481-2491. [PMID: 36684482] [PMCID: PMC9839960] [DOI: 10.1007/s12652-022-04499-0]
Abstract
The latest buzzword in today's world is fake news. The circulation of false information influences elections, public health, brand reputations, and violence; hence, the severity of the threat of fake news is increasing. The danger of fake news exists everywhere globally and is not specific to one language or nation. The creators of fake news layer the facts in the news with misinformation to confuse readers. A need therefore arises for a model that detects fake news in multiple languages. This paper proposes a unified attention-based model, Mul-FaD, to detect fake news in various languages. We created a dataset of around 40,000 articles in English, German, and French, and present an exploratory analysis of it. We approach the experiments from a multilingual perspective, using an altered hierarchical attention-based network to detect fake news. Our model achieves an accuracy of 93.73 and an F1 score of 92.9 on the combined corpus of the three languages.
Affiliation(s)
- Nishtha Ahuja
- Department of Computer Science and Engineering, Delhi Technological University, Delhi, India
- Shailender Kumar
- Department of Computer Science and Engineering, Delhi Technological University, Delhi, India

34
Aïmeur E, Amri S, Brassard G. Fake news, disinformation and misinformation in social media: a review. Soc Netw Anal Min 2023; 13:30. [PMID: 36789378] [PMCID: PMC9910783] [DOI: 10.1007/s13278-023-01028-5]
Abstract
Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as unlimited easy communication and instant news and information, they can also have many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue, and fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Artificial intelligence (AI) approaches remain incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in fake news detection, and point out future research directions for tackling these challenges.
Affiliation(s)
- Esma Aïmeur
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
- Sabrine Amri
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
- Gilles Brassard
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada

35
Ma K, Tang C, Zhang W, Cui B, Ji K, Chen Z, Abraham A. DC-CNN: Dual-channel Convolutional Neural Networks with attention-pooling for fake news detection. Appl Intell 2023; 53:8354-8369. [PMID: 35937201] [PMCID: PMC9340725] [DOI: 10.1007/s10489-022-03910-9]
Abstract
Fake news detection mainly relies on extracting article content features with neural networks. However, reducing noisy data and redundant features and learning long-distance dependencies remain challenging. To solve these problems, Dual-channel Convolutional Neural Networks with Attention-pooling for Fake News Detection (abbreviated DC-CNN) is proposed. The model benefits from Skip-Gram and fastText embeddings, which effectively reduce noisy data and improve its ability to learn non-derived words. A parallel dual-channel pooling layer is proposed to replace the traditional CNN pooling layer in DC-CNN: the max-pooling channel retains the advantages of learning local information between adjacent words, while the attention-pooling channel, with a multi-head attention mechanism, enhances the learning of context semantics and global dependencies. Combining the learning advantages of the two channels solves the problem that a pooling layer easily loses local-global feature correlations. The model is tested on two different COVID-19 fake news datasets, and the experimental results show that it performs best at handling noisy data and balancing the correlation between local and global features.
Affiliation(s)
- Kun Ma
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Changhao Tang
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Weijuan Zhang
- Department of Computer and Software Engineering, Shandong College of Electronic Technology, Jinan 250200, China
- Benkuan Cui
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Ke Ji
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Zhenxiang Chen
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Ajith Abraham
- Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, Auburn, WA, USA

36
Olaleye T, Abayomi-Alli A, Adesemowo K, Arogundade OT, Misra S, Kose U. SCLAVOEM: hyper parameter optimization approach to predictive modelling of COVID-19 infodemic tweets using SMOTE and classifier vote ensemble. Soft Comput 2023; 27:3531-50. [PMID: 35309597] [DOI: 10.1007/s00500-022-06940-0]
Abstract
Fake COVID-19 tweets are dangerous: they are misinformative, often completely inaccurate, and threaten the efforts to flatten the pandemic curve. Thus, aside from the COVID-19 pandemic itself, dealing with fake news and myths about the virus constitutes an infodemic issue, which must be tackled by ensuring that only valid information circulates. In this context, this study proposes the Synthetic Minority Over-sampling Technique (SMOTE) and classifier vote ensemble (SCLAVOEM) method as a fake news classifier and a hyperparameter optimization approach for predictive modelling of COVID-19 infodemic tweets. Hyperparameter optimization variables are deployed at specific points of the proposed model, and minority oversampling of the training sets is applied to the imbalanced class representations. In experiments on COVID-19 infodemic prediction, SCLAVOEM returned weighted averages of 0.999 for F-measure and 1.000 for area under the curve (AUC). Thanks to SMOTE, performance increases of 3.74% and 1.11%, 5.05% and 0.29%, and 4.59% and 8.05% were seen on three different datasets. SCLAVOEM thus provides a framework for predictively detecting 'fake tweets' across three classes: 'positive', 'negative' and 'click-trap' (piège à clics). The model is expected to automatically flag fake information on Twitter, protecting the public from inaccuracy and information overload.
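SMOTE itself is well defined even though the abstract gives no implementation details; a minimal sketch of its core step (illustrative only, not the SCLAVOEM pipeline) generates synthetic minority samples by interpolating between a minority sample and one of its nearest minority-class neighbours:

```python
import random

def smote_oversample(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic samples for an under-represented class, SMOTE-style:
    pick a minority sample, pick one of its k nearest minority neighbours
    (squared Euclidean distance), and interpolate a point between them."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x among the other minority samples
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside its original feature region rather than duplicating points, which is what lets the downstream vote ensemble see a balanced training set.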
37
Varshney D, Vishwakarma DK. An automated multi-web platform voting framework to predict misleading information proliferated during COVID-19 outbreak using ensemble method. Data Knowl Eng 2023; 143:102103. [PMID: 36406205] [PMCID: PMC9650682] [DOI: 10.1016/j.datak.2022.102103]
Abstract
The spread of misleading information on social web platforms has fuelled massive panic and confusion among the public regarding the Corona disease, making its detection of paramount importance. Previous studies mainly relied on a single web platform to collect the crucial evidence for detecting fake content; our analysis identifies that retrieving clues from two or more sources/web platforms gives more reliable predictions and greater confidence concerning a specific claim. This study proposes a novel multi-web-platform voting framework that incorporates four sets of novel features: content, linguistic, similarity, and sentiment. The features are gathered from each web platform to validate the news. To validate a fact/claim, a unique source platform is designed to collect relevant clues/headlines from two web platforms (YouTube, Google) based on specific queries, and to extract features for each clue/headline. The proposed platform is intended to assist researchers in gathering relevant and vital evidence from diverse web platforms. After evaluation and validation, the built model proves quite intelligent, gives promising results, and effectively predicts misleading information, correctly detecting about 98% of the COVID misinformation in the Constraint COVID-19 fake news dataset. Furthermore, gathering clues from multiple web platforms is shown to be efficient for obtaining more reliable predictions when validating news. The work has numerous practical applications for health policy-makers and practitioners and could be useful in safeguarding society from the dissemination of misleading information during the pandemic.
38
Hossain MA, Chowdhury MMH, Pappas IO, Metri B, Hughes L, Dwivedi YK. Fake news on Facebook and their impact on supply chain disruption during COVID-19. Ann Oper Res 2022; 327:1-29. [PMID: 36570556] [PMCID: PMC9761633] [DOI: 10.1007/s10479-022-05124-1]
Abstract
Social media (SM) fake news has become a serious concern, especially during COVID-19. In this study, we develop a research model to investigate to what extent SM fake news contributes to supply chain disruption (SCD), and which SM affordances contribute to SM fake news. To test the derived hypotheses with survey data, we applied the partial least squares based structural equation modelling (PLS-SEM) technique. Further, to identify how different configurations of supply chain resilience (SCR) capabilities reduce SCD, we used fuzzy-set qualitative comparative analysis (fsQCA). The results show that SM affordances lead to fake news, which increases consumer panic buying (CPB); CPB in turn increases SCD. In addition, SM fake news directly increases SCD. The moderation test suggests that SCR capability, as a higher-order construct, decreases the effect of CPB on SCD; however, none of the capabilities moderates individually. Complementarily, the fsQCA results suggest that no single capability but three specific configurations of them reduce SCD. This work offers a new theoretical perspective for studying SCD through SM fake news. Our research advances the knowledge of SCR from a configurational lens by adopting an equifinal means of mitigating disruption. It will also assist operations and SC managers in strategizing and understanding which combination of resilience capabilities is most effective in tackling disruptions during a crisis such as COVID-19. In addition, by identifying the relative roles of different SM affordances, this study provides pragmatic insights into affordance measures that combat fake news on SM.
Affiliation(s)
- Mohammad Alamgir Hossain
- School of Accounting, Information Systems, and Supply Chain, RMIT University, Melbourne, VIC 3000, Australia
- RMIT Business and Human Rights (BHRIGHT) Centre, RMIT University, Melbourne, VIC 3000, Australia
- Ilias O. Pappas
- University of Agder, Kristiansand, Norway
- Norwegian University of Science and Technology, Trondheim, Norway
- Laurie Hughes
- Digital Futures for Sustainable Business & Society Group, School of Management, Swansea University, Bay Campus, Swansea, UK
- Yogesh K. Dwivedi
- Digital Futures for Sustainable Business & Society Group, School of Management, Swansea University, Bay Campus, Fabian Bay, Swansea SA1 8EN, Wales, UK
- Department of Management, Symbiosis Institute of Business Management, Pune & Symbiosis International (Deemed University), Pune, Maharashtra, India

39
Caliskan C, Kilicaslan A. Varieties of corona news: a cross-national study on the foundations of online misinformation production during the COVID-19 pandemic. J Comput Soc Sci 2022; 6:191-243. [PMID: 36530785] [PMCID: PMC9746594] [DOI: 10.1007/s42001-022-00193-5]
Abstract
Misinformation in the media is produced by hard-to-gauge thought mechanisms employed by individuals or collectivities. In this paper, we shed light on the country-specific factors of falsehood production in the context of the COVID-19 pandemic. Collecting our evidence from the largest misinformation dataset used in the COVID-19 misinformation literature, with close to 11,000 pieces of falsehood, we explore patterns of misinformation production using a variety of methodological tools, including algorithms for text similarity, clustering, network distances, and other statistical tools. Covering news produced over a span of more than 14 months, our paper also differentiates itself by its carefully controlled hand-labeling of falsehood topics. Findings suggest that country-level factors do not provide strong support for predicting outcomes of falsehood, with one exception: in countries with serious press-freedom problems and low human development, the mostly unknown authors of misinformation tend to focus on similar content. In addition, the intensity of discussion of animals, predictions and symptoms in fake news is the biggest differentiator between nations, whereas news on conspiracies, medical equipment and risk factors offers the least explanation. Based on these findings, we discuss distinct public health and communication strategies to dispel misinformation in countries with particular characteristics. We also emphasize that a global action plan against misinformation is needed given the highly globalized nature of the online media environment. Supplementary Information: The online version contains supplementary material available at 10.1007/s42001-022-00193-5.
Affiliation(s)
- Cantay Caliskan
- Goergen Institute for Data Science, University of Rochester, Rochester, USA
- Alaz Kilicaslan
- Department of Sociology, Criminology and Anthropology, University of Wisconsin-Whitewater, Whitewater, USA

40
Obeidat R, Gharaibeh M, Abdullah M, Alharahsheh Y. Multi-label multi-class COVID-19 Arabic Twitter dataset with fine-grained misinformation and situational information annotations. PeerJ Comput Sci 2022; 8:e1151. [PMID: 36532803] [PMCID: PMC9748819] [DOI: 10.7717/peerj-cs.1151]
Abstract
Since the inception of the current COVID-19 pandemic, related misleading information has spread at a remarkable rate on social media, leading to serious implications for individuals and societies. Although COVID-19 looks to be ending in most places after the sharp shock of Omicron, severe new variants can emerge and cause new waves, especially if the variants can evade the insufficient immunity provided by prior infection and incomplete vaccination. Fighting the fake news that promotes vaccine hesitancy, for instance, is crucial for the success of global vaccination programs and thus for achieving herd immunity. To combat the proliferation of COVID-19-related misinformation, considerable research efforts have been, and are still being, dedicated to building and sharing COVID-19 misinformation detection datasets and models for Arabic and other languages. However, most of these datasets provide binary (true/false) misinformation classifications, and the few studies that support multi-class misinformation classification deal with a small set of misinformation classes or mix them with situational information classes. False news stories about COVID-19 are not equal; some tend to have more sinister effects than others (e.g., fake cures and false vaccine information). This suggests that identifying the sub-type of misinformation is critical for choosing a suitable action based on its level of seriousness, ranging from assigning warning labels to susceptible posts to removing the misleading post instantly. In this work, we develop comprehensive annotation guidelines that define 19 fine-grained misinformation classes. We then release the first Arabic COVID-19-related misinformation dataset, comprising about 6.7K tweets with multi-class and multi-label misinformation annotations. In addition, we release a version of the dataset that is the first Arabic Twitter dataset annotated exclusively with six different situational information classes. Identifying situational information (e.g., caution, help-seeking) helps authorities or individuals understand the situation during emergencies. To confirm the validity of the collected data, we define three classification tasks and experiment with various machine learning and transformer-based classifiers to offer baseline results for future research. The experimental results indicate the quality and validity of the data and its suitability for constructing misinformation and situational information classification models. The results also demonstrate the superiority of AraBERT-COV19, a transformer-based model pretrained on COVID-19-related tweets, with micro-averaged F-scores of 81.6% and 78.8% for the multi-class misinformation and situational information classification tasks, respectively. Label Powerset with linear SVC achieved the best performance among the presented methods for multi-label misinformation classification, with a micro-averaged F-score of 76.69%.
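The Label Powerset method reported above reduces a multi-label problem to a single multi-class problem by treating each distinct combination of labels as one class; a minimal sketch of that transformation (hypothetical label names, not the paper's 19-class scheme) is:

```python
def label_powerset_fit(label_sets):
    """Map each distinct combination of labels to a single class id, turning a
    multi-label problem into one multi-class problem (the Label Powerset idea).
    label_sets: one iterable of labels per example.
    Returns (class ids for training, list mapping class id -> label combination)."""
    combo_to_class, class_to_combo = {}, []
    y = []
    for labels in label_sets:
        combo = frozenset(labels)  # order-insensitive label combination
        if combo not in combo_to_class:
            combo_to_class[combo] = len(class_to_combo)
            class_to_combo.append(combo)
        y.append(combo_to_class[combo])
    return y, class_to_combo
```

Any single-output classifier (such as the linear SVC the abstract mentions) can then be trained on `y`, and a prediction is decoded back into its label set via the inverse map; the trade-off is that only label combinations seen during training can be predicted.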
41
Bello P, Rocco L. Education and COVID-19 excess mortality. Econ Hum Biol 2022; 47:101194. [PMID: 36370500] [PMCID: PMC9644421] [DOI: 10.1016/j.ehb.2022.101194]
Abstract
We study the role of education during the COVID-19 epidemic in Italy, comparing excess mortality in 2020 and 2021, relative to pre-pandemic mortality, across municipalities with different shares of educated residents. We find that education initially played a strong protective role, which however quickly faded. After weighing several alternative explanations, we tentatively interpret this finding as the outcome of the interplay between education, information and public health communication, whose availability and coherence varied over the course of the epidemic.
Affiliation(s)
- Piera Bello
- University of Bergamo, Italy, and ZEW, Germany.
|
42
|
Salvi C, Barr N, Dunsmoor JE, Grafman J. Insight Problem Solving Ability Predicts Reduced Susceptibility to Fake News, Bullshit, and Overclaiming. Think Reason 2022; 29:760-784. [PMID: 37982007 PMCID: PMC10655953 DOI: 10.1080/13546783.2022.2146191] [Received: 07/20/2021] [Accepted: 10/28/2022] [Indexed: 11/27/2022]
Abstract
The information humans are exposed to has grown exponentially, placing increased demands on our information selection strategies and reducing the time available for fact-checking and critical thinking. Prior research shows that problem solving (traditionally measured using the Cognitive Reflection Test, CRT) negatively correlates with believing false information. We argue that this result is specifically related to insight problem solving. Solutions via insight are the result of parallel processing, characterized by the filtering of external noise, and, unlike cognitively controlled thinking, insight does not suffer from the cognitive overload associated with processing multiple sources of information. We administered the Compound Remote Associate Test (problems used to investigate insight problem solving), the CRT, 20 fake and real news headlines, and the bullshit and overclaiming scales to a sample of 61 participants. Results show that insight problem solving predicts better identification of fake news and bullshit, over and above traditional measures (i.e., the CRT), and is associated with reduced overclaiming. These results have implications for understanding individual differences in susceptibility to believing false information.
Affiliation(s)
- Carola Salvi
- Department of Psychiatry and Behavioral Sciences, University of Texas at Austin, Austin, TX, USA
- Department of Psychology and Social Sciences, John Cabot University, Rome, Italy
- Nathaniel Barr
- School of Humanities and Creativity, Sheridan College, ON, Canada
- Joseph E. Dunsmoor
- Department of Psychiatry and Behavioral Sciences, University of Texas at Austin, Austin, TX, USA
- Jordan Grafman
- Shirley Ryan Ability Lab, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
|
43
|
Coeckelbergh M. Democracy, epistemic agency, and AI: political epistemology in times of artificial intelligence. AI Ethics 2022; 3:1-10. [PMID: 36466152 PMCID: PMC9685050 DOI: 10.1007/s43681-022-00239-4] [Received: 09/07/2022] [Accepted: 11/10/2022] [Indexed: 06/17/2023]
Abstract
Democratic theories assume that citizens have some form of political knowledge in order to vote for representatives or to engage directly in democratic deliberation and participation. However, apart from widespread attention to the phenomenon of fake news and misinformation, little attention has been paid to how citizens are supposed to acquire that knowledge in contexts shaped by artificial intelligence and related digital technologies. While this topic can also be approached from an empirical angle, this paper supports concerns about AI and democracy by looking at the issue through the lens of political epistemology, in particular the concept of epistemic agency. It argues that artificial intelligence (AI) endangers democracy because it risks diminishing the epistemic agency of citizens and thereby undermining the kind of political agency relevant to democracy. It shows that, next to fake news and manipulation by means of AI analysis of big data, epistemic bubbles and the defaulting of statistical knowledge endanger the epistemic agency of citizens when they form, and wish to revise, their political beliefs. AI risks undermining trust in one's own epistemic capacities and hindering the exercise of those capacities. If we want to protect the knowledge basis of our democracies, we must address these problems in education and technology policy.
|
44
|
Rastogi S, Bansal D. A review on fake news detection 3T's: typology, time of detection, taxonomies. Int J Inf Secur 2022; 22:177-212. [PMID: 36406145 PMCID: PMC9664051 DOI: 10.1007/s10207-022-00625-3] [Indexed: 06/16/2023]
Abstract
Fake news has become an industry of its own, in which users are paid to write fake news and create clickbait content to lure an audience. The detection of fake news is thus a crucial problem, and several studies have proposed machine-learning-based techniques to combat it. Existing surveys review proposed solutions; this survey instead presents the aspects that must be considered before designing an effective solution. To this end, we provide a comprehensive overview of false news detection. The survey presents (1) clarity on the problem definition, explaining different types of false information (fake news, rumor, clickbait, satire, and hoax) with real-life examples, (2) a list of actors involved in spreading false information, (3) actions taken by service providers, (4) a list of publicly available fake news datasets in three formats, i.e., text, images, and videos, (5) a novel three-phase detection model based on the time of detection, (6) four different taxonomies that classify research from new viewpoints, providing a succinct roadmap for the future, and (7) key bibliometric indicators. In a nutshell, the survey focuses on three key aspects, the three T's: Typology of false information, Time of detection, and Taxonomies to classify research. Finally, by reviewing and summarizing several studies on fake news, we outline some potential research directions.
Affiliation(s)
- Shubhangi Rastogi
- Punjab Engineering College (Deemed to be University), Chandigarh, India
- Divya Bansal
- Punjab Engineering College (Deemed to be University), Chandigarh, India
|
45
|
Kar AK, Tripathi SN, Malik N, Gupta S, Sivarajah U. How Does Misinformation and Capricious Opinions Impact the Supply Chain - A Study on the Impacts During the Pandemic. Ann Oper Res 2022; 327:1-22. [PMID: 36407940 PMCID: PMC9640789 DOI: 10.1007/s10479-022-04997-6] [Received: 12/12/2021] [Revised: 07/18/2022] [Accepted: 09/15/2022] [Indexed: 06/16/2023]
Abstract
Misinformation, or fake news, has had multifaceted ramifications since the onset of the COVID-19 pandemic, creating widespread panic. This study investigates the impact of misinformation/fake news on internet platforms on consumer buying behavior, the impact of fear (created by fake news) on the hoarding of essential products and on consumer spending, and, finally, the impact of misinformation-induced panic buying on supply chain disruptions. It draws upon consumer decision theory and cognitive load theory to explain the psychological and behavioral responses of consumers. The study follows an inductive, multi-method approach to theory building: a qualitative interview study and text mining, followed by topic modelling in Python using Latent Dirichlet Allocation (LDA). The findings revealed several prominent themes: a consumer shift to online buying; two contrasting spending intentions, namely financial security and compensatory consumption; irrational panic buying; uncertainty and ambiguity of government protocols and norms; fraudulent practices and misinformation dissemination on social media; personalized buying experiences; reduced trust in news and marketers; logistics and transportation bottlenecks; labor shortages due to migration and plant closures; and the bullwhip effect in supply chains.
Affiliation(s)
- Arpan Kumar Kar
- Yardi School of Artificial Intelligence and Department of Management Studies, Indian Institute of Technology Delhi, Hauz Khas, New Delhi, 110016 India
- Shalini Nath Tripathi
- Jaipuria Institute of Management, Hahnemann Road, Vineet Khand, Gomti Nagar, 226010 Lucknow, Uttar Pradesh, India
- Nishtha Malik
- Jaipuria Institute of Management, Hahnemann Road, Vineet Khand, Gomti Nagar, 226010 Lucknow, Uttar Pradesh, India
- Shivam Gupta
- Department of Information Systems, Supply Chain Management & Decision Support, NEOMA Business School, 59 Rue Pierre Taittinger, 51100 Reims, France
|
46
|
Rubin A, Brondi S, Pellegrini G. Should I trust or should I go? How people perceive and assess the quality of science communication to avoid fake news. Qual Quant 2022; 57:1-22. [PMID: 36373032 PMCID: PMC9638312 DOI: 10.1007/s11135-022-01569-5] [Accepted: 10/26/2022] [Indexed: 11/07/2022]
Abstract
This paper investigates how citizens of five European countries (Italy, Poland, Portugal, Slovakia, and Spain) inquire about scientific issues, how they rate scientific information on climate change and vaccines in terms of quantity and quality, and their strategies for overcoming perceived defects. We conducted a public consultation involving almost 500 citizens and addressing controversial science-related topics; discussions were subjected to qualitative content analysis. The consultations revealed the prevalence of traditional media as a source of scientific information and a general perception of inadequate, imprecise, and insufficient science communication. Finally, we show that traditional media are still the most frequently used channels and that personal criteria prevail in evaluating the reliability of information sources. Supplementary Information: The online version contains supplementary material available at 10.1007/s11135-022-01569-5.
Affiliation(s)
- Sonia Brondi
- Department of Philosophy and Cultural Heritage, Ca’ Foscari University of Venice, Venice, Italy
|
47
|
Akhtar P, Ghouri AM, Khan HUR, Amin ul Haq M, Awan U, Zahoor N, Khan Z, Ashraf A. Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Ann Oper Res 2022; 327:1-25. [PMID: 36338350 PMCID: PMC9628472 DOI: 10.1007/s10479-022-05015-5] [Accepted: 09/27/2022] [Indexed: 06/16/2023]
Abstract
Fake news and disinformation (FNaD) are increasingly circulated through online and social networking platforms, causing widespread disruption and influencing decision-making perceptions. Despite the growing importance of detecting fake news in politics, relatively little research has gone into developing artificial intelligence (AI) and machine learning (ML) oriented FNaD detection models suited to minimizing supply chain disruptions (SCDs). Using a combination of AI and ML, and case studies based on data collected from Indonesia, Malaysia, and Pakistan, we developed an FNaD detection model aimed at preventing SCDs. The model, based on multiple data sources, has shown evidence of its effectiveness in managerial decision-making. Our study further contributes to the supply chain and AI-ML literature, provides practical insights, and points to future research directions.
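The abstract does not disclose the model's internals, but a minimal text-classification baseline of the kind such FNaD detectors typically build on can be sketched with scikit-learn. The headlines, labels, and query string below are invented for illustration; this is a generic baseline, not the authors' system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy headlines: 1 = fake/disinformation, 0 = credible
texts = [
    "miracle cure ends port closures overnight, experts stunned",
    "secret memo proves all shipments halted forever",
    "ministry publishes revised customs schedule for imports",
    "carrier reports quarterly on-time delivery figures",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability that an unseen headline is credible vs. fake
scores = model.predict_proba(["official agency releases shipment statistics"])[0]
```

A production detector would add labeled data at scale, multiple data sources, and evaluation on held-out items; the pipeline shape, however, stays the same.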
Affiliation(s)
- Pervaiz Akhtar
- University of Aberdeen Business School, University of Aberdeen, King’s College, AB24 5UA Aberdeen, UK
- Imperial College London, SW7 2BU London, UK
- Arsalan Mujahid Ghouri
- Faculty of Management and Economics, Universiti Pendidikan Sultan Idris, Tanjong Malim, Malaysia
- Haseeb Ur Rehman Khan
- Faculty of Art, Computing, and Creative Industry, Universiti Pendidikan Sultan Idris, Tanjong Malim, Malaysia
- Mirza Amin ul Haq
- Department of Business Administration, Iqra University, Karachi, Pakistan
- Usama Awan
- Department of Business Administration, Inland School of Business and Social Sciences, Inland Norway University of Applied Sciences, Hamar, Norway
- Nadia Zahoor
- School of Business and Management, Queen Mary University of London, London, UK
- Zaheer Khan
- University of Aberdeen Business School, University of Aberdeen, King’s College, AB24 5UA Aberdeen, UK
- Innolab, University of Vaasa, Vaasa, Finland
- Aniqa Ashraf
- CAS-Key Laboratory of Crust-Mantle Materials and the Environments, School of Earth and Space Sciences, University of Science and Technology of China, 230026 Hefei, PR China
|
48
|
Musi E, Federico L, Riotta G. Human-computer interaction tools with gameful design for critical thinking the media ecosystem: a classification framework. AI Soc 2022:1-13. [PMID: 36339374 PMCID: PMC9628600 DOI: 10.1007/s00146-022-01583-z] [Received: 11/15/2021] [Accepted: 10/18/2022] [Indexed: 11/05/2022]
Abstract
In response to the ever-increasing spread of online disinformation and misinformation, several human-computer interaction tools for enhancing data literacy have been developed. Many employ elements of gamification to increase user engagement and reach a broader audience. However, there are no systematic criteria for analyzing their relevance and impact in building resilience to fake news, partly because of the lack of a common understanding of data literacy. In this paper, we put forward an operationalizable definition of data literacy as a form of multidimensional critical thinking. We then survey 22 existing tools and classify them according to a framework of 10 criteria covering their gameful design and educational features. Through a comparative and contrastive analysis informed by a focus group, we provide a principled set of guidelines for developing more effective human-computer interaction tools to teach critical thinking in the current media ecosystem.
Affiliation(s)
- Elena Musi
- Department of Communication and Media, University of Liverpool, 19 Abercromby Square, Liverpool, L69 7ZG UK
- Lorenzo Federico
- Datalab, University Luiss Guido Carli, Viale Pola 12, Rome, Italy
- Gianni Riotta
- Datalab, University Luiss Guido Carli, Viale Pola 12, Rome, Italy
|
49
|
Schmid S, Hartwig K, Cieslinski R, Reuter C. Digital Resilience in Dealing with Misinformation on Social Media during COVID-19: A Web Application to Assist Users in Crises. Inf Syst Front 2022:1-23. [PMID: 36311478 PMCID: PMC9589652 DOI: 10.1007/s10796-022-10347-5] [Accepted: 09/26/2022] [Indexed: 06/16/2023]
Abstract
In crises such as the COVID-19 pandemic, it is crucial to support users in dealing with social media content. With digital resilience in mind, we propose a web app based on Social Network Analysis (SNA) that provides an overview of potentially misleading vs. non-misleading content on Twitter, which users can explore to enable foundational learning. The latter aims at systematically identifying thematic patterns that may be associated with misleading information, and entails reflecting on proposed indicators of misleading tweets as a first step toward tweet classification. Paying special attention to non-expert users of social media, we conducted a two-step think-aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs, as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions against the spread of misleading information and tackles information overload with an SNA-based plug-in.
Affiliation(s)
- Stefka Schmid
- TU Darmstadt, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Germany
- Katrin Hartwig
- TU Darmstadt, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Germany
- Robert Cieslinski
- TU Darmstadt, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Germany
- Christian Reuter
- TU Darmstadt, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Germany
|
50
|
Chatterjee S, Chaudhuri R, Vrontis D. Role of fake news and misinformation in supply chain disruption: impact of technology competency as moderator. Ann Oper Res 2022; 327:1-24. [PMID: 36247733 PMCID: PMC9540173 DOI: 10.1007/s10479-022-05001-x] [Accepted: 09/19/2022] [Indexed: 06/16/2023]
Abstract
Studies show that COVID-19 amplified the effects of the misinformation and fake news that proliferated during the continued crisis and the related turbulent environment. Fake news and misinformation can come from various sources, such as social media, print media, and electronic media, including instant messaging services and other apps. There is growing interest among researchers and practitioners in how fake news and misinformation affect supply chain disruption, but the limited research in this area leaves a gap. Against this background, the purpose of this study is to determine the role of fake news and misinformation in supply chain disruption and its consequences for a firm's operational performance. The study also investigates the moderating role of technology competency between supply chain disruption and the operational performance of the firm. Drawing on theory and the literature, a conceptual model was developed and then validated using partial least squares structural equation modeling. The study finds that misinformation and fake news have a significant impact on supply chain disruption, which in turn negatively affects firms' operational performance. It also highlights that a firm's technology competency can improve a supply chain situation that has been disrupted by misinformation and fake news.
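The moderation effect this study tests (technology competency dampening the negative effect of disruption on performance) can be illustrated with a simple interaction-term regression on simulated data. This is a pedagogical sketch, not the authors' PLS-SEM procedure, and every coefficient below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
disruption = rng.normal(size=n)   # supply chain disruption (standardized)
competency = rng.normal(size=n)   # technology competency (standardized)

# Simulated outcome: disruption hurts performance, competency dampens that effect
performance = (-0.5 * disruption + 0.3 * competency
               + 0.25 * disruption * competency
               + rng.normal(scale=0.1, size=n))

# Moderated regression: include the disruption x competency interaction term
X = np.column_stack([np.ones(n), disruption, competency, disruption * competency])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(dict(zip(["intercept", "disruption", "competency", "interaction"],
               coef.round(2))))
```

A negative coefficient on disruption together with a positive interaction coefficient is exactly the pattern a moderation hypothesis like this one predicts: the higher the competency, the weaker the damage from disruption.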
Affiliation(s)
- Sheshadri Chatterjee
- Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal India
- Ranjan Chaudhuri
- Department of Marketing, Indian Institute of Management Ranchi, Ranchi, India
|