1
Ching D, Twomey J, Aylett MP, Quayle M, Linehan C, Murphy G. Can deepfakes manipulate us? Assessing the evidence via a critical scoping review. PLoS One 2025; 20:e0320124. PMID: 40315197; PMCID: PMC12047760; DOI: 10.1371/journal.pone.0320124.
Abstract
Deepfakes are one of the most recent developments in misinformation technology and are capable of superimposing one person's face onto another in video format. The potential of this technology to defame and cause harm is clear. However, the grave concerns expressed about deepfakes are rarely accompanied by empirical evidence. We present a scoping review of the existing empirical studies that investigate the effects of viewing deepfakes on people's beliefs, memories, and behaviour. Five databases were searched, producing an initial sample of 2004 papers, from which 22 relevant papers were identified, varying in methodology and research methods used. Overall, we found that the early studies on this topic have often produced inconclusive findings regarding the existence of uniquely persuasive or convincing effects of deepfake exposure. Moreover, many experiments demonstrated poor methodology and did not include a non-deepfake comparator (e.g., text-based misinformation). We conclude that speculation and scaremongering about dystopian uses of deepfake technologies have far outpaced the experimental research that assesses these harms. We close by offering insights on how to conduct improved empirical work in this area.
Affiliation(s)
- Didier Ching
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero the Research Ireland Centre for Software, Limerick, Ireland
- John Twomey
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero the Research Ireland Centre for Software, Limerick, Ireland
- Matthew P. Aylett
- CereProc Ltd, Edinburgh, United Kingdom
- University of Heriot Watt, Edinburgh, United Kingdom
- Michael Quayle
- Lero the Research Ireland Centre for Software, Limerick, Ireland
- Centre for Social Issues Research and Department of Psychology, University of Limerick, Limerick, Ireland
- Department of Psychology, School of Applied Human Sciences, University of KwaZulu-Natal, Pietermaritzburg, South Africa
- Conor Linehan
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero the Research Ireland Centre for Software, Limerick, Ireland
- Gillian Murphy
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero the Research Ireland Centre for Software, Limerick, Ireland
2
Caci B, Giordano G, Alesi M, Gentile A, Agnello C, Lo Presti L, La Cascia M, Ingoglia S, Inguglia C, Volpes A, Monzani D. The public mental representations of deepfake technology: An in-depth qualitative exploration through Quora text data analysis. PLoS One 2024; 19:e0313605. PMID: 39775334; PMCID: PMC11684586; DOI: 10.1371/journal.pone.0313605.
Abstract
The advent of deepfake technology has raised significant concerns regarding its impact on individuals' cognitive processes and beliefs, considering the pervasive relationships between technology and human cognition. This study delves into the psychological literature surrounding deepfakes, focusing on people's public representation of this emerging technology and highlighting prevailing themes, opinions, and emotions. Media framing provides the theoretical framework, as such framing is crucial in shaping individuals' cognitive schemas regarding technology. A qualitative method was applied to unveil patterns, correlations, and recurring themes in beliefs about the main topic, deepfakes, as discussed on the forum Quora. The final extracted text corpus consisted of 166 answers to 17 questions. The analysis highlighted the 20 most prevalent critical lemmas, with deepfake the most frequent. Co-occurrence analysis identified words frequently appearing with the lemma deepfake, including video, create, and artificial intelligence. Finally, thematic analysis identified eight main themes within the deepfake corpus. Cognitive processes rely on critical thinking skills for detecting anomalies in fake videos and for discerning, from an ethical point of view, between the negative and positive impacts of deepfakes. Moreover, people adapt their beliefs and mental schemas concerning the representation of the technology. Future studies should explore the role of media literacy in helping individuals identify deepfake content, since people may not be familiar with the concept of deepfakes or may not fully understand their negative or positive implications. Increased awareness and understanding of the technology can empower individuals to critically evaluate media related to artificial intelligence.
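As a rough illustration of the lemma-frequency and co-occurrence analysis described in this abstract, the minimal Python sketch below counts lemmas and lemma pairs in a handful of invented, already-lemmatised answers. It is not the authors' pipeline; the texts and names are hypothetical.

```python
# Sketch of lemma frequency and co-occurrence counting on a toy corpus.
# The "answers" below are invented stand-ins for lemmatised Quora answers.
from collections import Counter
from itertools import combinations

answers = [
    "deepfake video create artificial_intelligence face swap",
    "deepfake spread misinformation video social_media",
    "artificial_intelligence create deepfake voice clone",
]

lemma_counts = Counter()   # how often each lemma occurs overall
pair_counts = Counter()    # how often two lemmas occur in the same answer

for answer in answers:
    lemmas = answer.split()
    lemma_counts.update(lemmas)
    pair_counts.update(combinations(sorted(set(lemmas)), 2))

print(lemma_counts.most_common(5))
print([(pair, n) for pair, n in pair_counts.most_common() if "deepfake" in pair][:5])
```

A real analysis would first tokenise and lemmatise the raw answers and typically restrict co-occurrence to content words, but the counting logic is the same.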
Affiliation(s)
- Barbara Caci
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Giulia Giordano
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Marianna Alesi
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Ambra Gentile
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Chiara Agnello
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Marco La Cascia
- Department of Engineering, University of Palermo, Palermo, Italy
- Sonia Ingoglia
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Cristiano Inguglia
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Dario Monzani
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Applied Research Division for Cognitive and Psychological Science, European Institute of Oncology IRCCS, IEO, Milan, Italy
3
Chein JM, Martinez SA, Barone AR. Human intelligence can safeguard against artificial intelligence: individual differences in the discernment of human from AI texts. Sci Rep 2024; 14:25989. PMID: 39472489; PMCID: PMC11522284; DOI: 10.1038/s41598-024-76218-y.
Abstract
Artificial intelligence (AI) models can produce output that closely mimics human-generated content. We examined individual differences in the human ability to differentiate human- from AI-generated texts, exploring relationships with fluid intelligence, executive functioning, empathy, and digital habits. Overall, participants exhibited better than chance text discrimination, with substantial variation across individuals. Fluid intelligence strongly predicted differences in the ability to distinguish human from AI, but executive functioning and empathy did not. Meanwhile, heavier smartphone and social media use predicted misattribution of AI content (mistaking it for human). Determinations about the origin of encountered content also affected sharing preferences, with those who were better able to distinguish human from AI indicating a lower likelihood of sharing AI content online. Word-level differences in linguistic composition of the texts did not meaningfully influence participants' judgements. These findings inform our understanding of how individual difference factors may shape the course of human interactions with AI-generated information.
Affiliation(s)
- J M Chein
- Department of Psychology and Neuroscience, Temple University, Weiss Hall, 1701 N. 13th St, Philadelphia, PA, 19122, USA.
- S A Martinez
- Department of Psychology and Neuroscience, Temple University, Weiss Hall, 1701 N. 13th St, Philadelphia, PA, 19122, USA
- A R Barone
- Department of Psychology and Neuroscience, Temple University, Weiss Hall, 1701 N. 13th St, Philadelphia, PA, 19122, USA
4
Navarro Martínez O, Fernández-García D, Cuartero Monteagudo N, Forero-Rincón O. Possible Health Benefits and Risks of DeepFake Videos: A Qualitative Study in Nursing Students. Nurs Rep 2024; 14:2746-2757. PMID: 39449440; PMCID: PMC11503397; DOI: 10.3390/nursrep14040203.
Abstract
BACKGROUND "DeepFakes" are synthetic performances created by AI, using neural networks to exchange faces in images and modify voices. OBJECTIVE Due to the novelty and limited literature on its risks/benefits, this paper aims to determine how young nursing students perceive DeepFake technology, its ethical implications, and its potential benefits in nursing. METHODS This qualitative study used thematic content analysis (the Braun and Clarke method) with videos recorded by 50 third-year nursing students, who answered three questions about DeepFake technology. The data were analyzed using ATLAS.ti (version 22), and the project was approved by the Ethics Committee (code UCV/2021-2022/116). RESULTS Data analysis identified 21 descriptive codes, classified into four main themes: advantages, disadvantages, health applications, and ethical dilemmas. Benefits noted by students include use in diagnosis, patient accompaniment, training, and learning. Perceived risks include cyberbullying, loss of identity, and negative psychological impacts from unreal memories. CONCLUSIONS Nursing students see both pros and cons in DeepFake technology and are aware of the ethical dilemmas it poses. They also identified promising healthcare applications that could enhance nurses' leadership in digital health, stressing the importance of regulation and education to fully leverage its potential.
Affiliation(s)
- Olga Navarro Martínez
- Nursing Education and Care Research Group (GRIECE), Nursing Department, Faculty of Nursing and Podiatry, Universitat de València, Menéndez y Pelayo, 19, 46010 Valencia, Spain
- David Fernández-García
- Faculty of Medicine and Health Sciences, Catholic University of Valencia San Vicente Mártir, C/Espartero 7, 46007 Valencia, Spain
- Noemí Cuartero Monteagudo
- Faculty of Medicine and Health Sciences, Catholic University of Valencia San Vicente Mártir, C/Espartero 7, 46007 Valencia, Spain
- Nursing Department, Faculty of Nursing and Podiatry, Universitat de València, Menéndez y Pelayo, 19, 46010 Valencia, Spain
- Olga Forero-Rincón
- Faculty of Medicine and Health Sciences, Catholic University of Valencia San Vicente Mártir, C/Espartero 7, 46007 Valencia, Spain
5
Becker C, Conduit R, Chouinard PA, Laycock R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav Res Methods 2024; 56:7674-7690. PMID: 38834812; PMCID: PMC11362322; DOI: 10.3758/s13428-024-02443-y.
Abstract
Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study aimed to investigate how humans perceive emotions from faces using real videos and two different approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamic morphed expressions as less intense when compared with videos (all emotions) and deepfakes (fearful, happy, sad). Videos and deepfakes were perceived similarly. Additionally, they perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings. The findings also suggest that deepfakes may offer a more suitable standardized stimulus type compared to morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model. ChatGPT successfully identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, our participants perceived dynamic morphs as less natural compared with videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.
6
Purcell ZA, Dong M, Nussberger AM, Köbis N, Jakesch M. People have different expectations for their own versus others' use of AI-mediated communication tools. Br J Psychol 2024. PMID: 39230876; DOI: 10.1111/bjop.12727.
Abstract
Artificial intelligence (AI) can enhance human communication, for example, by improving the quality of our writing, voice or appearance. However, AI-mediated communication also has risks: it may increase deception, compromise authenticity, or yield widespread mistrust. As a result, both policymakers and technology firms are developing approaches to prevent and reduce potentially unacceptable uses of AI communication technologies. However, we do not yet know what people believe is acceptable or what their expectations are regarding usage. Drawing on normative psychology theories, we examine people's judgements of the acceptability of open and secret AI use, as well as people's expectations of their own and others' use. In two studies with representative samples (Study 1: N = 477; Study 2: N = 765), we find that people are less accepting of secret than open AI use in communication, but only when directly compared. Our results also suggest that people believe others will use AI communication tools more than they would themselves and that people do not expect others' use to align with their expectations of what is acceptable. While much attention has been focused on transparency measures, our results suggest that self-other differences are a central factor for understanding people's attitudes and expectations for AI-mediated communication.
Affiliation(s)
- Mengchen Dong
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Anne-Marie Nussberger
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Nils Köbis
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
7
Groh M, Sankaranarayanan A, Singh N, Kim DY, Lippman A, Picard R. Human detection of political speech deepfakes across transcripts, audio, and video. Nat Commun 2024; 15:7629. PMID: 39223110; PMCID: PMC11368926; DOI: 10.1038/s41467-024-51998-z.
Abstract
Recent advances in technology for hyper-realistic visual and audio effects provoke the concern that deepfake videos of political speeches will soon be indistinguishable from authentic video. We conduct 5 pre-registered randomized experiments with N = 2215 participants to evaluate how accurately humans distinguish real political speeches from fabrications across base rates of misinformation, audio sources, question framings with and without priming, and media modalities. We do not find that base rates of misinformation have statistically significant effects on discernment. We find that deepfakes with audio produced by state-of-the-art text-to-speech algorithms are harder to discern than the same deepfakes with voice-actor audio. Moreover, across all experiments and question framings, we find that audio and visual information enables more accurate discernment than text alone: human discernment relies more on how something is said (the audio-visual cues) than on what is said (the speech content).
Affiliation(s)
- Matthew Groh
- Kellogg School of Management, Northwestern University, Evanston, IL, USA.
- Aruna Sankaranarayanan
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Nikhil Singh
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dong Young Kim
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Andrew Lippman
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rosalind Picard
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
8
Lintner T. A systematic review of AI literacy scales. NPJ Sci Learn 2024; 9:50. PMID: 39107327; PMCID: PMC11303566; DOI: 10.1038/s41539-024-00264-4.
Abstract
With the opportunities and challenges stemming from developments in artificial intelligence and its integration into society, AI literacy has become a key concern. Utilizing quality AI literacy instruments is crucial for understanding and promoting AI literacy development. This systematic review assessed the quality of AI literacy scales using the COSMIN tool, aiming to aid researchers in choosing instruments for AI literacy assessment. The review identified 22 studies validating 16 scales targeting various populations, including the general population, higher education students, secondary education students, and teachers. Overall, the scales demonstrated good structural validity and internal consistency. On the other hand, only a few have been tested for content validity, reliability, construct validity, and responsiveness. None of the scales have been tested for cross-cultural validity and measurement error. Most studies did not report any interpretability indicators and almost none had raw data available. There are three performance-based scales available, compared with 13 self-report scales.
Affiliation(s)
- Tomáš Lintner
- Department of Educational Sciences, Faculty of Arts, Masaryk University, Brno, Czech Republic.
- Institute SYRI, Brno, Czech Republic.
9
Chein J, Martinez S, Barone A. Can human intelligence safeguard against artificial intelligence? Exploring individual differences in the discernment of human from AI texts. Res Sq 2024:rs.3.rs-4277893. PMID: 38746113; PMCID: PMC11092869; DOI: 10.21203/rs.3.rs-4277893/v1.
Abstract
Artificial intelligence (AI) models can produce output that closely mimics human-generated content. We examined individual differences in the human ability to differentiate human- from AI-generated texts, exploring relationships with fluid intelligence, executive functioning, empathy, and digital habits. Overall, participants exhibited better than chance text discrimination, with substantial variation across individuals. Fluid intelligence strongly predicted differences in the ability to distinguish human from AI, but executive functioning and empathy did not. Meanwhile, heavier smartphone and social media use predicted misattribution of AI content (mistaking it for human). Determinations about the origin of encountered content also affected sharing preferences, with those who were better able to distinguish human from AI indicating a lower likelihood of sharing AI content online. Word-level differences in linguistic composition of the texts did not meaningfully influence participants' judgements. These findings inform our understanding of how individual difference factors may shape the course of human interactions with AI-generated information.
10
Newman EJ, Schwarz N. Misinformed by images: How images influence perceptions of truth and what can be done about it. Curr Opin Psychol 2024; 56:101778. PMID: 38134526; DOI: 10.1016/j.copsyc.2023.101778.
Abstract
We organize image types by their substantive relationship with textual claims and discuss their impact on attention, comprehension, memory, and judgment. Photos do not need to be false (altered or generated) to mislead; real photos can create a slanted representation or be repurposed from different events. Even semantically related non-probative photos, merely inserted to attract eyeballs, can increase message acceptance through increased fluency. Messages with images receive more attention and reach a wider audience. Text-congruent images can scaffold the comprehension of true and false claims and support the formation of correct and false memories. Standard laboratory procedures may underestimate the impact of images in natural media contexts: by drawing all participants' attention to a message that may be ignored without an image, they inflate message effects in the control condition. Misleading images are difficult to identify and their influence often remains outside of awareness, making it hard to curb their influence through critical-thinking interventions. Current concerns about deep fakes may reduce trust in all images, potentially limiting their power to mislead as well as inform. More research is needed to understand how knowing that an image is misleading influences inferences, impressions, and judgments beyond immediate assessments of the image's credibility.
Affiliation(s)
- Eryn J Newman
- School of Medicine and Psychology, The Australian National University, Canberra, Australia.
- Norbert Schwarz
- Mind and Society Center, University of Southern California, Los Angeles, USA; Department of Psychology, University of Southern California, Los Angeles, USA; Marshall School of Business, University of Southern California, Los Angeles, USA.
11
Rapp DN, Withall MM. Confidence as a metacognitive contributor to and consequence of misinformation experiences. Curr Opin Psychol 2024; 55:101735. PMID: 38041918; DOI: 10.1016/j.copsyc.2023.101735.
Abstract
Exposures to inaccurate information can lead people to become confused about what is true, to doubt their understandings, and to rely on the ideas later. Recent work has begun to investigate the role of metacognition in these effects. We review research foregrounding confidence as an exemplar metacognitive contributor to misinformation experiences. Miscalibrations between confidence about what one knows, and the actual knowledge one possesses, can help explain why people might hold fast to misinformed beliefs even in the face of counterevidence. Miscalibrations can also emerge after brief exposures to new misinformation, allowing even obvious inaccuracies to influence subsequent performance. Evidence additionally suggests confidence may present a useful target for intervention, helping to encourage careful evaluation under the right conditions.
Affiliation(s)
- David N Rapp
- Department of Psychology, Northwestern University, Evanston, IL, USA; School of Education and Social Policy, Northwestern University, Evanston, IL, USA.
- Mandy M Withall
- Department of Psychology, Northwestern University, Evanston, IL, USA
12
Maria Pizzoli SF, Vergani L, Monzani D, Scotto L, Cincidda C, Pravettoni G. The Sound of Grief: A Critical Discussion on the Experience of Creating and Listening to the Digitally Reproduced Voice of the Deceived. Omega (Westport) 2024:302228231225273. PMID: 38176688; DOI: 10.1177/00302228231225273.
Abstract
Technological tools allow for the reproduction and control of peculiar stimuli, such as audio clips reproducing the voices of deceased people. Artificial intelligence makes it possible to create such vocal messages at home from an audio clip. Recently, some videos and documentaries depicting people interacting with artificial intelligence content related to the deceased have been released to the general public. However, the possibility of interacting with realistic stimuli related to deceased loved ones can create peculiar and delicate experiences and should gain the attention of the scientific community and mental health professionals. Listening to, and searching for, experiences related to the deceased might reflect a natural way to process and live the experience of grieving, or the presence of symptoms related to more severe conditions. Moreover, such powerful stimuli might be potentially harmful to users if not appropriately used. To the best of our knowledge, no scientific literature yet exists on the topic of listening to audio clips with the voice of the deceased, although various people have shared thoughts and feelings about these habits on social networks and forums. Given the relevant psychological impact that grief can have on a person, an open discussion on the possibilities and risks of the availability of digital stimuli related to grief should be taken up by the scientific community.
Affiliation(s)
- Silvia Francesca Maria Pizzoli
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore di Milano, Milan, Italy
- Laura Vergani
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Dario Monzani
- Department of Psychology, Educational Science and Human Movement, University of Palermo, Palermo, Italy
- Ludovica Scotto
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, Milan, Italy
- Clizia Cincidda
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, Milan, Italy
- Gabriella Pravettoni
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, Milan, Italy
13
Cowles K, Miller R, Suppok R. When Seeing Isn't Believing: Navigating Visual Health Misinformation through Library Instruction. Med Ref Serv Q 2024; 43:44-58. PMID: 38237023; DOI: 10.1080/02763869.2024.2290963.
Abstract
Visual misinformation poses unique challenges to public health due to its potential for persuasiveness and rapid spread on social media. In this article, librarians at the University of Pittsburgh Health Sciences Library System identify four types of visual health misinformation: misleading graphs and charts, out of context visuals, image manipulation in scientific publications, and AI-generated images and videos. To educate our campus's health sciences audience and wider community on these topics, we have developed a range of instruction about visual health misinformation. We describe our strategies and provide suggestions for implementing visual misinformation programming for a variety of audiences.
Affiliation(s)
- Kelsey Cowles
- Health Sciences Library System, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Rebekah Miller
- Health Sciences Library System, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Rachel Suppok
- Health Sciences Library System, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
14
Abraham J, Putra HA, Prayoga T, Warnars HLHS, Manurung RH, Nainggolan T. Prediction of self-efficacy in recognizing deepfakes based on personality traits. F1000Res 2023; 11:1529. PMID: 38098756; PMCID: PMC10719557; DOI: 10.12688/f1000research.128915.3.
Abstract
Background: While deepfake technology is still relatively new, concerns are increasing as deepfakes become harder to spot. The first question we need to ask is how good humans are at recognizing deepfakes: realistic-looking videos or images, generated by artificial intelligence-based technology, that show people doing or saying things they never actually did or said. Research has shown that an individual's self-efficacy correlates with their ability to detect deepfakes, and previous studies suggest that personality traits are among the most fundamental predictors of self-efficacy. In this study, we ask: how do people's personality traits influence their self-efficacy in recognizing deepfakes? Methods: A predictive correlational design with multiple linear regression as the data analysis technique was used. The participants were 200 Indonesian young adults. Results: Only the traits of Honesty-Humility and Agreeableness predicted this self-efficacy, in the negative and positive directions, respectively. The traits of Emotionality, Extraversion, Conscientiousness, and Openness did not predict it. Conclusion: Self-efficacy in spotting deepfakes can be predicted by certain personality traits.
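As an illustration of the analysis named in the Methods, the minimal sketch below fits a multiple linear regression of deepfake-recognition self-efficacy on six HEXACO trait scores using simulated data. The variable names, data, and effect sizes are hypothetical; this does not reproduce the authors' dataset or results.

```python
# Minimal sketch: multiple linear regression of recognition self-efficacy on
# HEXACO trait scores. All data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # sample size reported in the abstract

traits = ["honesty_humility", "emotionality", "extraversion",
          "agreeableness", "conscientiousness", "openness"]
X = rng.normal(size=(n, len(traits)))                      # standardised trait scores (toy)
y = -0.3 * X[:, 0] + 0.25 * X[:, 3] + rng.normal(size=n)   # toy outcome: self-efficacy

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary(xname=["const"] + traits))
```

With real data, X would hold participants' HEXACO scale scores and y their self-efficacy scores, and the sign and significance of each coefficient would indicate the direction and reliability of each trait's prediction.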
Affiliation(s)
- Juneman Abraham
- Psychology Department, Faculty of Humanities, Bina Nusantara University, Jakarta, 11480, Indonesia
- Heru Alamsyah Putra
- Psychology Department, Faculty of Humanities, Bina Nusantara University, Jakarta, 11480, Indonesia
- Rudi Hartono Manurung
- Japanese Department, Faculty of Humanities, Bina Nusantara University, Jakarta, 11480, Indonesia
- Togiaratua Nainggolan
- Research Center for Social Welfare, Village, and Connectivity, National Research and Innovation Agency, Jakarta, 10340, Indonesia
15
Ahmed S, Chua HW. Perception and deception: Exploring individual responses to deepfakes across different modalities. Heliyon 2023; 9:e20383. PMID: 37810833; PMCID: PMC10556585; DOI: 10.1016/j.heliyon.2023.e20383.
Abstract
This study is one of the first to investigate the relationship between modalities and individuals' tendencies to believe and share different forms of deepfakes (also deep fakes). Using an online survey experiment conducted in the US, participants were randomly assigned to one of three disinformation conditions (video deepfakes, audio deepfakes, and cheap fakes) to test the effect of single modality versus multimodality on individuals' perceived claim accuracy and sharing intentions. In addition, the impact of cognitive ability on perceived claim accuracy and sharing intentions across conditions is also examined. The results suggest that individuals are likelier to perceive video deepfakes as more accurate than cheap fakes, but not audio deepfakes. Yet, individuals are more likely to share video deepfakes than cheap and audio deepfakes. We also found that individuals with high cognitive ability are less likely to perceive deepfakes as accurate or share them across formats. The findings emphasize that deepfakes are not monolithic, and associated modalities should be considered when studying user engagement with deepfakes.
16
Mai KT, Bray S, Davies T, Griffin LD. Warning: Humans cannot reliably detect speech deepfakes. PLoS One 2023; 18:e0285333. PMID: 37531336; PMCID: PMC10395974; DOI: 10.1371/journal.pone.0285333.
Abstract
Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest security threats arising from progress in artificial intelligence due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. We found that detection capability is unreliable. Listeners only correctly spotted the deepfakes 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes only improves results slightly. As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.
Affiliation(s)
- Kimberly T Mai
- Department of Security and Crime Science, University College London, London, United Kingdom
- Department of Computer Science, University College London, London, United Kingdom
- Sergi Bray
- Department of Security and Crime Science, University College London, London, United Kingdom
- Department of Computer Science, University College London, London, United Kingdom
- Toby Davies
- Department of Security and Crime Science, University College London, London, United Kingdom
- Lewis D Griffin
- Department of Computer Science, University College London, London, United Kingdom
17
Lu H, Chu H. Let the dead talk: How deepfake resurrection narratives influence audience response in prosocial contexts. Comput Human Behav 2023. DOI: 10.1016/j.chb.2023.107761.
18
Eberl A, Kühn J, Wolbring T. Using deepfakes for experiments in the social sciences - A pilot study. Front Sociol 2022; 7:907199. PMID: 36524213; PMCID: PMC9745035; DOI: 10.3389/fsoc.2022.907199.
Abstract
The advent of deepfakes - the manipulation of audio recordings, images and videos based on deep learning techniques - has important implications for science and society. Current studies focus primarily on the detection and dangers of deepfakes. In contrast, less attention is paid to the potential of this technology for substantive research - particularly as an approach for controlled experimental manipulations in the social sciences. In this paper, we aim to fill this research gap and argue that deepfakes can be a valuable tool for conducting social science experiments. To demonstrate some of the potentials and pitfalls of deepfakes, we conducted a pilot study on the effects of physical attractiveness on student evaluations of teachers. To this end, we created a deepfake video varying the physical attractiveness of the instructor as compared to the original video and asked students to rate the presentation and instructor. First, our results show that social scientists without special knowledge in computer science can successfully create a credible deepfake within a reasonable time. Student ratings of the quality of the two videos were comparable and students did not detect the deepfake. Second, we use deepfakes to examine a substantive research question: whether there are differences in the ratings of a physically more and a physically less attractive instructor. Our suggestive evidence points toward a beauty penalty. Thus, our study supports the idea that deepfakes can be used to introduce systematic variations into experiments while offering a high degree of experimental control. Finally, we discuss the feasibility of deepfakes as an experimental manipulation and the ethical challenges of using deepfakes in experiments.
19
An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians’ personality characteristics. Curr Psychol 2022. DOI: 10.1007/s12144-022-03621-x.
20
Wang S, Kim S. Users’ emotional and behavioral responses to deepfake videos of K-pop idols. Comput Human Behav 2022. DOI: 10.1016/j.chb.2022.107305.