1
Barrington S, Cooper EA, Farid H. People are poorly equipped to detect AI-powered voice clones. Sci Rep 2025;15:11004. PMID: 40164656; PMCID: PMC11958761; DOI: 10.1038/s41598-025-94170-3
Abstract
As generative artificial intelligence (AI) continues its ballistic trajectory, everything from text to audio, image, and video generation continues to improve at mimicking human-generated content. Through a series of perceptual studies, we report on the realism of AI-generated voices in terms of identity matching and naturalness. We find that human participants cannot consistently identify recordings of AI-generated voices. Specifically, participants perceived the identity of an AI-generated voice to be the same as its real counterpart approximately [Formula: see text] of the time, and correctly identified a voice as AI-generated only about [Formula: see text] of the time.
Affiliation(s)
- Sarah Barrington
- School of Information, University of California, Berkeley, CA, 94720, USA
- Emily A Cooper
- Herbert Wertheim School of Optometry, University of California, Berkeley, CA, 94720, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, 94720, USA
- Hany Farid
- School of Information, University of California, Berkeley, CA, 94720, USA
- Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, 94720, USA
2
Zhang B, Cui H, Nguyen V, Whitty M. Audio Deepfake Detection: What Has Been Achieved and What Lies Ahead. Sensors (Basel) 2025;25:1989. PMID: 40218502; PMCID: PMC11991371; DOI: 10.3390/s25071989
Abstract
Advancements in audio synthesis and manipulation technologies have reshaped applications such as personalised virtual assistants, voice cloning for creative content, and language learning tools. However, the misuse of these technologies to create audio deepfakes has raised serious concerns about security, privacy, and trust. Studies reveal that human judgement of deepfake audio is not always reliable, highlighting the urgent need for robust detection technologies to mitigate these risks. This paper provides a comprehensive survey of recent advancements in audio deepfake detection, with a focus on cutting-edge developments in the past few years. It begins by exploring the foundational methods of audio deepfake generation, including text-to-speech (TTS) and voice conversion (VC), followed by a review of datasets driving progress in the field. The survey then delves into detection approaches, covering frontend feature extraction, backend classification models, and end-to-end systems. Additionally, emerging topics such as privacy-preserving detection, explainability, and fairness are discussed. Finally, this paper identifies key challenges and outlines future directions for developing robust and scalable audio deepfake detection systems.
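The frontend-feature/backend-classifier split that this survey covers can be illustrated with a minimal sketch. The feature extractor, classifier, function names, and data below are placeholder assumptions for illustration, not methods from the surveyed papers; real pipelines use learned frontends (e.g. self-supervised speech representations) and far stronger backends.

```python
# Illustrative frontend/backend split for audio deepfake detection.
# Features, classifier, and data here are toy placeholders, not the survey's methods.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def frontend_features(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Toy frontend: log power spectral density summarised into a fixed-length vector."""
    freqs, psd = welch(waveform, fs=sr, nperseg=512)
    log_psd = np.log(psd + 1e-10)
    # Summarise the spectrum into coarse bands plus global statistics.
    bands = np.array_split(log_psd, 16)
    return np.array([b.mean() for b in bands] + [log_psd.mean(), log_psd.std()])

def build_backend():
    """Toy backend: a linear classifier over the frontend features."""
    return make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: random waveforms labelled real (0) or fake (1).
    clips = [rng.standard_normal(16000) for _ in range(40)]
    labels = np.array([0, 1] * 20)
    X = np.vstack([frontend_features(c) for c in clips])
    detector = build_backend().fit(X, labels)
    print("predicted:", detector.predict(X[:5]))
```

An end-to-end system, by contrast, would replace both stages with a single neural network trained directly on waveforms or spectrograms.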
Affiliation(s)
- Hui Cui
- Department of Software Systems and Cybersecurity, Faculty of IT, Monash University, Melbourne, VIC 3800, Australia; (B.Z.); (V.N.); (M.W.)
3
Herrmann B, Cui ME. Impaired Prosodic Processing but Not Hearing Function Is Associated with an Age-Related Reduction in AI Speech Recognition. Audiol Res 2025;15:14. PMID: 39997158; PMCID: PMC11852301; DOI: 10.3390/audiolres15010014
Abstract
BACKGROUND/OBJECTIVES Voice artificial intelligence (AI) technology is becoming increasingly common. Recent work indicates that middle-aged to older adults are less able to identify modern AI speech compared to younger adults, but the underlying causes are unclear. METHODS The current study with younger and middle-aged to older adults investigated factors that could explain the age-related reduction in AI speech identification. Experiment 1 investigated whether high-frequency information in speech (to which middle-aged to older adults often have less access because of sensitivity loss at high frequencies) contributes to age-group differences. Experiment 2 investigated whether an age-related reduction in the ability to process prosodic information in speech predicts the reduction in AI speech identification. RESULTS Results for Experiment 1 show that middle-aged to older adults are less able to identify AI speech for both full-bandwidth speech and speech from which information above 4 kHz is removed, making the contribution of high-frequency hearing loss unlikely. Experiment 2 shows that the ability to identify AI speech is greater in individuals who also show a greater ability to identify emotions from prosodic speech information, after accounting for hearing function and self-rated experience with voice-AI systems. CONCLUSIONS The current results suggest that the ability to identify AI speech is related to the accurate processing of prosodic information.
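The band-limiting manipulation in Experiment 1 (removing information above 4 kHz) amounts to low-pass filtering the speech stimuli. A minimal sketch follows, assuming 16-bit WAV input sampled above 8 kHz and an 8th-order Butterworth filter; the study's actual filter design and file handling are not specified here, so these choices and the filenames are illustrative.

```python
# Remove spectral information above ~4 kHz from a speech recording (hypothetical filenames).
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def lowpass_4khz(in_path: str, out_path: str, cutoff_hz: float = 4000.0) -> None:
    sr, audio = wavfile.read(in_path)          # expects PCM WAV; sr must exceed 2 * cutoff_hz
    audio = audio.astype(np.float64)
    # 8th-order Butterworth low-pass, applied forwards and backwards (zero phase).
    sos = butter(8, cutoff_hz, btype="low", fs=sr, output="sos")
    filtered = sosfiltfilt(sos, audio, axis=0)
    # Rescale to the 16-bit range and write back.
    peak = float(np.max(np.abs(filtered)))
    if peak == 0.0:
        peak = 1.0
    wavfile.write(out_path, sr, np.int16(filtered / peak * 32767))

# Example (hypothetical files): lowpass_4khz("speech_full_band.wav", "speech_lowpassed.wav")
```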
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, 3560 Bathurst St., North York, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON M5S 1A1, Canada
- Mo Eric Cui
- Rotman Research Institute, Baycrest Academy for Research and Education, 3560 Bathurst St., North York, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON M5S 1A1, Canada
4
Kim J, Vajravelu BN. Assessing the Current Limitations of Large Language Models in Advancing Health Care Education. JMIR Form Res 2025;9:e51319. PMID: 39819585; PMCID: PMC11756841; DOI: 10.2196/51319
Abstract
The integration of large language models (LLMs), as seen with the generative pretrained transformer series, into health care education and clinical management represents a transformative potential. The practical use of current LLMs in health care sparks great anticipation for new avenues, yet their adoption also elicits considerable concerns that necessitate careful deliberation. This study aims to evaluate the application of state-of-the-art LLMs in health care education, highlighting the following shortcomings as areas requiring significant and urgent improvements: (1) threats to academic integrity, (2) dissemination of misinformation and risks of automation bias, (3) challenges with information completeness and consistency, (4) inequity of access, (5) risks of algorithmic bias, (6) exhibition of moral instability, (7) technological limitations in plugin tools, and (8) lack of regulatory oversight in addressing legal and ethical challenges. Future research should focus on strategically addressing the persistent challenges of LLMs highlighted in this paper, opening the door for effective measures that can improve their application in health care education.
Affiliation(s)
- JaeYong Kim
- School of Pharmacy, Massachusetts College of Pharmacy and Health Sciences, Boston, MA, United States
- Bathri Narayan Vajravelu
- Department of Physician Assistant Studies, Massachusetts College of Pharmacy and Health Sciences, 179 Longwood Avenue, Boston, MA, 02115, United States
5
Patil S, Licari FW. Deepfakes in health care: Decoding digital deceptions. J Am Dent Assoc 2024;155:997-999. PMID: 38727646; DOI: 10.1016/j.adaj.2024.04.006
6
Roswandowitz C, Kathiresan T, Pellegrino E, Dellwo V, Frühholz S. Cortical-striatal brain network distinguishes deepfake from real speaker identity. Commun Biol 2024;7:711. PMID: 38862808; PMCID: PMC11166919; DOI: 10.1038/s42003-024-06372-6
Abstract
Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we test the neurocognitive sensitivity of 25 participants to accept or reject person identities as recreated in audio deepfakes. We generate high-quality voice identity clones from natural speakers by using advanced deepfake technologies. During an identity matching task, participants show intermediate performance with deepfake voices, indicating levels of deception and resistance to deepfake identity spoofing. On the brain level, univariate and multivariate analyses consistently reveal a central cortico-striatal network that decoded the vocal acoustic pattern and deepfake-level (auditory cortex), as well as natural speaker identities (nucleus accumbens), which are valued for their social relevance. This network is embedded in a broader neural identity and object recognition network. Humans can thus be partly tricked by deepfakes, but the neurocognitive mechanisms identified during deepfake processing open windows for strengthening human resilience to fake information.
Affiliation(s)
- Claudia Roswandowitz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland.
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland.
- Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland.
- Thayabaran Kathiresan
- Centre for Neuroscience of Speech, University of Melbourne, Melbourne, Australia
- Redenlab, Melbourne, Australia
- Elisa Pellegrino
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Volker Dellwo
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway
7
Kulangareth NV, Kaufman J, Oreskovic J, Fossat Y. Investigation of Deepfake Voice Detection Using Speech Pause Patterns: Algorithm Development and Validation. JMIR Biomed Eng 2024;9:e56245. PMID: 38875685; PMCID: PMC11041410; DOI: 10.2196/56245
Abstract
BACKGROUND The digital era has witnessed an escalating dependence on digital platforms for news and information, coupled with the advent of "deepfake" technology. Deepfakes, leveraging deep learning models on extensive data sets of voice recordings and images, pose substantial threats to media authenticity, potentially leading to unethical misuse such as impersonation and the dissemination of false information. OBJECTIVE To counteract this challenge, this study aims to introduce the concept of innate biological processes to discern between authentic human voices and cloned voices. We propose that the presence or absence of certain perceptual features, such as pauses in speech, can effectively distinguish between cloned and authentic audio. METHODS A total of 49 adult participants representing diverse ethnic backgrounds and accents were recruited. Each participant contributed voice samples for the training of up to 3 distinct voice cloning text-to-speech models and 3 control paragraphs. Subsequently, the cloning models generated synthetic versions of the control paragraphs, resulting in a data set consisting of up to 9 cloned audio samples and 3 control samples per participant. We analyzed the speech pauses caused by biological actions such as respiration, swallowing, and cognitive processes. Five audio features corresponding to speech pause profiles were calculated. Differences between authentic and cloned audio for these features were assessed, and 5 classical machine learning algorithms were implemented using these features to create a prediction model. The generalization capability of the optimal model was evaluated through testing on unseen data, incorporating a model-naive generator, a model-naive paragraph, and model-naive participants. RESULTS Cloned audio exhibited significantly increased time between pauses (P<.001), decreased variation in speech segment length (P=.003), increased overall proportion of time speaking (P=.04), and decreased rates of micro- and macropauses in speech (both P=.01). Five machine learning models were implemented using these features, with the AdaBoost model demonstrating the highest performance, achieving a 5-fold cross-validation balanced accuracy of 0.81 (SD 0.05). Other models included support vector machine (balanced accuracy 0.79, SD 0.03), random forest (balanced accuracy 0.78, SD 0.04), logistic regression (balanced accuracy 0.76, SD 0.10), and decision tree (balanced accuracy 0.72, SD 0.06). When evaluated on unseen data, the optimal AdaBoost model achieved an overall test accuracy of 0.79. CONCLUSIONS The incorporation of perceptual, biological features into machine learning models demonstrates promising results in distinguishing between authentic human voices and cloned audio.
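The pause-profile features and AdaBoost classifier described above can be sketched as follows. This is not the authors' implementation: the energy-threshold pause detector, the micro-/macro-pause duration boundaries, and the synthetic training data are assumptions made for illustration only.

```python
# Sketch of pause-profile features + AdaBoost, loosely following the study's feature list.
# Thresholds and pause definitions here are illustrative assumptions, not the paper's values.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def pause_features(waveform, sr=16000, frame_ms=25, energy_thresh=0.02,
                   micro_s=0.2, macro_s=1.0):
    """Five pause-profile features from an energy-based speech/silence mask."""
    frame = int(sr * frame_ms / 1000)
    n = len(waveform) // frame
    energy = np.array([np.sqrt(np.mean(waveform[i * frame:(i + 1) * frame] ** 2)) for i in range(n)])
    speech = energy > energy_thresh                      # True = speaking frame
    # Run-length encode the mask into speech and pause segments (durations in seconds).
    changes = np.flatnonzero(np.diff(speech.astype(int))) + 1
    runs = np.split(speech, changes)
    durs = np.array([len(r) * frame / sr for r in runs])
    is_speech = np.array([r[0] for r in runs])
    speech_durs, pause_durs = durs[is_speech], durs[~is_speech]
    total = durs.sum()
    return np.array([
        speech_durs.mean() if speech_durs.size else 0.0,                # mean time between pauses
        speech_durs.std() if speech_durs.size else 0.0,                 # variation in segment length
        speech_durs.sum() / total if total else 0.0,                    # proportion of time speaking
        np.sum((pause_durs >= micro_s) & (pause_durs < macro_s)) / total if total else 0.0,  # micro-pause rate
        np.sum(pause_durs >= macro_s) / total if total else 0.0,        # macro-pause rate
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Stand-in data: noise clips with placeholder labels (0 = authentic, 1 = cloned).
    X = np.vstack([pause_features(rng.standard_normal(16000 * 5) * 0.1) for _ in range(30)])
    y = np.array([0, 1] * 15)
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real data, each row of X would come from a labelled authentic or cloned recording rather than synthetic noise.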
8
Rapp DN, Withall MM. Confidence as a metacognitive contributor to and consequence of misinformation experiences. Curr Opin Psychol 2024;55:101735. PMID: 38041918; DOI: 10.1016/j.copsyc.2023.101735
Abstract
Exposures to inaccurate information can lead people to become confused about what is true, to doubt their understandings, and to rely on the ideas later. Recent work has begun to investigate the role of metacognition in these effects. We review research foregrounding confidence as an exemplar metacognitive contributor to misinformation experiences. Miscalibrations between confidence about what one knows, and the actual knowledge one possesses, can help explain why people might hold fast to misinformed beliefs even in the face of counterevidence. Miscalibrations can also emerge after brief exposures to new misinformation, allowing even obvious inaccuracies to influence subsequent performance. Evidence additionally suggests confidence may present a useful target for intervention, helping to encourage careful evaluation under the right conditions.
Affiliation(s)
- David N Rapp
- Department of Psychology, Northwestern University, Evanston, IL, USA
- School of Education and Social Policy, Northwestern University, Evanston, IL, USA
- Mandy M Withall
- Department of Psychology, Northwestern University, Evanston, IL, USA
9
Wilson A, Wilkes S, Teramoto Y, Hale S. Multimodal analysis of disinformation and misinformation. R Soc Open Sci 2023;10:230964. PMID: 38126058; PMCID: PMC10731323; DOI: 10.1098/rsos.230964
Abstract
The use of disinformation and misinformation campaigns in the media has attracted much attention from academics and policy-makers. Multimodal analysis, that is, the analysis of two or more semiotic systems (language, gestures, images, sounds, among others) in their interrelation and interaction, is essential to understanding dis-/misinformation efforts because most human communication goes beyond just words. There is a confluence of many disciplines (e.g. computer science, linguistics, political science, communication studies) that are developing methods and analytical models of multimodal communication. This literature review brings research strands from these disciplines together, providing a map of the multi- and interdisciplinary landscape for multimodal analysis of dis-/misinformation. It records the substantial growth, starting from the second quarter of 2020 (the start of the COVID-19 epidemic in Western Europe), in the number of studies on multimodal dis-/misinformation coming from the field of computer science. The review examines that category of studies in more detail. Finally, the review identifies gaps in multimodal research on dis-/misinformation and suggests ways to bridge these gaps, including future cross-disciplinary research directions. Our review provides scholars from different disciplines working on dis-/misinformation with a much-needed bird's-eye view of the rapidly emerging research on multimodal dis-/misinformation.
Affiliation(s)
- Anna Wilson
- Oxford School of Global and Area Studies, University of Oxford, Oxford OX1 2JD, UK
- Seb Wilkes
- Department of Physics, University of Oxford, Oxford, UK
- Scott Hale
- Oxford Internet Institute, University of Oxford, Oxford, UK
10
Ahmed S, Chua HW. Perception and deception: Exploring individual responses to deepfakes across different modalities. Heliyon 2023;9:e20383. PMID: 37810833; PMCID: PMC10556585; DOI: 10.1016/j.heliyon.2023.e20383
Abstract
This study is one of the first to investigate the relationship between modalities and individuals' tendencies to believe and share different forms of deepfakes (also deep fakes). Using an online survey experiment conducted in the US, participants were randomly assigned to one of three disinformation conditions: video deepfakes, audio deepfakes, and cheap fakes to test the effect of single modality against multimodality and how it affects individuals' perceived claim accuracy and sharing intentions. In addition, the impact of cognitive ability on perceived claim accuracy and sharing intentions between conditions are also examined. The results suggest that individuals are likelier to perceive video deepfakes as more accurate than cheap fakes, but not audio deepfakes. Yet, individuals are more likely to share video deepfakes than cheap and audio deepfakes. We also found that individuals with high cognitive ability are less likely to perceive deepfakes as accurate or share them across formats. The findings emphasize that deepfakes are not monolithic, and associated modalities should be considered when studying user engagement with deepfakes.