1
Ching D, Twomey J, Aylett MP, Quayle M, Linehan C, Murphy G. Can deepfakes manipulate us? Assessing the evidence via a critical scoping review. PLoS One 2025;20:e0320124. PMID: 40315197; PMCID: PMC12047760; DOI: 10.1371/journal.pone.0320124.
Abstract
Deepfakes are one of the most recent developments in misinformation technology and are capable of superimposing one person's face onto another in video format. The potential of this technology to defame and cause harm is clear. However, despite the grave concerns expressed about deepfakes, these concerns are rarely accompanied by empirical evidence. We present a scoping review of the existing empirical studies that investigate the effects of viewing deepfakes on people's beliefs, memories, and behaviour. Five databases were searched, producing an initial sample of 2004 papers, from which 22 relevant papers were identified, varying in methodology and research methods used. Overall, we found that early studies on this topic have often produced inconclusive findings regarding the existence of uniquely persuasive or convincing effects of deepfake exposure. Moreover, many experiments demonstrated poor methodology and did not include a non-deepfake comparator (e.g., text-based misinformation). We conclude that speculation and scaremongering about dystopian uses of deepfake technologies have far outpaced the experimental research that assesses these harms. We close by offering insights on how to conduct improved empirical work in this area.
Affiliation(s)
- Didier Ching: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
- John Twomey: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
- Matthew P. Aylett: CereProc Ltd, Edinburgh, United Kingdom; Heriot-Watt University, Edinburgh, United Kingdom
- Michael Quayle: Lero, the Research Ireland Centre for Software, Limerick, Ireland; Centre for Social Issues Research and Department of Psychology, University of Limerick, Limerick, Ireland; Department of Psychology, School of Applied Human Sciences, University of KwaZulu-Natal, Pietermaritzburg, South Africa
- Conor Linehan: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
- Gillian Murphy: School of Applied Psychology, University College Cork, Cork, Ireland; Lero, the Research Ireland Centre for Software, Limerick, Ireland
2
Caci B, Giordano G, Alesi M, Gentile A, Agnello C, Lo Presti L, La Cascia M, Ingoglia S, Inguglia C, Volpes A, Monzani D. The public mental representations of deepfake technology: An in-depth qualitative exploration through Quora text data analysis. PLoS One 2024;19:e0313605. PMID: 39775334; PMCID: PMC11684586; DOI: 10.1371/journal.pone.0313605.
Abstract
The advent of deepfake technology has raised significant concerns regarding its impact on individuals' cognitive processes and beliefs, given the pervasive relationships between technology and human cognition. This study delves into the psychological literature surrounding deepfakes, focusing on people's public representations of this emerging technology and highlighting prevailing themes, opinions, and emotions. Media framing serves as the theoretical framework, as framing is crucial in shaping individuals' cognitive schemas regarding technology. A qualitative method was applied to unveil patterns, correlations, and recurring themes in beliefs about the main topic, deepfakes, as discussed on the forum Quora. The final extracted text corpus consisted of 166 answers to 17 questions. The analysis highlighted the 20 most prevalent critical lemmas, of which deepfake was the most frequent. Co-occurrence analysis identified words frequently appearing with the lemma deepfake, including video, create, and artificial intelligence. Finally, thematic analysis identified eight main themes within the deepfake corpus. Cognitive processes rely on critical-thinking skills when detecting anomalies in fake videos or discerning between the negative and positive impacts of deepfakes from an ethical point of view. Moreover, people adapt their beliefs and mental schemas concerning the representation of the technology. Future studies should explore the role of media literacy in helping individuals identify deepfake content, since people may not be familiar with the concept of deepfakes or may not fully understand their negative or positive implications. Increased awareness and understanding of the technology can empower individuals to critically evaluate media related to Artificial Intelligence.
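To make the lemma-frequency and co-occurrence step concrete, here is a minimal Python sketch of document-level lemma co-occurrence counting. It is an illustration only, not the authors' pipeline: it assumes spaCy with the en_core_web_sm model installed, and the two-item corpus is a hypothetical stand-in for the 166 Quora answers.

    # Minimal sketch of lemma frequency and co-occurrence counting.
    # Not the authors' pipeline; corpus entries are hypothetical stand-ins.
    from collections import Counter
    from itertools import combinations
    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

    corpus = [
        "Deepfake videos are created with artificial intelligence.",
        "It is hard to detect a deepfake video without media literacy.",
    ]

    lemma_freq = Counter()
    cooccur = Counter()
    for doc in nlp.pipe(corpus):
        lemmas = [t.lemma_.lower() for t in doc if t.is_alpha and not t.is_stop]
        lemma_freq.update(lemmas)
        # Count each unordered lemma pair once per answer (document co-occurrence).
        cooccur.update(combinations(sorted(set(lemmas)), 2))

    print(lemma_freq.most_common(20))  # the most prevalent lemmas
    print([(pair, n) for pair, n in cooccur.most_common() if "deepfake" in pair])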
Affiliation(s)
- Barbara Caci: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Giulia Giordano: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Marianna Alesi: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Ambra Gentile: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Chiara Agnello: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Marco La Cascia: Department of Engineering, University of Palermo, Palermo, Italy
- Sonia Ingoglia: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Cristiano Inguglia: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Dario Monzani: Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy; Applied Research Division for Cognitive and Psychological Science, European Institute of Oncology IRCCS, IEO, Milan, Italy
3
Navarro Martínez O, Fernández-García D, Cuartero Monteagudo N, Forero-Rincón O. Possible Health Benefits and Risks of DeepFake Videos: A Qualitative Study in Nursing Students. Nursing Reports 2024;14:2746-2757. PMID: 39449440; PMCID: PMC11503397; DOI: 10.3390/nursrep14040203.
Abstract
BACKGROUND "DeepFakes" are synthetic performances created by AI, using neural networks to exchange faces in images and modify voices. OBJECTIVE Given the novelty of this technology and the limited literature on its risks and benefits, this paper aims to determine how young nursing students perceive DeepFake technology, its ethical implications, and its potential benefits in nursing. METHODS This qualitative study used thematic content analysis (the Braun and Clarke method) of videos recorded by 50 third-year nursing students, who answered three questions about DeepFake technology. The data were analyzed using ATLAS.ti (version 22), and the project was approved by the Ethics Committee (code UCV/2021-2022/116). RESULTS Data analysis identified 21 descriptive codes, classified into four main themes: advantages, disadvantages, health applications, and ethical dilemmas. Benefits noted by students include use in diagnosis, patient accompaniment, training, and learning. Perceived risks include cyberbullying, loss of identity, and negative psychological impacts from unreal memories. CONCLUSIONS Nursing students see both pros and cons in DeepFake technology and are aware of the ethical dilemmas it poses. They also identified promising healthcare applications that could enhance nurses' leadership in digital health, stressing the importance of regulation and education to fully leverage its potential.
Affiliation(s)
- Olga Navarro Martínez: Nursing Education and Care Research Group (GRIECE), Nursing Department, Faculty of Nursing and Podiatry, Universitat de València, Menéndez y Pelayo 19, 46010 Valencia, Spain
- David Fernández-García: Faculty of Medicine and Health Sciences, Catholic University of Valencia San Vicente Mártir, C/Espartero 7, 46007 Valencia, Spain
- Noemí Cuartero Monteagudo: Faculty of Medicine and Health Sciences, Catholic University of Valencia San Vicente Mártir, C/Espartero 7, 46007 Valencia, Spain; Nursing Department, Faculty of Nursing and Podiatry, Universitat de València, Menéndez y Pelayo 19, 46010 Valencia, Spain
- Olga Forero-Rincón: Faculty of Medicine and Health Sciences, Catholic University of Valencia San Vicente Mártir, C/Espartero 7, 46007 Valencia, Spain
4
Groh M, Sankaranarayanan A, Singh N, Kim DY, Lippman A, Picard R. Human detection of political speech deepfakes across transcripts, audio, and video. Nat Commun 2024;15:7629. PMID: 39223110; PMCID: PMC11368926; DOI: 10.1038/s41467-024-51998-z.
Abstract
Recent advances in technology for hyper-realistic visual and audio effects provoke the concern that deepfake videos of political speeches will soon be indistinguishable from authentic video. We conduct 5 pre-registered randomized experiments with N = 2215 participants to evaluate how accurately humans distinguish real political speeches from fabrications across base rates of misinformation, audio sources, question framings with and without priming, and media modalities. We do not find that base rates of misinformation have statistically significant effects on discernment. We find that deepfakes with audio produced by state-of-the-art text-to-speech algorithms are harder to discern than the same deepfakes with voice-actor audio. Moreover, across all experiments and question framings, we find that audio and visual information enables more accurate discernment than text alone: human discernment relies more on how something is said (the audio-visual cues) than on what is said (the speech content).
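Discernment in experiments like these is often summarized with signal-detection measures; the following sketch (standard formulas, invented rates, not the study's data or analysis code) shows how sensitivity d' and response bias c are computed from hit and false-alarm rates.

    # Signal-detection summary of real-vs-fake discernment.
    # The rates below are invented for illustration only.
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # probit transform

    hit_rate = 0.70          # fabricated speeches correctly flagged as fake
    false_alarm_rate = 0.35  # authentic speeches wrongly flagged as fake

    d_prime = z(hit_rate) - z(false_alarm_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # response bias

    print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")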
Affiliation(s)
- Matthew Groh: Kellogg School of Management, Northwestern University, Evanston, IL, USA
- Aruna Sankaranarayanan: Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA; CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Nikhil Singh: Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dong Young Kim: Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Andrew Lippman: Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rosalind Picard: Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
5
Roswandowitz C, Kathiresan T, Pellegrino E, Dellwo V, Frühholz S. Cortical-striatal brain network distinguishes deepfake from real speaker identity. Commun Biol 2024;7:711. PMID: 38862808; PMCID: PMC11166919; DOI: 10.1038/s42003-024-06372-6.
Abstract
Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we test the neurocognitive sensitivity of 25 participants in accepting or rejecting person identities as recreated in audio deepfakes. We generate high-quality voice-identity clones from natural speakers using advanced deepfake technologies. During an identity-matching task, participants show intermediate performance with deepfake voices, indicating both deception by and resistance to deepfake identity spoofing. At the brain level, univariate and multivariate analyses consistently reveal a central cortico-striatal network that decodes the vocal acoustic pattern and deepfake level (auditory cortex), as well as natural speaker identities (nucleus accumbens), which are valued for their social relevance. This network is embedded in a broader neural identity- and object-recognition network. Humans can thus be partly tricked by deepfakes, but the neurocognitive mechanisms identified during deepfake processing open windows for strengthening human resilience to fake information.
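The multivariate ("decoding") analyses mentioned here are commonly implemented as cross-validated classification of voxel activity patterns. The sketch below illustrates that general shape on synthetic data; it is not the authors' analysis, and real fMRI work involves substantial preprocessing that is omitted.

    # Cross-validated MVPA-style decoding on synthetic "voxel patterns".
    # Illustration only; not the authors' fMRI analysis.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 100, 200
    X = rng.normal(size=(n_trials, n_voxels))  # stand-in activity patterns
    y = rng.integers(0, 2, size=n_trials)      # 0 = natural voice, 1 = deepfake
    X[y == 1, :10] += 0.8                      # inject a weak decodable signal

    scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")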
Affiliation(s)
- Claudia Roswandowitz: Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland; Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland; Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Thayabaran Kathiresan: Centre for Neuroscience of Speech, University of Melbourne, Melbourne, Australia; Redenlab, Melbourne, Australia
- Elisa Pellegrino: Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Volker Dellwo: Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Sascha Frühholz: Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland; Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Department of Psychology, University of Oslo, Oslo, Norway
6
Qureshi SM, Saeed A, Almotiri SH, Ahmad F, Al Ghamdi MA. Deepfake forensics: a survey of digital forensic methods for multimodal deepfake identification on social media. PeerJ Comput Sci 2024;10:e2037. PMID: 38855214; PMCID: PMC11157519; DOI: 10.7717/peerj-cs.2037.
Abstract
The rapid advancement of deepfake technology poses an escalating threat of misinformation and fraud enabled by manipulated media. Despite the risks, a comprehensive understanding of deepfake detection techniques has not materialized. This research tackles this knowledge gap by providing an up-to-date systematic survey of the digital forensic methods used to detect deepfakes. A rigorous methodology is followed, consolidating findings from recent publications on deepfake detection innovation. Prevalent datasets that underpin new techniques are analyzed. The effectiveness and limitations of established and emerging detection approaches across modalities including image, video, text and audio are evaluated. Insights into real-world performance are shared through case studies of high-profile deepfake incidents. Current research limitations around aspects like cross-modality detection are highlighted to inform future work. This timely survey furnishes researchers, practitioners and policymakers with a holistic overview of the state-of-the-art in deepfake detection. It concludes that continuous innovation is imperative to counter the rapidly evolving technological landscape enabling deepfakes.
Affiliation(s)
- Atif Saeed: Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
- Sultan H. Almotiri: Department of Cybersecurity, College of Computing, Umm Al-Qura University, Makkah City, Kingdom of Saudi Arabia
- Farooq Ahmad: Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
- Mohammed A. Al Ghamdi: Department of Computer Science and Artificial Intelligence, College of Computing, Umm Al-Qura University, Makkah City, Kingdom of Saudi Arabia
7
Clemons EK, Savin A, Schreieck M, Teilmann-Lock S, Trzaskowski J, Waran R. A face of one's own: The role of an online personae in a digital age and the right to control one's own online personae in the presence of digital hacking. Electronic Markets 2024;34:31. PMID: 38699202; PMCID: PMC11060978; DOI: 10.1007/s12525-024-00713-3.
Abstract
In the post-Covid world, our online personae have become increasingly essential mechanisms for presenting ourselves to the world. Simultaneously, new techniques for hacking online personae have become more widely available, easier to use, and more convincing. This combination of greater reliance on online personae and easier malicious hacking has created serious societal problems. Techniques for training users to detect false content have proved ineffective, and legal remedies for dealing with hacked personae have likewise been inadequate. Consequently, the only remaining alternative is to limit the posting of false content. In this discussion paper, we provide an overview of online personae hacking. As potential remedies, we propose redesigning search-engine and social-media algorithms so that platforms can detect and restrict harmful false content, and we propose a new fundamental right for the EU Charter that would provide legal justification for platforms to protect online reputations. For those platforms that might choose not to protect online reputations, this new right would require that they do so.
Affiliation(s)
- Eric K. Clemons: The Wharton School, University of Pennsylvania, 3730 Walnut Street, 572 Jon M. Huntsman Hall, Philadelphia, PA 19104, USA
- Andrej Savin: Department of Business Humanities and Law, Copenhagen Business School, Solbjerg Plads 3, 2000 Frederiksberg, Denmark
- Maximilian Schreieck: Department of Information Systems, Production and Logistics Management, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Stina Teilmann-Lock: Department of Business Humanities and Law, Copenhagen Business School, Solbjerg Plads 3, 2000 Frederiksberg, Denmark
- Jan Trzaskowski: Department of Business Humanities and Law, Copenhagen Business School, Solbjerg Plads 3, 2000 Frederiksberg, Denmark
- Ravi Waran: Clearwater Paper Corporation, 601 W. Riverside, Suite 1100, Spokane, WA 99201, USA
8
Ratcliff JJ, Andrus T, Miller AK, Olowu F, Capellupo J. When Potential Allies and Targets Do (and Do Not) Confront Anti-Asian Prejudice: Reactions to Blatant and Subtle Prejudice During the COVID-19 Pandemic. Journal of Interpersonal Violence 2023;38:11890-11913. PMID: 37542378; DOI: 10.1177/08862605231188057.
Abstract
Anti-Asian xenophobia has exploded during the COVID-19 pandemic, after U.S. political leaders promoted anti-Asian rhetoric from its start. Confronting prejudice interrupts its future perpetration, but confrontation can only occur to the extent that actions are first attributed to prejudice. Bystanders may attribute less prejudice to speech about the "Chinese Virus" than to more blatant stereotype expression, for example, and therefore be less vehement in their confrontations. Across two studies, we examined the impact of anti-Asian prejudice type (blatant, subtle, or no prejudice) and bystander race/ethnicity (White or Asian American/Pacific Islander [AAPI]) on prejudice attribution, willingness to confront, actual confrontation, and confrontation vehemence. In the context of a hiring manager justifying rejection of a Chinese applicant, we predicted that blatant prejudice would be detected and confronted most willingly, that subtle prejudice would be confronted more willingly than no prejudice, and that prejudice detection would mediate the relationship between prejudice type and willingness to confront. Further, we expected AAPI bystanders to detect anti-Asian prejudice more readily than White bystanders but to confront at lower rates, with actual confrontations being more vehement following blatant (relative to subtle or no) prejudice. Analyses were conducted using SPSS 27 and the PROCESS v4.1 macro, controlling for potential confounds such as political orientation and individual-level prejudice (expressed or perceived). Results of both studies (n = 142 [Study 1], n = 274 [Study 2]) supported the hypotheses, except that in Study 1, bystanders exposed to subtle prejudice were no more willing to confront than no-prejudice controls. Exploratory analyses indicated that attribution to prejudice was the primary obstacle to confrontation following subtle prejudice, whereas action taking was the primary obstacle following blatant prejudice. This research underscores the need for interventions that increase detection of all forms of anti-Asian prejudice and provide would-be confronters with effective confrontation tools.
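The mediation claim here (prejudice type → attribution to prejudice → willingness to confront) is the kind of model PROCESS tests via a bootstrapped indirect effect a×b. As a hedged illustration of that logic only, the sketch below runs a percentile bootstrap on simulated data; it is not the studies' SPSS analysis, and the variable names are hypothetical.

    # Percentile-bootstrap indirect effect (a*b) for simple mediation.
    # Simulated data; not the studies' SPSS/PROCESS output.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    x = rng.integers(0, 2, n).astype(float)     # 0 = subtle, 1 = blatant prejudice
    m = 0.6 * x + rng.normal(size=n)            # attribution to prejudice
    y = 0.5 * m + 0.1 * x + rng.normal(size=n)  # willingness to confront

    def indirect_effect(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                       # path a: x -> m
        design = np.column_stack([np.ones(len(idx)), ms, xs])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][1]  # path b: m -> y given x
        return a * b

    boots = [indirect_effect(rng.integers(0, n, n)) for _ in range(2000)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 => mediation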
Affiliation(s)
- Tyra Andrus: State University of New York, Brockport, USA
- Folake Olowu: Ferkauf Graduate School of Psychology, New York, NY, USA
9
Weiss D, Liu SX, Mieczkowski H, Hancock JT. Effects of Using Artificial Intelligence on Interpersonal Perceptions of Job Applicants. Cyberpsychology, Behavior and Social Networking 2022;25:163-168. PMID: 35021895; DOI: 10.1089/cyber.2020.0863.
Abstract
Text-based artificial intelligence (AI) systems are increasingly integrated into a host of interpersonal domains. Although decision-making and person perception in hiring and employment have been areas of psychological interest for many years, only recently have scholars begun to investigate the role that AI plays in this context. To better understand the impact of AI in employment-related contexts, we conducted two experiments investigating how applicants' use of AI influences their job opportunities. In our preregistered Study 1, we examined how a prospective job applicant's use of AI, as well as their language status (native or non-native English speaker), influenced participants' impressions of their warmth, competence, social attractiveness, and hiring desirability. In Study 2, we examined how receiving assistance affected interpersonal perceptions, and whether those perceptions changed depending on whether the help was provided by AI or by another human. The results of both experiments suggest that the use of AI technologies can negatively influence perceptions of jobseekers. This negative impact may be grounded in the perception of receiving any type of help, whether from a machine or a person. These studies provide additional evidence for the Computers as Social Actors framework and advance our understanding of AI-Mediated Communication. The results also raise questions about transparency and deception related to AI use in interpersonal contexts.
Affiliation(s)
- Daphne Weiss: Neuroscience and Behavioral Biology, Emory University, Atlanta, Georgia, USA
- Sunny X Liu: Department of Communication, Stanford University, Stanford, California, USA
- Hannah Mieczkowski: Department of Communication, Stanford University, Stanford, California, USA
- Jeffrey T Hancock: Department of Communication, Stanford University, Stanford, California, USA
10
Köbis NC, Doležalová B, Soraperra I. Fooled twice: People cannot detect deepfakes but think they can. iScience 2021;24:103364. PMID: 34820608; PMCID: PMC8602050; DOI: 10.1016/j.isci.2021.103364.
Abstract
Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes for authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a "seeing-is-believing" heuristic for deepfake detection while being overconfident in their (low) detection abilities, a combination that renders them particularly susceptible to being influenced by deepfake content.
Highlights:
- People cannot reliably detect deepfakes
- Raising awareness and financial incentives do not improve people's detection accuracy
- People tend to mistake deepfakes for authentic videos (rather than vice versa)
- People overestimate their own deepfake detection abilities
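The overconfidence result can be pictured as a simple calibration gap, mean self-rated ability minus actual accuracy. The sketch below computes that gap on invented numbers; it illustrates the measure only and is not the paper's analysis.

    # Calibration gap: mean confidence minus actual detection accuracy.
    # All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 210
    is_fake = rng.integers(0, 2, n)           # ground truth per video
    says_fake = rng.random(n) < 0.30          # biased guessing: "fake" said rarely
    accuracy = np.mean(says_fake == is_fake)  # near chance for a biased guesser

    confidence = np.clip(rng.normal(0.75, 0.05, n), 0, 1)  # self-rated P(correct)
    print(f"accuracy = {accuracy:.2f}, mean confidence = {confidence.mean():.2f}, "
          f"gap = {confidence.mean() - accuracy:+.2f}")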
Affiliation(s)
- Nils C Köbis: Center for Humans and Machines, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Barbora Doležalová: Amsterdam School of Economics, University of Amsterdam, 1001 NJ Amsterdam, The Netherlands
- Ivan Soraperra: Amsterdam School of Economics, University of Amsterdam, 1001 NJ Amsterdam, The Netherlands