1. Baqir A, Chen Y, Diaz-Diaz F, Kiyak S, Louf T, Morini V, Pansanella V, Torricelli M, Galeazzi A. Unveiling the drivers of active participation in social media discourse. Sci Rep 2025;15:4906. PMID: 39929878; PMCID: PMC11810998; DOI: 10.1038/s41598-025-88117-x.
Abstract
The emergence of new public forums in the form of online social media has introduced unprecedented challenges to public discourse, including polarization, misinformation, and the rise of echo chambers. Existing research has extensively examined these topics by focusing on the active actions performed by users, without accounting for the share of individuals who consume content without actively interacting with it. In contrast, this study incorporates passive consumption data to investigate the prevalence of active participation in online discourse. We introduce a metric to quantify the share of active engagement and analyze over 17 million pieces of content linked to a polarized Twitter debate to understand how active engagement relates to several features of online environments, such as echo chambers, coordinated behavior, political bias, and source reliability. Our findings reveal a significant proportion of users who consume content without active interactions, underscoring the importance of also considering passive consumption proxies in the analysis of online debates. Furthermore, we found that increased active participation is primarily correlated with the presence of multimedia content and unreliable news sources, rather than with the ideological stance of the content producer, suggesting that active engagement is independent of echo chambers. Our work highlights the significance of passive consumption proxies for quantifying active engagement, which influences platform feed algorithms and, consequently, the development of online discussions. Moreover, it identifies factors that may encourage active participation, which can be used to design more effective communication campaigns.
Affiliation(s)
- Anees Baqir
- Department of Environmental Sciences, Informatics and Statistics, Ca' Foscari University of Venice, Venice, Italy
- Fondazione Bruno Kessler, Trento, Italy
- Northeastern University, London, UK
- Yijing Chen
- Department of Network and Data Science, Central European University, Vienna, Austria
- Fernando Diaz-Diaz
- Institute for Cross-Disciplinary Physics and Complex Systems, IFISC (UIB-CSIC), Palma, Spain
- Sercan Kiyak
- Institute of Media Studies, KU Leuven, Leuven, Belgium
- Thomas Louf
- Institute for Cross-Disciplinary Physics and Complex Systems, IFISC (UIB-CSIC), Palma, Spain
- Fondazione Bruno Kessler, Trento, Italy
- Virginia Morini
- Department of Computer Science, University of Pisa, Pisa, Italy
- Valentina Pansanella
- Istituto di scienza e tecnologia dell'informazione "A. Faedo", Consiglio Nazionale delle Ricerche - CNR, Pisa, Italy
- Alessandro Galeazzi
- Department of Environmental Sciences, Informatics and Statistics, Ca' Foscari University of Venice, Venice, Italy
- Department of Mathematics, University of Padova, Padua, Italy
2. Darendeli A, Sun A, Tay WP. The geography of corporate fake news. PLoS One 2024;19:e0301364. PMID: 38630681; PMCID: PMC11023451; DOI: 10.1371/journal.pone.0301364.
Abstract
Although a rich academic literature examines the use of fake news by foreign actors for political manipulation, there is limited research on potential foreign intervention in capital markets. To address this gap, we construct a comprehensive database of (negative) fake news regarding U.S. firms by scraping prominent fact-checking sites. We identify the accounts that spread the news on Twitter (now X) and use machine-learning techniques to infer the geographic locations of these fake news spreaders. Our analysis reveals that corporate fake news is more likely than corporate non-fake news to be spread by foreign accounts. At the country level, corporate fake news is more likely to originate from African and Middle Eastern countries and tends to increase during periods of high geopolitical tension. At the firm level, firms operating in uncertain information environments and strategic industries are more likely to be targeted by foreign accounts. Overall, our findings provide initial evidence of foreign-originating misinformation in capital markets and thus have important policy implications.
Affiliation(s)
- Alper Darendeli
- Nanyang Business School, Division of Accounting, Nanyang Technological University, Singapore, Singapore
- Aixin Sun
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- Wee Peng Tay
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
3. Pierri F, Luceri L, Chen E, Ferrara E. How does Twitter account moderation work? Dynamics of account creation and suspension on Twitter during major geopolitical events. EPJ Data Science 2023;12:43. PMID: 37810187; PMCID: PMC10550859; DOI: 10.1140/epjds/s13688-023-00420-7.
Abstract
Social media moderation policies are often at the center of public debate, and their implementation and enactment are sometimes surrounded by a veil of mystery. Unsurprisingly, due to limited platform transparency and data access, relatively little research has been devoted to characterizing moderation dynamics, especially in the context of controversial events and the platform activity associated with them. Here, we study the dynamics of account creation and suspension on Twitter during two global political events: Russia's invasion of Ukraine and the 2022 French Presidential election. Leveraging a large-scale dataset of 270M tweets shared by 16M users in multiple languages over several months, we identify peaks of suspicious account creation and suspension, and we characterize behaviors that more frequently lead to account suspension. We show how large numbers of accounts get suspended within days of their creation. Suspended accounts tend to mostly interact with legitimate users, as opposed to other suspicious accounts, making unwarranted and excessive use of reply and mention features, and sharing large amounts of spam and harmful content. While we are only able to speculate about the specific causes leading to a given account suspension, our findings contribute to shedding light on patterns of platform abuse and subsequent moderation during major events.
Affiliation(s)
- Francesco Pierri
- Information Sciences Institute, University of Southern California, Los Angeles, USA
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy
- Luca Luceri
- Information Sciences Institute, University of Southern California, Los Angeles, USA
- Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland
- Emily Chen
- Information Sciences Institute, University of Southern California, Los Angeles, USA
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, USA
- Emilio Ferrara
- Information Sciences Institute, University of Southern California, Los Angeles, USA
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, USA
- Annenberg School of Communication and Journalism, University of Southern California, Los Angeles, USA
4. A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nat Hum Behav 2023;7:74-101. PMID: 36344657; PMCID: PMC9883171; DOI: 10.1038/s41562-022-01460-1.
Abstract
One of today's most controversial and consequential issues is whether the global uptake of digital media is causally related to a decline in democracy. We conducted a systematic review of causal and correlational evidence (N = 496 articles) on the link between digital media use and different political variables. Some associations, such as increasing political participation and information consumption, are likely to be beneficial for democracy and were often observed in autocracies and emerging democracies. Other associations, such as declining political trust, increasing populism and growing polarization, are likely to be detrimental to democracy and were more pronounced in established democracies. While the impact of digital media on political systems depends on the specific variable and system in question, several variables show clear directions of associations. The evidence calls for research efforts and vigilance by governments and civil societies to better understand, design and regulate the interplay of digital media and democracy.
5. Wilczek B, Thurman N. Contagious accuracy norm violation in political journalism: A cross-national investigation of how news media publish inaccurate political information. Journalism 2022;23:2271-2288. PMID: 36397804; PMCID: PMC9660276; DOI: 10.1177/14648849211032081.
Abstract
This study introduces social norm theory to mis- and disinformation research and investigates whether, how and under what conditions broadsheets' accuracy norm violation in political journalism becomes contagious, shifting other news media in a media market towards increasingly violating the accuracy norm in political journalism as well. Accuracy norm violation refers to the publication of inaccurate information. More specifically, the study compares Swiss and UK media markets and analyses Swiss and UK press councils' rulings between 2000 and 2019 that upheld complaints about accuracy norm violations in political journalism. The findings show that broadsheets increasingly violate the accuracy norm as election campaigns approach election dates, and thereby drive other news media in the market to increasingly violate the accuracy norm as well. However, this holds only for the UK media market, not for the Swiss one. The findings therefore indicate that the higher expected benefits of accuracy norm violation in media markets characterised by stronger competition outweigh the higher expected costs created by stronger press councils' sanctions, thereby facilitating contagious accuracy norm violation in political journalism during election campaigns.
Affiliation(s)
- Bartosz Wilczek
- Department of Media and Communication, Ludwig Maximilian University of Munich, Oettingenstraße 67, Munich 80538, Germany
6. Cinelli M, Etta G, Avalle M, Quattrociocchi A, Di Marco N, Valensise C, Galeazzi A, Quattrociocchi W. Conspiracy theories and social media platforms. Curr Opin Psychol 2022;47:101407. PMID: 35868169; DOI: 10.1016/j.copsyc.2022.101407.
Abstract
Conspiracy theories proliferate online. We provide an overview of information consumption patterns related to conspiracy content on four mainstream social media platforms (Facebook, Twitter, YouTube, and Reddit), with attention to more niche ones as well. Opinion polarisation and echo chambers appear as pivotal elements of communication around conspiracy theories. A relevant role may also be played by the content moderation policies enforced by each social media platform. Banning content or users from a social media platform could lead to a level of user segregation that goes beyond echo chambers and extends to the entire social media space, up to the formation of 'echo platforms'. The emergence of echo platforms is a new online phenomenon that needs to be investigated, as it could foster many of the dangerous phenomena we observe online, including the spreading of conspiracy theories.
Affiliation(s)
- Matteo Cinelli
- Sapienza University of Rome, Department of Computer Science, Viale Regina Elena 295, 00100 Rome, Italy
- Gabriele Etta
- Sapienza University of Rome, Department of Computer Science, Viale Regina Elena 295, 00100 Rome, Italy
- Michele Avalle
- Sapienza University of Rome, Department of Computer Science, Viale Regina Elena 295, 00100 Rome, Italy
- Alessandro Quattrociocchi
- Sapienza University of Rome, Department of Computer Science, Viale Regina Elena 295, 00100 Rome, Italy
- Niccolò Di Marco
- University of Florence, Department of Mathematics and Computer Science, Viale Giovanni Battista Morgagni 67/a, 50134 Florence, Italy
- Carlo Valensise
- Enrico Fermi Research Center, Piazza del Viminale 1, 00184 Rome, Italy
- Alessandro Galeazzi
- Ca' Foscari University of Venice, Department of Environmental Sciences, Informatics and Statistics, Via Torino 155, 30172 Mestre, Italy
- Walter Quattrociocchi
- Sapienza University of Rome, Department of Computer Science, Viale Regina Elena 295, 00100 Rome, Italy
7. Evkoski B, Pelicon A, Mozetič I, Ljubešić N, Kralj Novak P. Retweet communities reveal the main sources of hate speech. PLoS One 2022;17:e0265602. PMID: 35298556; PMCID: PMC8929563; DOI: 10.1371/journal.pone.0265602.
Abstract
We address the challenging problem of identifying the main sources of hate speech on Twitter. On one hand, we carefully annotate a large set of tweets for hate speech and deploy advanced deep learning to produce high-quality hate speech classification models. On the other hand, we create retweet networks, detect communities and monitor their evolution through time. This combined approach is applied to three years of Slovenian Twitter data. We report a number of interesting results. Hate speech is dominated by offensive tweets related to political and ideological issues. The share of unacceptable tweets increases moderately with time, from an initial 20% to 30% by the end of 2020. Unacceptable tweets are retweeted significantly more often than acceptable tweets. About 60% of unacceptable tweets are produced by a single right-wing community of only moderate size. Institutional Twitter accounts and media accounts post significantly fewer unacceptable tweets than individual accounts. In fact, the main sources of unacceptable tweets are anonymous accounts, and accounts that were suspended or closed during the years 2018-2020.
Affiliation(s)
- Bojan Evkoski
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Jozef Stefan International Postgraduate School, Ljubljana, Slovenia
- Andraž Pelicon
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Jozef Stefan International Postgraduate School, Ljubljana, Slovenia
- Igor Mozetič
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Nikola Ljubešić
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Faculty of Information and Communication Sciences, University of Ljubljana, Ljubljana, Slovenia
- Petra Kralj Novak
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
8.

9. Evkoski B, Ljubešić N, Pelicon A, Mozetič I, Kralj Novak P. Evolution of topics and hate speech in retweet network communities. Applied Network Science 2021;6:96. PMID: 34957317; PMCID: PMC8686097; DOI: 10.1007/s41109-021-00439-7.
Abstract
Twitter data exhibits several dimensions worth exploring: a network dimension in the form of links between the users, textual content of the tweets posted, and a temporal dimension as the time-stamped sequence of tweets and their retweets. In the paper, we combine analyses along all three dimensions: temporal evolution of retweet networks and communities, contents in terms of hate speech, and discussion topics. We apply the methods to a comprehensive set of all Slovenian tweets collected in the years 2018-2020. We find that politics and ideology are the prevailing topics despite the emergence of the Covid-19 pandemic. These two topics also attract the highest proportion of unacceptable tweets. Through time, the membership of retweet communities changes, but their topic distribution remains remarkably stable. Some retweet communities are strongly linked by external retweet influence and form super-communities. The super-community membership closely corresponds to the topic distribution: communities from the same super-community are very similar by the topic distribution, and communities from different super-communities are quite different in terms of discussion topics. However, we also find that even communities from the same super-community differ considerably in the proportion of unacceptable tweets they post.
Affiliation(s)
- Bojan Evkoski
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Jozef Stefan International Postgraduate School, Ljubljana, Slovenia
- Nikola Ljubešić
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Faculty of Information and Communication Sciences, University of Ljubljana, Ljubljana, Slovenia
- Andraž Pelicon
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Jozef Stefan International Postgraduate School, Ljubljana, Slovenia
- Igor Mozetič
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Petra Kralj Novak
- Department of Knowledge Technologies, Jozef Stefan Institute, Ljubljana, Slovenia
- Central European University, Vienna, Austria
10.
Abstract
The relationship between a subject's ideological persuasion and the belief in and spread of fake news is the object of our study. Departing from a left- vs. right-wing framework, a questionnaire sought to position subjects on this political-ideological spectrum and asked them to evaluate five pro-left and five pro-right fake and real news items, totaling 20 informational products. The results show that the belief in and dissemination of (fake) news are related to the political ideology of the participants, with right-wing subjects exhibiting a greater tendency to accept fake news, regardless of whether it is pro-left or pro-right. These findings contradict the confirmation bias and may suggest that factors such as age, the level of digital news literacy and psychological aspects play a greater role in the judgment of fake news. Older and less educated respondents indicated they believed and would disseminate fake news at greater rates. Regardless of the ideology they favor, the Portuguese attributed higher credibility to the sample's real news, a fact that can be meaningful regarding the fight against disinformation in Portugal and elsewhere.
11.
Abstract
The rise of bots and their influence on social networks is a hot topic that has aroused the interest of many researchers. Despite the efforts to detect social bots, it is still difficult to distinguish them from legitimate users. Here, we propose a simple yet effective semi-supervised method that allows distinguishing between bots and legitimate users with high accuracy. The method learns a joint representation of social connections and interactions between users by leveraging graph-based representation learning. Then, on the proximity graph derived from user embeddings, a sample of bots is used as seeds for a label propagation algorithm. We demonstrate that when the label propagation is done according to pairwise account proximity, our method achieves F1 = 0.93, whereas other state-of-the-art techniques achieve F1 ≤ 0.87. By applying our method to a large dataset of retweets, we uncover the presence of different clusters of bots in the network of Twitter interactions. Interestingly, such clusters feature different degrees of integration with legitimate users. By analyzing the interactions produced by the different clusters of bots, our results suggest that a significant group of users was systematically exposed to content produced by bots and to interactions with bots, indicating the presence of a selective exposure phenomenon.
Affiliation(s)
- Marcelo Mendoza
- Department of Informatics, Universidad Técnica Federico Santa María, Valparaíso, Chile
- Stefano Cresci
- Institute of Informatics and Telematics, IIT-CNR, Pisa, Italy
12. Ferrara E, Cresci S, Luceri L. Misinformation, manipulation, and abuse on social media in the era of COVID-19. Journal of Computational Social Science 2020;3:271-277. PMID: 33251373; PMCID: PMC7680254; DOI: 10.1007/s42001-020-00094-5.
Abstract
The COVID-19 pandemic represented an unprecedented setting for the spread of online misinformation, manipulation, and abuse, with the potential to cause dramatic real-world consequences. The aim of this special issue was to collect contributions investigating issues such as the emergence of infodemics, misinformation, conspiracy theories, automation, and online harassment at the onset of the coronavirus outbreak. Articles in this collection adopt a diverse range of methods and techniques, and focus on the study of the narratives that fueled conspiracy theories, on the diffusion patterns of COVID-19 misinformation, on the global news sentiment, on hate speech and social bot interference, and on multimodal Chinese propaganda. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of these issues. In turn, these crucial endeavors might anticipate a growing trend of studies where diverse theories, models, and techniques will be combined to tackle the different aspects of online misinformation, manipulation, and abuse.
Affiliation(s)
- Emilio Ferrara
- University of Southern California, Los Angeles, CA 90007, USA
- Stefano Cresci
- Institute of Informatics and Telematics, National Research Council (IIT-CNR), 56124 Pisa, Italy
- Luca Luceri
- University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Manno, Switzerland
13. Wilczek B. Misinformation and herd behavior in media markets: A cross-national investigation of how tabloids' attention to misinformation drives broadsheets' attention to misinformation in political and business journalism. PLoS One 2020;15:e0241389. PMID: 33175883; PMCID: PMC7657564; DOI: 10.1371/journal.pone.0241389.
Abstract
This study develops and tests a theoretical framework, which draws on herd behavior literature and explains how and under what conditions tabloids' attention to misinformation drives broadsheets' attention to misinformation. More specifically, the study analyzes all cases of political and business misinformation in Switzerland and the U.K. between 2002 and 2018, selected on the basis of the corresponding Swiss and U.K. press councils' rulings (N = 114). The findings show that during amplifying events (i.e., election campaigns and economic downturns) tabloids allocate more attention to political and business misinformation, which, in turn, drives broadsheets to allocate more attention to the misinformation as well, especially if the misinformation serves broadsheets' ideological goals. Moreover, the findings show differences between Swiss and U.K. media markets only in the case of business misinformation and suggest that the attention allocation process depends in particular on the strength of the amplifying event in a media market. Thereby, this study contributes to the understanding of how and under what conditions misinformation spreads in media markets.
Affiliation(s)
- Bartosz Wilczek
- Faculty of Communication, Culture and Society, Università della Svizzera Italiana, Lugano, Switzerland
14. Lorenz-Spreen P, Lewandowsky S, Sunstein CR, Hertwig R. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nat Hum Behav 2020;4:1102-1109. PMID: 32541771; DOI: 10.1038/s41562-020-0889-7.
Abstract
Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions, nudging and boosting, that enlist these cues to redesign online environments for informed and autonomous choice.
Affiliation(s)
- Philipp Lorenz-Spreen
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Stephan Lewandowsky
- School of Psychological Science and Cabot Institute, University of Bristol, Bristol, UK
- School of Psychological Science, University of Western Australia, Perth, Australia
- Ralph Hertwig
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany