1. Klonsky ED. Campbell's Law Explains the Replication Crisis: Pre-Registration Badges Are History Repeating. Assessment 2024:10731911241253430. [PMID: 38783515] [DOI: 10.1177/10731911241253430]
Abstract
Campbell's Law explains the replication crisis. In brief, useful tools such as hypotheses, p-values, and multi-study designs came to be viewed as indicators of strong science, and thus goals in and of themselves. Consequently, their use became distorted in unanticipated ways (e.g., hypothesizing after results were known [HARKing], p-hacking, misuses of researcher degrees of freedom), and fragile findings proliferated. Pre-registration mandates are positioned as an antidote. However, I argue that such efforts, perhaps best exemplified by pre-registration badges (PRBs), are history repeating: Another useful tool has been converted into an indicator of strong science and a goal in and of itself. This, too, will distort its use and harm psychological science in unanticipated ways. For example, there is already evidence that papers seeking PRBs routinely violate the rules and spirit of pre-registration. I suggest that pre-registration mandates will (a) discourage optimal scientific practice, (b) exacerbate the file drawer problem, (c) encourage pre-registering after results are known (PRARKing), and (d) create false trust in fragile findings. I conclude that multiple design features can help support replicability (e.g., adequate sample size, valid measurement, robustness checks, pre-registration), none should be canonized, replication is the only arbiter of replicability, and the most important solution is sociocultural: to foster a field that reveres and reinforces robust science, just as we once revered and reinforced flashy but fragile science.
2. Fournier V, Varet F. Conspiracy beliefs and intention to use conventional, complementary and alternative medicines: Two vignette studies. Br J Health Psychol 2024; 29:333-350. [PMID: 37880094] [DOI: 10.1111/bjhp.12702]
Abstract
OBJECTIVE Conspiracy beliefs (CBs) can have substantial consequences on health behaviours by influencing both conventional and non-conventional medicine uptake. They can target powerful groups (i.e. upward CBs) or powerless groups (i.e. downward CBs). Considering their repercussions in oncology, it appears useful to understand how CBs are related to the intentions to use conventional, complementary and alternative medicines (CAM). DESIGN AND METHODS This paper includes two pre-registered online correlational studies on a general French population (Study 1: N = 248, recruited on social media, M_age = 40.07, SD_age = 14.78; 205 women, 41 men and 2 non-binary; Study 2: N = 313, recruited on social media and Prolific, M_age = 28.91, SD_age = 9.60; 154 women, 149 men and 10 non-binary). We investigated the links between generic and chemotherapy-related CBs and intentions to use conventional, complementary and alternative medicines. Study 2 consisted of a conceptual replication of Study 1, considering the orientation of CBs. RESULTS Generic CBs and chemotherapy-related CBs appear strongly and positively correlated, negatively correlated with intentions to take conventional medicine and positively with intentions to take CAM. The link between generic CBs and medication intention is fully mediated by chemotherapy-related CBs. When distinguished, upward CBs are a stronger predictor of chemotherapy-related CBs than downward CBs. CONCLUSIONS The findings suggest that intentions to use medicine are strongly associated with CBs. This has several important implications for further research and practice, notably on the presence and effects of CBs on medication behaviours in cancer patients.
Affiliation(s)
- Valentyn Fournier
- CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, Lille, France
- ULR 4072 - PSITEC - Psychologie: Interactions, Temps, Emotions, Cognition, Université de Lille, Lille, France
- Florent Varet
- Anthropo-Lab, ETHICS EA7446, Lille Catholic University, Lille, France
3. Mathur MB. P-hacking in meta-analyses: A formalization and new meta-analytic methods. Res Synth Methods 2024; 15:483-499. [PMID: 38273211] [DOI: 10.1002/jrsm.1701]
Abstract
As traditionally conceived, publication bias arises from selection operating on a collection of individually unbiased estimates. A canonical form of such selection across studies (SAS) is the preferential publication of affirmative studies (i.e., those with significant, positive estimates) versus nonaffirmative studies (i.e., those with nonsignificant or negative estimates). However, meta-analyses can also be compromised by selection within studies (SWS), in which investigators "p-hack" results within their study to obtain an affirmative estimate. Published estimates can then be biased even conditional on affirmative status, which compromises the performance of existing methods that only consider SAS. We propose two new analysis methods that accommodate joint SAS and SWS; both analyze only the published nonaffirmative estimates. First, we propose estimating the underlying meta-analytic mean by fitting a "right-truncated meta-analysis" (RTMA) to the published nonaffirmative estimates. This method essentially imputes the entire underlying distribution of population effects. Second, we propose conducting a standard meta-analysis of only the nonaffirmative studies (MAN); this estimate is conservative (negatively biased) under weakened assumptions. We provide an R package (phacking) and website (metabias.io). Our proposed methods supplement existing methods by assessing the robustness of meta-analyses to joint SAS and SWS.
Affiliation(s)
- Maya B Mathur
- Quantitative Sciences Unit, Department of Medicine, Stanford University, Stanford, California, USA
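The MAN estimator described in this abstract is simple enough to sketch. The snippet below is an illustrative Python re-implementation under a fixed-effect model, not the authors' phacking R package, and the study estimates are invented for demonstration.

```python
import math

Z_CRIT = 1.959963984540054  # two-sided 5% critical value of the standard normal

def is_affirmative(estimate, se):
    """Affirmative = statistically significant AND positive."""
    return estimate > 0 and abs(estimate / se) > Z_CRIT

def man_estimate(estimates, ses):
    """Fixed-effect inverse-variance mean of only the nonaffirmative studies.

    Under joint selection across and within studies, this pooled mean is
    conservative (negatively biased) for the underlying effect.
    """
    kept = [(y, s) for y, s in zip(estimates, ses) if not is_affirmative(y, s)]
    if not kept:
        raise ValueError("no nonaffirmative studies to pool")
    weights = [1.0 / s ** 2 for _, s in kept]
    mean = sum(w * y for w, (y, _) in zip(weights, kept)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

# Hypothetical meta-analysis: three affirmative and three nonaffirmative studies.
ys  = [0.45, 0.52, 0.08, -0.05, 0.12, 0.61]
ses = [0.10, 0.12, 0.15, 0.14, 0.16, 0.20]
mean, se = man_estimate(ys, ses)  # pools only the three nonaffirmative studies
```

The RTMA estimator additionally models the right-truncation of the published nonaffirmative distribution; the fixed-effect pooling shown here is only the simpler, conservative companion method.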
4. Sandoval-Lentisco A, López-Nicolás R, Tortajada M, López-López JA, Sánchez-Meca J. Transparency in Cognitive Training Meta-analyses: A Meta-review. Neuropsychol Rev 2024:10.1007/s11065-024-09638-2. [PMID: 38639881] [DOI: 10.1007/s11065-024-09638-2]
Abstract
Meta-analyses often present flexibility regarding their inclusion criteria, outcomes of interest, statistical analyses, and assessments of the primary studies. For this reason, it is necessary to transparently report all the information that could impact the results. In this meta-review, we aimed to assess the transparency of meta-analyses that examined the benefits of cognitive training, given the ongoing controversy that exists in this field. Ninety-seven meta-analytic reviews were included, which examined a wide range of populations with different clinical conditions and ages. Regarding the reporting, information about the search of the studies, screening procedure, or data collection was detailed by most reviews. However, authors usually failed to report other aspects such as the specific meta-analytic parameters, the formula used to compute the effect sizes, or the data from primary studies that were used to compute the effect sizes. Although some of these practices have improved over the years, others remained the same. Moreover, examining the eligibility criteria of the reviews revealed a great heterogeneity in aspects such as the training duration, age cut-offs, or study designs that were considered. Preregistered meta-analyses often poorly specified how they would deal with the multiplicity of data or assess publication bias in their protocols, and some contained non-disclosed deviations in their eligibility criteria or outcomes of interest. The findings shown here, although they do not question the benefits of cognitive training, illustrate important aspects that future reviews must consider.
Affiliation(s)
- Rubén López-Nicolás
- Department of Basic Psychology and Methodology, University of Murcia, Murcia, Spain
- Miriam Tortajada
- Department of Basic Psychology and Methodology, University of Murcia, Murcia, Spain
- Julio Sánchez-Meca
- Department of Basic Psychology and Methodology, University of Murcia, Murcia, Spain
5. Prendergast G, Sindi A, Munro KJ. Pre-registration of audiology research studies: are actions following good intentions? Int J Audiol 2024; 63:226-228. [PMID: 36734861] [DOI: 10.1080/14992027.2023.2171909]
Affiliation(s)
- Garreth Prendergast
- The Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, UK
- Aala Sindi
- The Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, UK
- Department of Speech-Language Pathology and Audiology, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Saudi Arabia
- Kevin J Munro
- The Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, UK
6. Varet F, Adam-Troian J, Bonetto E, Akinyemi A, Lantian A, Voisin D, Delouvée S. Experimental manipulation of uncanny feeling does not increase adherence to conspiracy theories. Scand J Psychol 2024; 65:144-156. [PMID: 37667647] [DOI: 10.1111/sjop.12962]
Abstract
Research over the past decade has shown that endorsement of conspiracy theories (CTs) is shaped by motivated cognition processes. Accordingly, CTs are theorized to stem from compensatory processes, as individuals attempt to cope with existential threats (i.e., uncertainty, loss of control). Based on the meaning maintenance model, we investigated whether this compensatory effect could follow from epistemic threats in domains unrelated to CTs in the form of uncanniness. Feelings of uncanniness were experimentally manipulated through exposure to absurdist art and literature in a set of five studies, followed by a mini meta-analysis (N_total = 1,041). We conducted a final, preregistered sixth study (N = 266) manipulating uncanniness through autobiographical recall. No robust evidence for a compensatory effect was found. We discussed methodological and conceptual limitations of the meaning maintenance model, as well as boundary conditions under which conspiracy theories could have a compensatory function to deal with threats.
Affiliation(s)
- Florent Varet
- Anthropo-Lab, ETHICS EA 7446, Université Catholique de Lille, Lille, France
- Eric Bonetto
- Aix Marseille University, PSYCLE, Aix-en-Provence, France
- InCIAM, Aix-en-Provence, France
- Alexis Akinyemi
- Laboratoire Parisien de Psychologie Sociale, EA 4386 (équipe PS2C), Nanterre, France
- Anthony Lantian
- Département de Psychologie, Laboratoire Parisien de Psychologie Sociale, UPL, Univ Paris Nanterre, Nanterre, France
- Dimitri Voisin
- C2S Laboratory, Department of Psychology, University of Reims Champagne-Ardenne, Reims, France
- Sylvain Delouvée
- Department of Psychology, LP3C-EA 1285, University Rennes, Rennes, France
7. Maier M, Bartoš F, Raihani N, Shanks DR, Stanley TD, Wagenmakers EJ, Harris AJL. Exploring open science practices in behavioural public policy research. R Soc Open Sci 2024; 11:231486. [PMID: 38384774] [PMCID: PMC10878814] [DOI: 10.1098/rsos.231486]
Abstract
In their book 'Nudge: Improving Decisions About Health, Wealth and Happiness', Thaler & Sunstein (2009) argue that choice architectures are promising public policy interventions. This research programme motivated the creation of 'nudge units', government agencies which aim to apply insights from behavioural science to improve public policy. We closely examine a meta-analysis of the evidence gathered by two of the largest and most influential nudge units (DellaVigna & Linos, 2022, Econometrica 90, 81-116, doi:10.3982/ECTA18709) and use statistical techniques to detect reporting biases. Our analysis shows evidence suggestive of selective reporting. We additionally evaluate the public pre-analysis plans from one of the two nudge units (Office of Evaluation Sciences). We identify several instances of excellent practice; however, we also find that the analysis plans and reporting often lack sufficient detail to evaluate (unintentional) reporting biases. We highlight several improvements that would enhance the effectiveness of the pre-analysis plans and reports as a means to combat reporting biases. Our findings and suggestions can further improve the evidence base for policy decisions.
Affiliation(s)
- Maximilian Maier
- Department of Experimental Psychology, University College London, London, UK
- František Bartoš
- Department of Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands
- Nichola Raihani
- Department of Experimental Psychology, University College London, London, UK
- David R. Shanks
- Department of Experimental Psychology, University College London, London, UK
- T. D. Stanley
- Deakin Laboratory for the Meta-Analysis of Research (DeLMAR), Deakin University, Burwood, Australia
- Department of Economics, Deakin University, Burwood, Australia
- Eric-Jan Wagenmakers
- Department of Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands
- Adam J. L. Harris
- Department of Experimental Psychology, University College London, London, UK
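The abstract does not name the specific techniques used to detect reporting biases, so as a hypothetical illustration, here is one standard tool for this job, the caliper test: compare how many reported test statistics fall just above versus just below the significance threshold. Absent selective reporting the two bins should be about equally full. The z-values below are invented.

```python
import math

def caliper_test(z_values, threshold=1.96, width=0.5):
    """Count |z| statistics just above vs. just below the significance
    threshold, and return a one-sided exact binomial p-value for a surplus
    in the upper bin (null: each statistic equally likely in either bin)."""
    above = sum(1 for z in z_values if threshold < abs(z) <= threshold + width)
    below = sum(1 for z in z_values if threshold - width < abs(z) <= threshold)
    n = above + below
    p = sum(math.comb(n, k) for k in range(above, n + 1)) / 2 ** n
    return above, below, p

# Invented z-statistics exhibiting a pile-up just past 1.96.
zs = [2.01, 2.10, 1.97, 2.30, 2.05, 1.99, 2.21, 1.80, 2.15, 2.02]
above, below, p = caliper_test(zs)  # 9 above vs. 1 below: a suspicious surplus
```

A significant surplus just above the threshold is consistent with selective reporting, though this test alone cannot distinguish p-hacking within studies from selective publication across them.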
8. Grimes DR. Region of Attainable Redaction, an extension of Ellipse of Insignificance analysis for gauging impacts of data redaction in dichotomous outcome trials. eLife 2024; 13:e93050. [PMID: 38284745] [PMCID: PMC10871715] [DOI: 10.7554/elife.93050]
Abstract
In biomedical science, it is a reality that many published results do not withstand deeper investigation, and there is growing concern over a replicability crisis in science. Recently, Ellipse of Insignificance (EOI) analysis was introduced as a tool to allow researchers to gauge the robustness of reported results in dichotomous outcome design trials, giving precise deterministic values for the degree of miscoding between events and non-events tolerable simultaneously in both control and experimental arms (Grimes, 2022). While this is useful for situations where potential miscoding might transpire, it does not account for situations where apparently significant findings might result from accidental or deliberate data redaction in either the control or experimental arms of an experiment, or from missing data or systematic redaction. To address these scenarios, we introduce Region of Attainable Redaction (ROAR), a tool that extends EOI analysis to account for situations of potential data redaction. This produces a bounded cubic curve rather than an ellipse, and we outline how this can be used to identify potential redaction through an approach analogous to EOI. Applications are illustrated, and source code, including a web-based implementation that performs EOI and ROAR analysis in tandem for dichotomous outcome trials, is provided.
Affiliation(s)
- David Robert Grimes
- School of Medicine, Trinity College Dublin, Dublin, Ireland
- School of Physical Sciences, Dublin City University, Dublin, Ireland
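EOI and ROAR derive their bounds analytically; as a rough numerical analogue of the question ROAR asks — how little redacted data would suffice to overturn an apparently significant dichotomous-outcome result — here is a brute-force Python sketch for a single redaction direction (restoring hypothetically redacted control-arm events). The trial counts are invented, and this is not the author's released code.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (events / non-events in the experimental and control arms)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def min_events_to_insignificance(a, b, c, d, crit=3.841, limit=10_000):
    """Smallest number of control-arm events that, had they been redacted
    and were restored to the table, would drop the chi-square statistic
    below the 5% critical value (df = 1). Returns None past `limit`."""
    for k in range(limit + 1):
        if chi2_2x2(a, b, c + k, d) < crit:
            return k
    return None

# Invented trial: 30/100 events in the experimental arm, 15/100 in control.
k = min_events_to_insignificance(30, 70, 15, 85)  # how fragile is p < 0.05?
```

Here only a handful of unreported control-arm events would erase significance; that fragility is what EOI/ROAR quantify exactly, and in both arms simultaneously.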
9. Hardwicke TE, Vazire S. Transparency Is Now the Default at Psychological Science. Psychol Sci 2023:9567976231221573. [PMID: 38150599] [DOI: 10.1177/09567976231221573]
10. Thibault RT, Amaral OB, Argolo F, Bandrowski AE, Davidson AR, Drude NI. Open Science 2.0: Towards a truly collaborative research ecosystem. PLoS Biol 2023; 21:e3002362. [PMID: 37856538] [PMCID: PMC10617723] [DOI: 10.1371/journal.pbio.3002362]
Abstract
Conversations about open science have reached the mainstream, yet many open science practices such as data sharing remain uncommon. Our efforts towards openness therefore need to increase in scale and aim for a more ambitious target. We need an ecosystem not only where research outputs are openly shared but also in which transparency permeates the research process from the start and lends itself to more rigorous and collaborative research. To support this vision, this Essay provides an overview of a selection of open science initiatives from the past 2 decades, focusing on methods transparency, scholarly communication, team science, and research culture, and speculates about what the future of open science could look like. It then draws on these examples to provide recommendations for how funders, institutions, journals, regulators, and other stakeholders can create an environment that is ripe for improvement.
Affiliation(s)
- Robert T. Thibault
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America
- Olavo B. Amaral
- Institute of Medical Biochemistry Leopoldo de Meis, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
- Anita E. Bandrowski
- FAIR Data Informatics Lab, Department of Neuroscience, UCSD, San Diego, California, United States of America
- SciCrunch Inc., San Diego, California, United States of America
- Alexandra R. Davidson
- Institute for Evidence-Based Health Care, Bond University, Robina, Australia
- Faculty of Health Science and Medicine, Bond University, Robina, Australia
- Natascha I. Drude
- Berlin Institute of Health (BIH) at Charité, BIH QUEST Center for Responsible Research, Berlin, Germany
11. van Elk M, Fried EI. History repeating: guidelines to address common problems in psychedelic science. Ther Adv Psychopharmacol 2023; 13:20451253231198466. [PMID: 37766730] [PMCID: PMC10521293] [DOI: 10.1177/20451253231198466]
Abstract
Research in the last decade has expressed considerable optimism about the clinical potential of psychedelics for the treatment of mental disorders. This optimism is reflected in an increase in research papers, investments by pharmaceutical companies, patents, media coverage, as well as political and legislative changes. However, psychedelic science is facing serious challenges that threaten the validity of core findings and raise doubt regarding clinical efficacy and safety. In this paper, we introduce the 10 most pressing challenges, grouped into easy, moderate, and hard problems. We show how these problems threaten internal validity (treatment effects are due to factors unrelated to the treatment), external validity (lack of generalizability), construct validity (unclear working mechanism), or statistical conclusion validity (conclusions do not follow from the data and methods). These problems tend to co-occur in psychedelic studies, limiting conclusions that can be drawn about the safety and efficacy of psychedelic therapy. We provide a roadmap for tackling these challenges and share a checklist that researchers, journalists, funders, policymakers, and other stakeholders can use to assess the quality of psychedelic science. Addressing today's problems is necessary to find out whether the optimism regarding the therapeutic potential of psychedelics has been warranted and to avoid history repeating itself.
Affiliation(s)
- Michiel van Elk
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, PO Box 9555, Leiden 2300 RB, The Netherlands
- Eiko I. Fried
- Clinical Psychology Unit, Institute of Psychology, Leiden University, Leiden, The Netherlands
12. Ferguson J, Littman R, Christensen G, Paluck EL, Swanson N, Wang Z, Miguel E, Birke D, Pezzuto JH. Survey of open science practices and attitudes in the social sciences. Nat Commun 2023; 14:5401. [PMID: 37669942] [PMCID: PMC10480148] [DOI: 10.1038/s41467-023-41111-1]
Abstract
Open science practices such as posting data or code and pre-registering analyses are increasingly prescribed and debated in the applied sciences, but the actual popularity and lifetime usage of these practices remain unknown. This study provides an assessment of attitudes toward, use of, and perceived norms regarding open science practices from a sample of authors published in top-10 (most-cited) journals and PhD students in top-20 ranked North American departments from four major social science disciplines: economics, political science, psychology, and sociology. We observe largely favorable private attitudes toward widespread lifetime usage (meaning that a researcher has used a particular practice at least once) of open science practices. As of 2020, nearly 90% of scholars had ever used at least one such practice. Support for posting data or code online is higher (88% overall support and nearly at the ceiling in some fields) than support for pre-registration (58% overall). With respect to norms, there is evidence that the scholars in our sample appear to underestimate the use of open science practices in their field. We also document that the reported lifetime prevalence of open science practices increased from 49% in 2010 to 87% a decade later.
Affiliation(s)
- Joel Ferguson
- University of California, Berkeley, Department of Agricultural and Resource Economics, Berkeley, California, USA
- Rebecca Littman
- University of Illinois Chicago, Department of Psychology, Chicago, Illinois, USA
- Garret Christensen
- Federal Deposit Insurance Corporation, Washington, District of Columbia, USA
- Nicholas Swanson
- University of California, Berkeley, Department of Economics, Berkeley, California, USA
- Zenan Wang
- University of California, Berkeley, Department of Economics, Berkeley, California, USA
- Edward Miguel
- University of California, Berkeley, Department of Economics, Berkeley, California, USA
- David Birke
- University of California, Berkeley, Department of Economics, Berkeley, California, USA
- John-Henry Pezzuto
- University of California, San Diego, Rady School of Management, La Jolla, California, USA
13. Keener SK, Kepes S, Torka AK. The trustworthiness of the cumulative knowledge in industrial/organizational psychology: The current state of affairs and a path forward. Acta Psychol (Amst) 2023; 239:104005. [PMID: 37625919] [DOI: 10.1016/j.actpsy.2023.104005]
Abstract
The goal of industrial/organizational (IO) psychology is to build and organize trustworthy knowledge about people-related phenomena in the workplace. Unfortunately, as with other scientific disciplines, our discipline may be experiencing a "crisis of confidence" stemming from the lack of reproducibility and replicability of many of our field's research findings, which would suggest that much of our research may be untrustworthy. If a scientific discipline's research is deemed untrustworthy, it can have dire consequences, including the withdrawal of funding for future research. In this focal article, we review the current state of reproducibility and replicability in IO psychology and related fields. As part of this review, we discuss factors that make it less likely that research findings will be trustworthy, including the prevalence of scientific misconduct, questionable research practices (QRPs), and errors. We then identify some root causes of these issues and provide several potential remedies. In particular, we highlight the need for improved research methods and statistics training as well as a re-alignment of the incentive structure in academia. To accomplish this, we advocate for changes in the reward structure, improvements to the peer review process, and the implementation of open science practices. Overall, addressing the current "crisis of confidence" in IO psychology requires individual researchers, academic institutions, and publishers to embrace system-wide change.
Affiliation(s)
- Sheila K Keener
- Department of Management, Old Dominion University, Norfolk, VA, United States of America.
- Sven Kepes
- Department of Management and Entrepreneurship, Virginia Commonwealth University, Richmond, VA, United States of America.
- Ann-Kathrin Torka
- Department of Social, Work, and Organizational Psychology, TU Dortmund University, Dortmund, Germany.
14. Uygun Tunç D, Tunç MN, Eper ZB. Is Open Science Neoliberal? Perspect Psychol Sci 2023; 18:1047-1061. [PMID: 36476075] [PMCID: PMC10475209] [DOI: 10.1177/17456916221114835]
Abstract
The scientific-reform movement, frequently referred to as open science, has the potential to substantially reshape the nature of the scientific activity. For this reason, its sociopolitical antecedents and consequences deserve serious scholarly attention. In a recently formed literature that professes to meet this need, it has been widely argued that the movement is neoliberal. However, for two reasons it is hard to justify this widescale attribution: First, the critics mistakenly represent the movement as a monolithic structure, and second, the critics' arguments associating the movement with neoliberalism because of the movement's (a) preferential focus on methodological issues, (b) underlying philosophy of science, and (c) allegedly promarket ideological proclivities reflected in the methodology and science-policy proposals do not hold under closer scrutiny. These shortcomings show a lack of sufficient engagement with the reform literature. What is needed is more nuanced accounts of the sociopolitical underpinnings of scientific reform. To address this need, we propose a model for the analysis of reform proposals, which represents scientific methodology, axiology, science policy, and ideology as interconnected but relatively distinct domains, and thus allows for recognizing the divergent tendencies in the movement and the uniqueness of particular proposals.
15. Thompson WH, Skau S. On the scope of scientific hypotheses. R Soc Open Sci 2023; 10:230607. [PMID: 37650069] [PMCID: PMC10465209] [DOI: 10.1098/rsos.230607]
Abstract
Hypotheses are frequently the starting point when undertaking the empirical portion of the scientific process. They state something that the scientific process will attempt to evaluate, corroborate, verify or falsify. Their purpose is to guide the types of data we collect, analyses we conduct, and inferences we would like to make. Over the last decade, metascience has advocated for hypotheses being in preregistrations or registered reports, but how to formulate these hypotheses has received less attention. Here, we argue that hypotheses can vary in specificity along at least three independent dimensions: the relationship, the variables, and the pipeline. Together, these dimensions form the scope of the hypothesis. We demonstrate how narrowing the scope of a hypothesis in any of these three ways reduces the hypothesis space and that this reduction is a type of novelty. Finally, we discuss how this formulation can guide researchers to set the appropriate scope for their hypotheses, aiming for neither too broad nor too narrow a scope. This framework can guide hypothesis-makers when formulating their hypotheses by helping clarify what is being tested, chaining results to previously known findings, and demarcating what is explicitly tested in the hypothesis.
Affiliation(s)
- William Hedley Thompson
- Department of Applied Information Technology, University of Gothenburg, Gothenburg, Sweden
- Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Simon Skau
- Department of Pedagogical, Curricular and Professional Studies, Faculty of Education, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
16. Mathur MB, Fox MP. Toward Open and Reproducible Epidemiology. Am J Epidemiol 2023; 192:658-664. [PMID: 36627249] [PMCID: PMC10089067] [DOI: 10.1093/aje/kwad007]
Abstract
Starting in the 2010s, researchers in the experimental social sciences rapidly began to adopt increasingly open and reproducible scientific practices. These practices include publicly sharing deidentified data when possible, sharing analytical code, and preregistering study protocols. Empirical evidence from the social sciences suggests such practices are feasible, can improve analytical reproducibility, and can reduce selective reporting. In academic epidemiology, adoption of open-science practices has been slower than in the social sciences (with some notable exceptions, such as registering clinical trials). Epidemiologic studies are often large, complex, conceived after data have already been collected, and difficult to replicate directly by collecting new data. These characteristics make it especially important to ensure their integrity and analytical reproducibility. Open-science practices can also pay immediate dividends to researchers' own work by clarifying scientific reasoning and encouraging well-documented, organized workflows. We consider how established epidemiologists and early-career researchers alike can help midwife a culture of open science in epidemiology through their research practices, mentorship, and editorial activities.
Affiliation(s)
- Maya B Mathur
- Correspondence to Dr. Maya B. Mathur, Quantitative Sciences Unit, 3180 Porter Drive, Palo Alto, CA 94304
17
Stefan AM, Schönbrodt FD. Big little lies: a compendium and simulation of p-hacking strategies. R Soc Open Sci 2023;10:220346. PMID: 36778954. PMCID: PMC9905987. DOI: 10.1098/rsos.220346.
Abstract
In many research fields, the widespread use of questionable research practices has jeopardized the credibility of scientific results. One of the most prominent questionable research practices is p-hacking. Typically, p-hacking is defined as a compound of strategies targeted at rendering non-significant hypothesis testing results significant. However, a comprehensive overview of these p-hacking strategies is missing, and current meta-scientific research often ignores the heterogeneity of strategies. Here, we compile a list of 12 p-hacking strategies based on an extensive literature review, identify factors that control their level of severity, and demonstrate their impact on false-positive rates using simulation studies. We also use our simulation results to evaluate several approaches that have been proposed to mitigate the influence of questionable research practices. Our results show that investigating p-hacking at the level of strategies can provide a better understanding of the process of p-hacking, as well as a broader basis for developing effective countermeasures. By making our analyses available through a Shiny app and R package, we facilitate future meta-scientific research aimed at investigating the ramifications of p-hacking across multiple strategies, and we hope to start a broader discussion about different manifestations of p-hacking in practice.
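The core mechanism this paper simulates — repeatedly testing as data accumulate and stopping at significance inflates the false-positive rate — can be illustrated with a minimal sketch. This is not the authors' code, nor their full 12-strategy compendium; it is a hypothetical demonstration of one strategy (optional stopping) under the null hypothesis, with all parameter values chosen purely for illustration:

```python
import math
import random

random.seed(7)  # reproducibility of the simulation

def two_sample_z_p(a, b):
    """Two-sided p-value for a two-sample z-test with known sigma = 1."""
    na, nb = len(a), len(b)
    z = (sum(a) / na - sum(b) / nb) / math.sqrt(1 / na + 1 / nb)
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def optional_stopping_trial(n_start=20, n_max=100, step=10, alpha=0.05):
    """One simulated study under the null (both groups drawn from N(0, 1)).
    The researcher tests after every batch of new participants and stops
    as soon as p < alpha -- the 'optional stopping' p-hacking strategy."""
    a = [random.gauss(0, 1) for _ in range(n_start)]
    b = [random.gauss(0, 1) for _ in range(n_start)]
    while len(a) <= n_max:
        if two_sample_z_p(a, b) < alpha:
            return True  # declared 'significant' although no effect exists
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]
    return False

n_sims = 2000
rate = sum(optional_stopping_trial() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with optional stopping: {rate:.3f}")
```

Because the nominal 5% error guarantee holds only for a single pre-planned test, repeated peeking drives the realized false-positive rate well above alpha — the kind of per-strategy inflation the paper's simulation studies quantify.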
Affiliation(s)
- Angelika M. Stefan
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Department of Psychology, Universität der Bundeswehr München, München, Germany
- Felix D. Schönbrodt
- Department of Psychology, Ludwig-Maximilians-Universität München, München, Germany
18
Spitzer L, Mueller S. Registered report: Survey on attitudes and experiences regarding preregistration in psychological research. PLoS One 2023;18:e0281086. PMID: 36928664. PMCID: PMC10019715. DOI: 10.1371/journal.pone.0281086.
Abstract
BACKGROUND Preregistration, the open science practice of specifying and registering details of a planned study prior to knowing the data, increases the transparency and reproducibility of research. Large-scale replication attempts for psychological results yielded shockingly low success rates and contributed to an increasing demand for open science practices among psychologists. However, preregistering one's studies is still not the norm in the field. Here, we conducted a study to explore possible reasons for this discrepancy. METHODS In a mixed-methods approach, we conducted an online survey assessing attitudes, motivations, and perceived obstacles with respect to preregistration. Respondents (N = 289) were psychological researchers who were recruited through their publications on Web of Science, PubMed, PSYNDEX, and PsycInfo, and preregistrations on OSF Registries. Based on the theory of planned behavior, we predicted that positive attitudes (moderated by the perceived importance of preregistration) as well as a favorable subjective norm and higher perceived behavioral control positively influence researchers' intention to preregister (directional hypothesis 1). Furthermore, we expected an influence of research experience on attitudes and perceived motivations and obstacles regarding preregistration (non-directional hypothesis 2). We analyzed these hypotheses with multiple regression models and included preregistration experience as a control variable. RESULTS Researchers' attitudes, subjective norms, perceived behavioral control, and the perceived importance of preregistration significantly predicted researchers' intention to use preregistration in the future (see hypothesis 1). Research experience influenced both researchers' attitudes and their perception of motivations to preregister, but not the perception of obstacles (see hypothesis 2). Descriptive reports on researchers' attitudes, motivations, and obstacles regarding preregistration are provided.
DISCUSSION Many researchers had already preregistered and had a rather positive attitude toward preregistration. Nevertheless, several obstacles were identified that may be addressed to improve and foster preregistration.
Affiliation(s)
- Lisa Spitzer
- Leibniz Institute for Psychology (ZPID), Trier, Germany
19
Sarafoglou A, Hoogeveen S, Wagenmakers EJ. Comparing Analysis Blinding With Preregistration in the Many-Analysts Religion Project. Adv Methods Pract Psychol Sci 2023. DOI: 10.1177/25152459221128319.
Abstract
In psychology, preregistration is the most widely used method to ensure the confirmatory status of analyses. However, the method has disadvantages: Not only is it perceived as effortful and time-consuming, but reasonable deviations from the analysis plan demote the status of the study to exploratory. An alternative to preregistration is analysis blinding, in which researchers develop their analysis on an altered version of the data. In this experimental study, we compare the reported efficiency and convenience of the two methods in the context of the Many-Analysts Religion Project. In this project, 120 teams answered the same research questions on the same data set, either preregistering their analysis (n = 61) or using analysis blinding (n = 59). Our results provide strong evidence (Bayes factor [BF] = 71.40) for the hypothesis that analysis blinding leads to fewer deviations from the analysis plan, and if teams deviated, they did so on fewer aspects. Contrary to our hypothesis, we found strong evidence (BF = 13.19) that both methods required approximately the same amount of time. Finally, we found no evidence that analysis blinding was perceived as less effortful, and moderate evidence that it was perceived as less frustrating. We conclude that analysis blinding does not mean less work, but researchers can still benefit from the method because they can plan more appropriate analyses from which they deviate less frequently.
20
Hardwicke TE, Wagenmakers EJ. Reducing bias, increasing transparency and calibrating confidence with preregistration. Nat Hum Behav 2023;7:15-26. PMID: 36707644. DOI: 10.1038/s41562-022-01497-2.
Abstract
Flexibility in the design, analysis and interpretation of scientific studies creates a multiplicity of possible research outcomes. Scientists are granted considerable latitude to selectively use and report the hypotheses, variables and analyses that create the most positive, coherent and attractive story while suppressing those that are negative or inconvenient. This creates a risk of bias that can lead to scientists fooling themselves and fooling others. Preregistration involves declaring a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known. Preregistration (1) reduces the risk of bias by encouraging outcome-independent decision-making and (2) increases transparency, enabling others to assess the risk of bias and calibrate their confidence in research outcomes. In this Perspective, we briefly review the historical evolution of preregistration in medicine, psychology and other domains, clarify its pragmatic functions, discuss relevant meta-research, and provide recommendations for scientists and journal editors.
Affiliation(s)
- Tom E Hardwicke
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands.
21
Rubin M, Donkin C. Exploratory hypothesis tests can be more compelling than confirmatory hypothesis tests. Philos Psychol 2022. DOI: 10.1080/09515089.2022.2113771.
Affiliation(s)
- Mark Rubin
- Department of Psychology, Durham University, Durham, UK
- Chris Donkin
- Faculty of Psychology and Educational Sciences, Ludwig Maximilian University of Munich, Munich, Germany
22
Schneider J, Rosman T, Kelava A, Merk S. Do Open-Science Badges Increase Trust in Scientists Among Undergraduates, Scientists, and the Public? Psychol Sci 2022;33:1588-1604. PMID: 36001881. DOI: 10.1177/09567976221097499.
Abstract
In three experimental studies, we investigated whether badges for open-science practices have the potential to affect trust in scientists and topic-specific epistemic beliefs among student teachers (n = 270), social scientists (n = 250), or the public (n = 257), all of whom were at least 16 years old. Furthermore, we analyzed the moderating role of epistemic beliefs for badges and trust. Each participant was randomly assigned to two of three conditions: badges awarded, badges not awarded, and no badges (control). In all samples, our Bayesian analyses indicated that badges influence trust as expected, with one exception in the public sample: An additional positive effect of awarded badges (compared with no badges) was not supported. For students and scientists, we found evidence for a relation between badges and epistemic beliefs as well as between epistemic beliefs and trust. Further, we found evidence for the absence of moderation by epistemic beliefs.
Affiliation(s)
- Tom Rosman
- Department of Research Literacy and User Friendly Research Support, Leibniz Institute for Psychology Information
- Samuel Merk
- Institute for School and Instructional Development in Primary and Secondary Education, University of Education Karlsruhe
23
Muñoz-Tamayo R, Nielsen BL, Gagaoua M, Gondret F, Krause ET, Morgavi DP, Olsson IAS, Pastell M, Taghipoor M, Tedeschi L, Veissier I, Nawroth C. Seven steps to enhance Open Science practices in animal science. PNAS Nexus 2022;1:pgac106. PMID: 36741429. PMCID: PMC9896936. DOI: 10.1093/pnasnexus/pgac106.
Abstract
The Open Science movement aims at ensuring accessibility, reproducibility, and transparency of research. The adoption of Open Science practices in animal science, however, is still at an early stage. To move ahead as a field, we here provide seven practical steps to embrace Open Science in animal science. We hope that this paper contributes to the shift in research practices of animal scientists towards open, reproducible, and transparent science, enabling the field to gain additional public trust and deal with future challenges to guarantee reliable research. Although the paper targets primarily animal science researchers, the steps discussed here are also applicable to other research domains.
Affiliation(s)
- Rafael Muñoz-Tamayo
- INRAE, AgroParisTech, Université Paris-Saclay, UMR Modélisation Systémique Appliquée aux Ruminants, 75005 Paris, France
- Birte L Nielsen
- Universities Federation for Animal Welfare (UFAW), The Old School, Brewhouse Hill, Wheathampstead, Hertfordshire AL4 8AN, UK
- E Tobias Krause
- Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Dörnbergstr. 25/27, 29223 Celle, Germany
- Diego P Morgavi
- Université Clermont Auvergne, INRAE, VetAgro Sup, UMR Herbivores, F-63122 Saint-Genès-Champanelle, France
- I Anna S Olsson
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Rua Alfredo Allen 208, 4200-180 Porto, Portugal
- Matti Pastell
- Natural Resources Institute Finland (Luke), Production Systems, Latokartanonkaari 9, FI-00790 Helsinki, Finland
- Masoomeh Taghipoor
- INRAE, AgroParisTech, Université Paris-Saclay, UMR Modélisation Systémique Appliquée aux Ruminants, 75005 Paris, France
- Luis Tedeschi
- Department of Animal Science, Texas A&M University, College Station, TX 77843-2471, USA
- Isabelle Veissier
- Université Clermont Auvergne, INRAE, VetAgro Sup, UMR Herbivores, F-63122 Saint-Genès-Champanelle, France
- Christian Nawroth
- Institute of Behavioural Physiology, Research Institute for Farm Animal Biology (FBN), Wilhelm-Stahl-Allee 2, 18196 Dummerstorf, Germany
24
Discrepancy review: a feasibility study of a novel peer review intervention to reduce undisclosed discrepancies between registrations and publications. R Soc Open Sci 2022;9:220142. PMID: 35911195. PMCID: PMC9326291. DOI: 10.1098/rsos.220142.
Abstract
Undisclosed discrepancies often exist between study registrations and their associated publications. Discrepancies can increase risk of bias, and when undisclosed, they disguise this increased risk of bias from readers. To remedy this issue, we developed an intervention called discrepancy review. We provided journals with peer reviewers specifically assigned to check for undisclosed discrepancies between registrations and manuscripts submitted to journals. We performed discrepancy review on 18 manuscripts submitted to Nicotine and Tobacco Research and three manuscripts submitted to the European Journal of Personality. We iteratively refined the discrepancy review process based on feedback from discrepancy reviewers, editors and authors. Authors addressed the majority of discrepancy reviewer comments, and there was no opposition to running a trial from authors, editors or discrepancy reviewers. Outcome measures for a trial of discrepancy review could include the presence of primary or secondary outcome discrepancies, whether publications that are not the primary report from a clinical trial registration are clearly described as such, whether registrations are permanent, and an overarching subjective assessment of the impact of discrepancies in published articles. We found that discrepancy review could feasibly be introduced as a regular practice at some journals interested in this process. A full trial of discrepancy review would be needed to evaluate its impact on reducing undisclosed discrepancies.
25
Wender CLA, Manninen M, O’Connor PJ. The Effect of Chronic Exercise on Energy and Fatigue States: A Systematic Review and Meta-Analysis of Randomized Trials. Front Psychol 2022;13:907637. PMID: 35726269. PMCID: PMC9206544. DOI: 10.3389/fpsyg.2022.907637.
Abstract
In this meta-analysis, we synthesized the results of randomized controlled trials of different exercise training interventions on participants’ feelings of fatigue, energy, and vitality. The search of studies was conducted using six databases as well as several other supplementary search strategies available before December 2021. The initial search generated over 3,600 articles, with 81 studies (7,050 participants) and 172 effects meeting the inclusion criteria. We analyzed the effects from the studies using a meta-analytic multivariate model and considered the potential moderating effect of multiple variables. Our analysis revealed that exercise decreased feelings of fatigue by a small effect size (g = −0.374; 95% CI [−0.521, −0.227]), increased energy by a small-to-moderate effect size (g = 0.415; 95% CI [0.252, 0.578]), and increased feelings of vitality by a moderate effect size (g = 0.537; 95% CI [0.404, 0.671]). All main results remained robust after several sensitivity analyses using different statistical estimators and consideration of outlier and influential studies. Moreover, moderator analyses revealed significant effects of exercise intensity and intervention duration on fatigue, of exercise intensity and modality on energy, and of participant health, exercise intensity, modality, and exercise training location on vitality. We conclude that when groups adopt a moderate-intensity exercise training program while participating in a randomized trial, compared to controls, this typically results in small-to-moderate average improvements in feelings of fatigue, energy, and vitality.
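The effect sizes reported in this abstract are Hedges' g, i.e., the standardized mean difference (Cohen's d) multiplied by a small-sample correction factor. A minimal sketch with made-up group summaries — the numbers are illustrative, not data from the meta-analysis:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction J."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # bias-correction factor
    return j * d

# Hypothetical trial: exercise group reports higher mean energy than controls
g = hedges_g(m1=5.2, sd1=1.1, n1=40, m2=4.7, sd2=1.2, n2=40)
print(f"Hedges' g = {g:.3f}")
```

On the conventional benchmarks used in the abstract, a g around 0.2 is small, 0.5 moderate, and 0.8 large.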
Affiliation(s)
- Carly L. A. Wender
- Center for Traumatic Brain Injury Research, Kessler Foundation, East Hanover, NJ, United States
- Department of Physical Medicine and Rehabilitation, Rutgers New Jersey Medical School, Newark, NJ, United States
- Correspondence: Carly L. A. Wender
- Mika Manninen
- School of Health and Human Performance, Dublin City University, Dublin, Ireland
- Patrick J. O’Connor
- Exercise Psychology Laboratory, Department of Kinesiology, College of Education, University of Georgia, Athens, GA, United States
26
Giner-Sorolla R. Valedictory editorial. J Exp Soc Psychol 2022. DOI: 10.1016/j.jesp.2021.104271.
27
Scheel AM. Why most psychological research findings are not even wrong. Infant Child Dev 2022. DOI: 10.1002/icd.2295.
Affiliation(s)
- Anne M. Scheel
- Human-Technology Interaction Group, Eindhoven University of Technology, Eindhoven, The Netherlands