1
Linden AH, Pollet TV, Hönekopp J. Publication bias in psychology: A closer look at the correlation between sample size and effect size. PLoS One 2024; 19:e0297075. [PMID: 38359021] [PMCID: PMC10868788] [DOI: 10.1371/journal.pone.0297075]
Abstract
Previously observed negative correlations between sample size and effect size (n-ES correlation) in psychological research have been interpreted as evidence for publication bias and related undesirable biases. Here, we present two studies aimed at better understanding to what extent negative n-ES correlations reflect such biases or might be explained by unproblematic adjustments of sample size to expected effect sizes. In Study 1, we analysed n-ES correlations in 150 meta-analyses from cognitive, organizational, and social psychology and in 57 multiple replications, which are free from relevant biases. In Study 2, we used a random sample of 160 psychology papers to compare the n-ES correlation for effects that are central to these papers and effects selected at random from these papers. n-ES correlations proved inconspicuous in meta-analyses. In line with previous research, they do not suggest that publication bias and related biases have a strong impact on meta-analyses in psychology. A much higher n-ES correlation emerged for publications' focal effects. To what extent this should be attributed to publication bias and related biases remains unclear.
Affiliation(s)
- Audrey Helen Linden: Centre for Research in Autism and Education (CRAE), Department of Psychology, University College London, London, United Kingdom; Department of Psychology and Counselling, The Open University, Walton, United Kingdom
- Thomas V. Pollet: Department of Psychology, Northumbria University, Northumbria, United Kingdom
- Johannes Hönekopp: Department of Psychology, Northumbria University, Northumbria, United Kingdom
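The n-ES correlation this entry examines is just the rank correlation, across the studies in a meta-analysis, between each study's sample size and its effect size. A minimal sketch (the arrays below are invented toy values, not data from the paper):

```python
import numpy as np
from scipy.stats import spearmanr

def n_es_correlation(sample_sizes, effect_sizes):
    """Spearman correlation between study sample sizes and effect sizes.

    A strongly negative value is the classic warning sign: small studies
    reporting large effects, as expected under publication bias -- or under
    benign tuning of n to the effect size a researcher anticipated.
    """
    rho, p_value = spearmanr(sample_sizes, effect_sizes)
    return rho, p_value

# Toy meta-analysis where small studies report large effects.
ns = np.array([20, 35, 50, 80, 120, 200])
ds = np.array([0.85, 0.60, 0.45, 0.30, 0.22, 0.10])
rho, p = n_es_correlation(ns, ds)
```

As the paper stresses, a negative rho on its own cannot distinguish bias from sensible power analysis, which is why the authors compare meta-analyses against bias-free multiple replications.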
2
Baumeister RF, Tice DM, Bushman BJ. A Review of Multisite Replication Projects in Social Psychology: Is It Viable to Sustain Any Confidence in Social Psychology's Knowledge Base? Perspect Psychol Sci 2023; 18:912-935. [PMID: 36442681] [DOI: 10.1177/17456916221121815]
Abstract
Multisite (multilab/many-lab) replications have emerged as a popular way of verifying prior research findings, but their record in social psychology has prompted distrust of the field and a sense of crisis. We review all 36 multisite social-psychology replications (plus three articles reporting multiple ministudies). We start by assuming that both the original studies and the multisite replications were conducted in an honest and diligent fashion, despite often yielding different conclusions. Four of the 36 (11%) were clearly successful in providing significant support for the original hypothesis, and five others (14%) had mixed results. The remaining 27 (75%) were failures. Multiple explanations for the generally poor record of replications are considered, including the possibility that the original hypothesis was wrong; operational failure; low engagement of participants; and bias toward failure. The relevant evidence is assessed as well. There was evidence for each of the possibilities listed above, with low engagement emerging as a widespread problem (reflected in high rates of discarded data and weak manipulation checks). The few procedures with actual interpersonal interaction fared much better than others. We discuss implications in relation to manipulation checks, effect sizes, and impact on the field and offer recommendations for improving future multisite projects.
3
Billingsley J, Forster DE, Russell VM, Smith A, Burnette JL, Ohtsubo Y, Lieberman D, McCullough ME. Perceptions of relationship value and exploitation risk mediate the effects of transgressors' post-harm communications upon forgiveness. Evol Hum Behav 2023. [DOI: 10.1016/j.evolhumbehav.2023.02.012]
4
Fraley RC, Chong JY, Baacke KA, Greco AJ, Guan H, Vazire S. Journal N-Pact Factors From 2011 to 2019: Evaluating the Quality of Social/Personality Journals With Respect to Sample Size and Statistical Power. Advances in Methods and Practices in Psychological Science 2022. [DOI: 10.1177/25152459221120217]
Abstract
Scholars and institutions commonly use impact factors to evaluate the quality of empirical research. However, a number of findings published in journals with high impact factors have failed to replicate, suggesting that impact alone may not be an accurate indicator of quality. Fraley and Vazire proposed an alternative index, the N-pact factor, which indexes the median sample size of published studies, providing a narrow but relevant indicator of research quality. In the present research, we expand on the original report by examining the N-pact factor of social/personality-psychology journals between 2011 and 2019, incorporating additional journals and accounting for study design (i.e., between persons, repeated measures, and mixed). There was substantial variation in the sample sizes used in studies published in different journals. Journals that emphasized personality processes and individual differences had larger N-pact factors than journals that emphasized social-psychological processes. Moreover, N-pact factors were largely independent of traditional markers of impact. Although the majority of journals in 2011 published studies that were not well powered to detect an effect of ρ = .20, this situation had improved considerably by 2019. In 2019, eight of the nine journals we sampled published studies that were, on average, powered at 80% or higher to detect such an effect. After decades of unheeded warnings from methodologists about the dangers of small-sample designs, the field of social/personality psychology has begun to use larger samples. We hope the N-pact factor will be supplemented by other indices that can be used as alternatives to further improve the evaluation of research.
Affiliation(s)
- R. Chris Fraley: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Jia Y. Chong: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Kyle A. Baacke: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Anthony J. Greco: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Hanxiong Guan: Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Simine Vazire: Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
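The abstract's benchmark — 80% power to detect an effect of ρ = .20 at a journal's median sample size — can be approximated with the Fisher z transform. This is a sketch of the standard approximation, not the authors' code:

```python
import math
from scipy.stats import norm

def power_for_correlation(n, rho=0.20, alpha=0.05):
    """Approximate two-tailed power to detect a population correlation rho
    with a sample of n, using the Fisher z normal approximation."""
    if n <= 3:
        return 0.0
    z_rho = math.atanh(rho)            # Fisher z of the true correlation
    se = 1.0 / math.sqrt(n - 3)        # standard error of the z estimate
    z_crit = norm.ppf(1 - alpha / 2)   # critical value, e.g. 1.96
    # Mass in the wrong tail is negligible for rho > 0, so one term suffices.
    return float(norm.sf(z_crit - z_rho / se))

# An N-pact factor is simply the median n of a journal's published studies;
# evaluating power at that median shows what the journal can typically detect.
power_at_194 = power_for_correlation(194)   # close to the .80 benchmark
power_at_50 = power_for_correlation(50)     # well under it
```

About 190 participants are needed for 80% power at ρ = .20, which is why median sample sizes well below that imply routinely underpowered studies.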
6
Nosek BA, Hardwicke TE, Moshontz H, Allard A, Corker KS, Dreber A, Fidler F, Hilgard J, Struhl MK, Nuijten MB, Rohrer JM, Romero F, Scheel AM, Scherer LD, Schönbrodt FD, Vazire S. Replicability, Robustness, and Reproducibility in Psychological Science. Annu Rev Psychol 2021; 73:719-748. [PMID: 34665669] [DOI: 10.1146/annurev-psych-020821-114157]
Abstract
Replication-an important, uncommon, and misunderstood practice-is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.
Affiliation(s)
- Brian A Nosek: Department of Psychology, University of Virginia, Charlottesville, Virginia 22904, USA; Center for Open Science, Charlottesville, Virginia 22903, USA
- Tom E Hardwicke: Department of Psychology, University of Amsterdam, 1012 ZA Amsterdam, The Netherlands
- Hannah Moshontz: Addiction Research Center, University of Wisconsin-Madison, Madison, Wisconsin 53706, USA
- Aurélien Allard: Department of Psychology, University of California, Davis, California 95616, USA
- Katherine S Corker: Psychology Department, Grand Valley State University, Allendale, Michigan 49401, USA
- Anna Dreber: Department of Economics, Stockholm School of Economics, 113 83 Stockholm, Sweden
- Fiona Fidler: School of Biosciences, University of Melbourne, Parkville VIC 3010, Australia
- Joe Hilgard: Department of Psychology, Illinois State University, Normal, Illinois 61790, USA
- Michèle B Nuijten: Meta-Research Center, Tilburg University, 5037 AB Tilburg, The Netherlands
- Julia M Rohrer: Department of Psychology, Leipzig University, 04109 Leipzig, Germany
- Felipe Romero: Department of Theoretical Philosophy, University of Groningen, 9712 CP, The Netherlands
- Anne M Scheel: Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Laura D Scherer: University of Colorado Anschutz Medical Campus, Aurora, Colorado 80045, USA
- Felix D Schönbrodt: Department of Psychology, Ludwig Maximilian University of Munich, 80539 Munich, Germany
- Simine Vazire: School of Psychological Sciences, University of Melbourne, Parkville VIC 3052, Australia
7
Forster DE, Billingsley J, Burnette JL, Lieberman D, Ohtsubo Y, McCullough ME. Experimental evidence that apologies promote forgiveness by communicating relationship value. Sci Rep 2021; 11:13107. [PMID: 34162912] [PMCID: PMC8222305] [DOI: 10.1038/s41598-021-92373-y]
Abstract
Robust evidence supports the importance of apologies for promoting forgiveness. Yet less is known about how apologies exert their effects. Here, we focus on their potential to promote forgiveness by way of increasing perceptions of relationship value. We used a method for directly testing these causal claims by manipulating both the independent variable and the proposed mediator. Namely, we used a 2 (Apology: yes vs. no) × 2 (Value: high vs. low) concurrent double-randomization design to test whether apologies cause forgiveness by affecting the same causal pathway as relationship value. In addition to supporting this causal claim, we also found that apologies had weaker effects on forgiveness when received from high-value transgressors, suggesting that the forgiveness-relevant information provided by apologies is redundant with relationship value. Taken together, these findings from a rigorous methodological paradigm help us parse out how apologies promote relationship repair.
Affiliation(s)
- Daniel E Forster: U.S. Combat Capabilities Development Command Army Research Laboratory, Aberdeen Proving Ground, MD, USA; University of Miami, Coral Gables, FL, USA
- Joseph Billingsley: University of Miami, Coral Gables, FL, USA; North Carolina State University, Raleigh, NC, USA
- Yohsuke Ohtsubo: Kobe University, Kobe, Japan; University of Tokyo, Tokyo, Japan
- Michael E McCullough: University of Miami, Coral Gables, FL, USA; University of California, San Diego, CA, USA
8
Williams AJ, Botanov Y, Kilshaw RE, Wong RE, Sakaluk JK. Potentially harmful therapies: A meta-scientific review of evidential value. Clinical Psychology: Science and Practice 2021. [DOI: 10.1111/cpsp.12331]
9
Abstract
Heterogeneity emerges when multiple close or conceptual replications on the same subject produce results that vary more than expected from sampling error alone. Here we argue that unexplained heterogeneity reflects a lack of coherence between the concepts applied and data observed and therefore a lack of understanding of the subject matter. Typical levels of heterogeneity thus offer a useful but neglected perspective on the levels of understanding achieved in psychological science. Focusing on continuous outcome variables, we surveyed heterogeneity in 150 meta-analyses from cognitive, organizational, and social psychology and 57 multiple close replications. Heterogeneity proved to be very high in meta-analyses, with powerful moderators being conspicuously absent. Population effects in the average meta-analysis vary from small to very large for reasons that are typically not understood. In contrast, heterogeneity was moderate in close replications. A newly identified relationship between heterogeneity and effect size allowed us to make predictions about expected heterogeneity levels. We discuss important implications for the formulation and evaluation of theories in psychology. On the basis of insights from the history and philosophy of science, we argue that the reduction of heterogeneity is important for progress in psychology and its practical applications, and we suggest changes to our collective research practice toward this end.
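The heterogeneity this abstract surveys is conventionally quantified with Cochran's Q, I², and the DerSimonian-Laird τ² estimate; a compact sketch (the formulas are standard, the interface is invented):

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q, I^2 (share of variability beyond sampling error), and
    the DerSimonian-Laird estimate of tau^2 (between-study variance)."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    mu = np.sum(w * y) / np.sum(w)                 # fixed-effect pooled mean
    q = float(np.sum(w * (y - mu) ** 2))           # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    c = float(np.sum(w) - np.sum(w ** 2) / np.sum(w))
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    return q, i2, tau2

# Identical effects: no heterogeneity. Widely spread effects: I^2 near 1.
q0, i2_0, tau2_0 = heterogeneity([0.3, 0.3, 0.3, 0.3], [0.01] * 4)
q1, i2_1, tau2_1 = heterogeneity([0.1, 0.9], [0.01, 0.01])
```

"Very high heterogeneity" in the abstract's sense corresponds to I² values where most observed variability is between-study rather than sampling noise.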
10
Abstract
Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned - that is, that the two must refer to roughly the same set of hypothetical observations. Here, I argue that many applications of statistical inference in psychology fail to meet this basic condition. Focusing on the most widely used class of model in psychology - the linear mixed model - I explore the consequences of failing to statistically operationalize verbal hypotheses in a way that respects researchers' actual generalization intentions. I demonstrate that although the "random effect" formalism is used pervasively in psychology to model intersubject variability, few researchers accord the same treatment to other variables they clearly intend to generalize over (e.g., stimuli, tasks, or research sites). The under-specification of random effects imposes far stronger constraints on the generalizability of results than most researchers appreciate. Ignoring these constraints can dramatically inflate false-positive rates, and often leads researchers to draw sweeping verbal generalizations that lack a meaningful connection to the statistical quantities they are putatively based on. I argue that failure to take the alignment between verbal and statistical expressions seriously lies at the heart of many of psychology's ongoing problems (e.g., the replication crisis), and conclude with a discussion of several potential avenues for improvement.
Affiliation(s)
- Tal Yarkoni: Department of Psychology, The University of Texas at Austin, Austin, TX 78712-1043
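The abstract's central claim — that treating randomly sampled stimuli as fixed inflates false positives — is easy to verify by simulation. In this sketch (design, SDs, and simulation counts are all invented for illustration), each condition gets its own random sample of stimuli and there is no true condition effect:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def null_experiment(n_stim=4, n_trials=20, stim_sd=0.5, trial_sd=1.0):
    """One null experiment: two conditions, each with its own random stimuli.
    All systematic variation comes from the stimuli, not the conditions."""
    conditions = []
    for _ in range(2):
        stim_means = rng.normal(0.0, stim_sd, n_stim)
        trials = rng.normal(stim_means[:, None], trial_sd, (n_stim, n_trials))
        conditions.append(trials)
    return conditions

n_sims = 500
fp_trial = fp_stim = 0
for _ in range(n_sims):
    a, b = null_experiment()
    # Trial-level test: stimuli wrongly treated as fixed -> inflated alpha.
    if ttest_ind(a.ravel(), b.ravel()).pvalue < 0.05:
        fp_trial += 1
    # Test on per-stimulus means: respects the stimulus sampling unit.
    if ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue < 0.05:
        fp_stim += 1

rate_trial = fp_trial / n_sims   # far above the nominal .05
rate_stim = fp_stim / n_sims     # near the nominal .05
```

Aggregating to stimulus means is a crude stand-in for the crossed random-effects models the abstract discusses, but it makes the inflation mechanism concrete.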
11
Protzko J, Schooler JW. No relationship between researcher impact and replication effect: an analysis of five studies with 100 replications. PeerJ 2020; 8:e8014. [PMID: 32231868] [PMCID: PMC7100597] [DOI: 10.7717/peerj.8014]
Abstract
What explanation is there when teams of researchers are unable to successfully replicate already established 'canonical' findings? One suggestion that has been put forward, but left largely untested, is that those researchers who fail to replicate prior studies are of low 'expertise and diligence' and lack the skill necessary to successfully replicate the conditions of the original experiment. Here we examine the replication success of 100 scientists of differing 'expertise and diligence' who attempted to replicate five different studies. Using a bibliometric tool (h-index) as our indicator of researcher 'expertise and diligence', we examine whether this was predictive of replication success. Although there was substantial variability in replication success and in the h-index of the investigators, we find no relationship between these variables. The present results provide no evidence for the hypothesis that systematic replications fail because of low 'expertise and diligence' among replicators.
Affiliation(s)
- John Protzko: Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States of America
- Jonathan W. Schooler: Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States of America
12
Nzinga K, Rapp DN, Leatherwood C, Easterday M, Rogers LO, Gallagher N, Medin DL. Should social scientists be distanced from or engaged with the people they study? Proc Natl Acad Sci U S A 2018; 115:11435-41. [PMID: 30397119] [DOI: 10.1073/pnas.1721167115]
Abstract
This commentary focuses on two important contrasts in the behavioral sciences: (i) default versus nondefault study populations, where default samples have been used disproportionately (for psychology, the default is undergraduates at major research universities), and (ii) the adoption of a distant versus close (engaged) attitude toward study samples. Previous research has shown a strong correlation between these contrasts, where default samples and distant perspectives are the norm. Distancing is sometimes seen as necessary for objectivity, and an engaged orientation is sometimes criticized as biased, advocacy research, especially if the researcher shares a social group membership with the study population (e.g., a black male researcher studying black male students). The lack of diversity in study samples has been paralleled by a lack of diversity in the researchers themselves. The salience of default samples and distancing in prior research creates potential (and presumed) risk factors for engaged research with nondefault samples. However, a distant perspective poses risks as well, and particularly so for research with nondefault populations. We suggest that engaged research can usefully encourage attention to the study context and taking the perspective of study samples, both of which are good research practices. More broadly, we argue that social and educational sciences need skepticism, interestedness, and engagement, not distancing. Fostering an engaged perspective in research may also foster a more diverse population of social scientists.
13
Affiliation(s)
- Blakeley B. McShane: Department of Marketing, Kellogg School of Management, Northwestern University, Evanston, IL
- Ulf Böckenholt: Department of Marketing, Kellogg School of Management, Northwestern University, Evanston, IL
- Andrew Gelman: Department of Statistics and Department of Political Science, Columbia University, New York, NY
15
McCarthy RJ, Skowronski JJ, Verschuere B, Meijer EH, Jim A, Hoogesteyn K, Orthey R, Acar OA, Aczel B, Bakos BE, Barbosa F, Baskin E, Bègue L, Ben-Shakhar G, Birt AR, Blatz L, Charman SD, Claesen A, Clay SL, Coary SP, Crusius J, Evans JR, Feldman N, Ferreira-Santos F, Gamer M, Gerlsma C, Gomes S, González-Iraizoz M, Holzmeister F, Huber J, Huntjens RJC, Isoni A, Jessup RK, Kirchler M, klein Selle N, Koppel L, Kovacs M, Laine T, Lentz F, Loschelder DD, Ludvig EA, Lynn ML, Martin SD, McLatchie NM, Mechtel M, Nahari G, Özdoğru AA, Pasion R, Pennington CR, Roets A, Rozmann N, Scopelliti I, Spiegelman E, Suchotzki K, Sutan A, Szecsi P, Tinghög G, Tisserand JC, Tran US, Van Hiel A, Vanpaemel W, Västfjäll D, Verliefde T, Vezirian K, Voracek M, Warmelink L, Wick K, Wiggins BJ, Wylie K, Yıldız E. Registered Replication Report on Srull and Wyer (1979). Advances in Methods and Practices in Psychological Science 2018. [DOI: 10.1177/2515245918777487]
Abstract
Srull and Wyer (1979) demonstrated that exposing participants to more hostility-related stimuli caused them subsequently to interpret ambiguous behaviors as more hostile. In their Experiment 1, participants descrambled sets of words to form sentences. In one condition, 80% of the descrambled sentences described hostile behaviors, and in another condition, 20% described hostile behaviors. Following the descrambling task, all participants read a vignette about a man named Donald who behaved in an ambiguously hostile manner and then rated him on a set of personality traits. Next, participants rated the hostility of various ambiguously hostile behaviors (all ratings on scales from 0 to 10). Participants who descrambled mostly hostile sentences rated Donald and the ambiguous behaviors as approximately 3 scale points more hostile than did those who descrambled mostly neutral sentences. This Registered Replication Report describes the results of 26 independent replications (N = 7,373 in the total sample; k = 22 labs and N = 5,610 in the primary analyses) of Srull and Wyer's Experiment 1, each of which followed a preregistered and vetted protocol. A random-effects meta-analysis showed that the protagonist was seen as 0.08 scale points more hostile when participants were primed with 80% hostile sentences than when they were primed with 20% hostile sentences (95% confidence interval, CI = [0.004, 0.16]). The ambiguously hostile behaviors were seen as 0.08 points less hostile when participants were primed with 80% hostile sentences than when they were primed with 20% hostile sentences (95% CI = [−0.18, 0.01]). Although the confidence interval for one outcome excluded zero and the observed effect was in the predicted direction, these results suggest that the currently used methods do not produce an assimilative priming effect that is practically and routinely detectable.
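The pooled estimate and 95% CI reported here (0.08, CI [0.004, 0.16]) are the output of a random-effects meta-analysis across labs. A minimal inverse-variance sketch, with τ² supplied externally and toy numbers in place of the project's data:

```python
import numpy as np
from scipy.stats import norm

def pooled_effect(effects, variances, tau2=0.0):
    """Inverse-variance pooled mean and 95% CI.

    tau2 = 0 gives the fixed-effect model; a positive tau2 (estimated
    elsewhere, e.g. by DerSimonian-Laird) gives the random-effects model,
    which evens out the weights across labs and widens the CI.
    """
    y = np.asarray(effects, dtype=float)
    w = 1.0 / (np.asarray(variances, dtype=float) + tau2)
    mu = float(np.sum(w * y) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    z = norm.ppf(0.975)
    return mu, (mu - z * se, mu + z * se)

# Toy per-lab estimates: a tiny pooled effect whose CI excludes zero --
# statistically "significant" yet arguably not practically detectable.
labs = [0.02, 0.10, 0.12, 0.05, 0.11]
vs = [0.002, 0.003, 0.002, 0.004, 0.003]
mu, (lo, hi) = pooled_effect(labs, vs)
mu_re, (lo_re, hi_re) = pooled_effect(labs, vs, tau2=0.01)
```

The toy output mirrors the abstract's situation: a pooled CI can sit just above zero while the effect itself is a fraction of the original 3-point finding.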
16
Abstract
The credibility revolution (sometimes referred to as the “replicability crisis”) in psychology has brought about many changes in the standards by which psychological science is evaluated. These changes include (a) greater emphasis on transparency and openness, (b) a move toward preregistration of research, (c) more direct-replication studies, and (d) higher standards for the quality and quantity of evidence needed to make strong scientific claims. What are the implications of these changes for productivity, creativity, and progress in psychological science? These questions can and should be studied empirically, and I present my predictions here. The productivity of individual researchers is likely to decline, although some changes (e.g., greater collaboration, data sharing) may mitigate this effect. The effects of these changes on creativity are likely to be mixed: Researchers will be less likely to pursue risky questions; more likely to use a broad range of methods, designs, and populations; and less free to define their own best practices and standards of evidence. Finally, the rate of scientific progress—the most important shared goal of scientists—is likely to increase as a result of these changes, although one’s subjective experience of making progress will likely become rarer.
17
Krefeld-Schwalb A, Witte EH, Zenker F. Hypothesis-Testing Demands Trustworthy Data-A Simulation Approach to Inferential Statistics Advocating the Research Program Strategy. Front Psychol 2018; 9:460. [PMID: 29740363] [PMCID: PMC5928294] [DOI: 10.3389/fpsyg.2018.00460]
Abstract
In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a “pure” Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost during the ongoing replicability-crisis.
Affiliation(s)
- Erich H Witte: Institute for Psychology, University of Hamburg, Hamburg, Germany
- Frank Zenker: Department of Philosophy, Lund University, Lund, Sweden
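The simulation-based error-rate estimates that this abstract builds on can be sketched directly (effect size, n, and simulation counts here are arbitrary illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

def significance_rate(true_d, n_per_group, n_sims=2000, alpha=0.05):
    """Fraction of simulated two-sample t-tests with p < alpha.

    With true_d = 0 this estimates the false-positive rate (near alpha);
    with true_d > 0 it estimates power, whose shortfall from 1 is the
    false-negative rate that underpowered NHST quietly tolerates.
    """
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(true_d, 1.0, n_per_group)
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

fp_rate = significance_rate(true_d=0.0, n_per_group=20)  # about .05
power = significance_rate(true_d=0.4, n_per_group=20)    # far below .80
```

The gap between the nominal alpha and the low power for a modest true effect is the error-rate asymmetry the abstract's research program strategy is meant to repair.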
18
Affiliation(s)
- Min Zhang: Rady School of Management, University of California, San Diego
- Pamela K Smith: Rady School of Management, University of California, San Diego
19
Affiliation(s)
- Leif D. Nelson: Haas School of Business, University of California, Berkeley, California 94720
- Joseph Simmons: The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Uri Simonsohn: The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104
20
Abstract
“Crowdsourcing” is a methodological approach in which several researchers coordinate their resources to achieve research goals that would otherwise be difficult to attain individually. This article introduces a Nexus—a collection of empirical and theoretical articles that will be published in Collabra: Psychology—that is intended to encourage more crowdsourced research in psychological science by providing a specific outlet for such projects and by assisting researchers in developing and executing their projects. We describe how individuals can propose and lead a crowdsourced research project, how individuals can contribute to other ongoing projects, and other ways to contribute to this Nexus. Ultimately, we hope this Nexus will contain a set of highly informative articles that demonstrate the flexibility and range of the types of research questions that can be addressed with crowdsourced research methods.
Affiliation(s)
- Randy J. McCarthy: Center for the Study of Family Violence and Sexual Assault, Northern Illinois University, US
21
Abstract
Many philosophers of science and methodologists have argued that the ability to repeat studies and obtain similar results is an essential component of science. A finding is elevated from single observation to scientific evidence when the procedures that were used to obtain it can be reproduced and the finding itself can be replicated. Recent replication attempts show that some high profile results – most notably in psychology, but in many other disciplines as well – cannot be replicated consistently. These replication attempts have generated a considerable amount of controversy, and the issue of whether direct replications have value has, in particular, proven to be contentious. However, much of this discussion has occurred in published commentaries and social media outlets, resulting in a fragmented discourse. To address the need for an integrative summary, we review various types of replication studies and then discuss the most commonly voiced concerns about direct replication. We provide detailed responses to these concerns and consider different statistical ways to evaluate replications. We conclude there are no theoretical or statistical obstacles to making direct replication a routine aspect of psychological science.
22
Rodrigues D, Lopes D, Kumashiro M. The "I" in us, or the eye on us? Regulatory focus, commitment and derogation of an attractive alternative person. PLoS One 2017; 12:e0174350. [PMID: 28319147] [DOI: 10.1371/journal.pone.0174350]
Abstract
When individuals are highly committed to their romantic relationship, they are more likely to engage in pro-relationship maintenance mechanisms. The present research expanded on the notion that commitment redirects self-oriented goals to consider broader relational goals and examined whether commitment interacts with a promotion and prevention focus to activate derogation of attractive alternatives. Three studies used cross-sectional and experimental approaches. Study 1 showed that romantically involved individuals predominantly focused on promotion, but not prevention, reported less initial attraction to an attractive target than single individuals, especially when highly committed to their relationship. Study 2 showed that romantically involved individuals induced in a promotion focus, compared to those in a prevention focus, reported less initial attraction, but only when more committed to their relationship. Regardless of the regulatory focus manipulation, more committed individuals were also less likely to perceive quality among alternative scenarios and to be attentive to alternative others in general. Finally, Study 3 showed that romantically involved individuals induced in a promotion focus and primed with high commitment reported less initial attraction than those primed with low commitment, or than those induced in a prevention focus. Once again, no differences according to commitment prime emerged for the latter. Together, the findings suggest that highly committed, promotion-focused individuals consider broader relationship goals and activate relationship maintenance behaviors such as derogation of attractive alternatives to promote their relationship.
23
Marsman M, Schönbrodt FD, Morey RD, Yao Y, Gelman A, Wagenmakers EJ. A Bayesian bird's eye view of 'Replications of important results in social psychology'. R Soc Open Sci 2017; 4:160426. [PMID: 28280547] [PMCID: PMC5319313] [DOI: 10.1098/rsos.160426]
Abstract
We applied three Bayesian methods to reanalyse the preregistered contributions to the Social Psychology special issue 'Replications of Important Results in Social Psychology' (Nosek & Lakens, 2014 Registered reports: a method to increase the credibility of published results. Soc. Psychol. 45, 137-141. (doi:10.1027/1864-9335/a000192)). First, individual-experiment Bayesian parameter estimation revealed that for directed effect size measures, only three out of 44 central 95% credible intervals did not overlap with zero and fell in the expected direction. For undirected effect size measures, only four out of 59 credible intervals contained values greater than [Formula: see text] (10% of variance explained) and only 19 intervals contained values larger than [Formula: see text]. Second, a Bayesian random-effects meta-analysis for all 38 t-tests showed that only one out of the 38 hierarchically estimated credible intervals did not overlap with zero and fell in the expected direction. Third, a Bayes factor hypothesis test was used to quantify the evidence for the null hypothesis against a default one-sided alternative. Only seven out of 60 Bayes factors indicated non-anecdotal support in favour of the alternative hypothesis ([Formula: see text]), whereas 51 Bayes factors indicated at least some support for the null hypothesis. We hope that future analyses of replication success will embrace a more inclusive statistical approach by adopting a wider range of complementary techniques.
Affiliation(s)
- Maarten Marsman: Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Felix D. Schönbrodt: Department of Psychology, Ludwig-Maximilians-Universität München, München, Germany
- Yuling Yao: Department of Statistics, Columbia University, New York, NY, USA
- Andrew Gelman: Department of Statistics, Columbia University, New York, NY, USA
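The Bayes factors tallied in this abstract quantify the relative evidence for H0 versus H1. The paper uses default Bayesian t-tests; as a rough stand-in, the BIC approximation of Wagenmakers (2007) turns an ordinary t statistic into an approximate BF01 in one line (the thresholds in the comments follow the usual conventions, not this paper's analysis):

```python
import math

def bf01_from_t(t, n_total, df):
    """Approximate Bayes factor for the null (BF01) from a t statistic,
    via the BIC approximation: BF01 ~= sqrt(n) * (1 + t^2/df)^(-n/2).

    BF01 > 3 is the conventional 'more than anecdotal' support for H0;
    BF01 < 1/3 is the mirror-image support for H1.
    """
    return math.sqrt(n_total) * (1.0 + t * t / df) ** (-n_total / 2.0)

# A small t from a modest sample mildly supports the null...
weak_effect = bf01_from_t(t=0.8, n_total=40, df=38)
# ...while a large t from the same sample clearly favours the alternative.
strong_effect = bf01_from_t(t=4.5, n_total=40, df=38)
```

Note the asymmetry with NHST: a nonsignificant t does not merely "fail to reject" here, it can actively accumulate evidence for H0, which is how 51 of the 60 reanalysed tests came to support the null.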
24
Affiliation(s)
- Eli J. Finkel: Department of Psychology and the Kellogg School of Management, Northwestern University