1. Deery R, Commins S. Landmark Distance Impacts the Overshadowing Effect in Spatial Learning Using a Virtual Water Maze Task with Healthy Adults. Brain Sci 2023; 13:1287. PMID: 37759887; PMCID: PMC10526441; DOI: 10.3390/brainsci13091287.
Abstract
Cue competition is a key element of many associative theories of learning. Overshadowing, an important aspect of cue competition, is a phenomenon in which learning about a cue is reduced when that cue is accompanied by a second cue. Overshadowing has been observed across many domains, but it has received limited investigation in human spatial learning. This study explored overshadowing using two landmarks/cues (at different distances from the goal) in a virtual water maze task with young, healthy adult participants. Experiment 1 examined whether the two cues were equally salient; results indicated that both gained equal control over performance. Experiment 2 then examined overshadowing using the two cues from experiment 1. Results indicated that overshadowing occurred during spatial learning and that the near cue controlled searching significantly more than the far cue. Indeed, the far cue appeared to be ignored entirely, suggesting that participants adopted the learning strategy requiring the least effort. Evidence supporting an associative account of human spatial navigation and the influence of proximal cues is discussed.
Affiliation(s)
- Seán Commins
- Psychology Department, Maynooth University, W23 F2K8 Kildare, Ireland
2. Jaljuli I, Kafkafi N, Giladi E, Golani I, Gozes I, Chesler EJ, Bogue MA, Benjamini Y. A multi-lab experimental assessment reveals that replicability can be improved by using empirical estimates of genotype-by-lab interaction. PLoS Biol 2023; 21:e3002082. PMID: 37126512; PMCID: PMC10174519; DOI: 10.1371/journal.pbio.3002082.
Abstract
The utility of mouse and rat studies critically depends on their replicability in other laboratories. A widely advocated approach to improving replicability is the rigorous control of predefined animal or experimental conditions, known as standardization. However, this approach limits the generalizability of the findings to the standardized conditions and is a potential cause of, rather than a solution to, what has been called a replicability crisis. Alternative strategies include estimating the heterogeneity of effects across laboratories, either through designs that vary testing conditions or by direct statistical analysis of laboratory variation. We previously evaluated our statistical approach for estimating the interlaboratory replicability of a single-laboratory discovery. Those results, however, came from a well-coordinated multi-lab phenotyping study and did not extend to the more realistic setting in which laboratories operate independently of each other. Here, we tested our statistical approach in a realistic prospective experiment in mice, using 152 results from 5 independently published studies deposited in the Mouse Phenome Database (MPD). In independent replication experiments at 3 laboratories, we found that 53 of the results were replicable, so the other 99 were considered non-replicable. Of the 99 non-replicable results, 59 were statistically significant (at 0.05) in their original single-lab analysis, putting the probability that a single-lab statistical discovery would be made even though the result is non-replicable at 59.6%. We then introduced the dimensionless "Genotype-by-Laboratory" (GxL) factor: the ratio between the standard deviation of the GxL interaction and the standard deviation within groups. Using the GxL factor reduced the number of single-lab statistical discoveries and correspondingly reduced the probability that a non-replicable result would be discovered in a single lab to 12.1%. Such an adjustment naturally reduces the power to make replicable discoveries, but this reduction was small (from 87% to 66%), indicating the modest price paid for the large improvement in replicability. The tools and data needed for the above GxL adjustment are publicly available at the MPD and will become increasingly useful as the range of assays and testing conditions in this resource grows.
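The GxL factor defined above lends itself to a short worked sketch. The Python fragment below is a minimal, hypothetical illustration, not the authors' published implementation (which handles degrees of freedom and mixed-model estimation properly): it shows how a single-lab two-group comparison could be made more conservative by adding a GxL variance term, scaled by an externally estimated gamma, to the standard error. The function name and the normal approximation for the p-value are our assumptions.

```python
import numpy as np
from scipy import stats

def gxl_adjusted_test(a, b, gamma):
    """Hypothetical sketch of a GxL-adjusted two-group comparison.

    gamma is the dimensionless GxL factor: the ratio of the
    genotype-by-lab interaction standard deviation to the
    within-group standard deviation, estimated from multi-lab data
    (e.g., deposited in the Mouse Phenome Database).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled within-group variance of the single-lab data.
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    # Naive SE: sampling noise only.
    se_naive = np.sqrt(sp2 * (1 / na + 1 / nb))
    # Adjusted SE: each group mean additionally carries a random
    # genotype-by-lab effect with variance (gamma * sigma_within)^2.
    se_adj = np.sqrt(sp2 * (1 / na + 1 / nb) + 2 * gamma**2 * sp2)
    z = (a.mean() - b.mean()) / se_adj
    p = 2 * stats.norm.sf(abs(z))  # normal approximation, for brevity
    return z, p, se_naive, se_adj
```

Because se_adj is never smaller than se_naive, borderline single-lab discoveries are the first to lose significance under the adjustment; that is the mechanism behind the reported drop from 59.6% to 12.1% at a modest cost in power.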
Affiliation(s)
- Iman Jaljuli
- Department of Statistics and Operations Research, Tel-Aviv University, Tel-Aviv, Israel
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, New York, United States of America
- Neri Kafkafi
- Department of Statistics and Operations Research, Tel-Aviv University, Tel-Aviv, Israel
- School of Zoology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, Israel
- Eliezer Giladi
- The Elton Laboratory for Molecular Neuroendocrinology, Department of Human Molecular Genetics and Biochemistry, Sackler Faculty of Medicine, Sagol School of Neuroscience and Adams Super Center for Brain Studies, Tel Aviv University, Tel Aviv, Israel
- Ilan Golani
- School of Zoology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, Israel
- The Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Illana Gozes
- The Elton Laboratory for Molecular Neuroendocrinology, Department of Human Molecular Genetics and Biochemistry, Sackler Faculty of Medicine, Sagol School of Neuroscience and Adams Super Center for Brain Studies, Tel Aviv University, Tel Aviv, Israel
- The Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Elissa J Chesler
- The Jackson Laboratory, Bar Harbor, Maine, United States of America
- Molly A Bogue
- The Jackson Laboratory, Bar Harbor, Maine, United States of America
- Yoav Benjamini
- Department of Statistics and Operations Research, Tel-Aviv University, Tel-Aviv, Israel
- The Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
3. Theoretical false positive psychology. Psychon Bull Rev 2022; 29:1751-1775. PMID: 35501547; DOI: 10.3758/s13423-022-02098-w.
Abstract
A fundamental goal of scientific research is to generate true positives (i.e., authentic discoveries). Statistically, a true positive is a significant finding for which the underlying effect size (δ) is greater than 0, whereas a false positive is a significant finding for which δ equals 0. However, the null hypothesis of no difference (δ = 0) may never be strictly true, because innumerable nuisance factors can introduce small effects for theoretically uninteresting reasons. If δ never equals zero, then with sufficient power, every experiment would yield a significant result. Yet running studies with higher power by increasing sample size (N) is one of the most widely agreed-upon reforms to increase replicability. Moreover, and perhaps not surprisingly, the idea that psychology should attach greater value to small effect sizes is gaining currency. Increasing N without limit makes sense for purely measurement-focused research, where the magnitude of δ itself is of interest, but it makes less sense for theory-focused research, where the truth status of the theory under investigation is of interest. Increasing power to enhance replicability will increase true positives at the level of the effect size (statistical true positives) while increasing false positives at the level of theory (theoretical false positives). With too much power, the cumulative foundation of psychological science would consist largely of nuisance effects masquerading as theoretically important discoveries. Positive predictive value at the level of theory is maximized by using an optimal N, one that is neither too small nor too large.
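The quantitative core of this argument is easy to verify. The Python sketch below is a toy calculation (the nuisance effect size δ = 0.05 and the sample sizes are purely illustrative): it computes the approximate power of a two-sided, two-sample test and shows that even a theoretically uninteresting effect becomes a near-certain "discovery" once N is large enough.

```python
from scipy.stats import norm

def two_sample_power(delta, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a
    standardized effect size delta with n_per_group subjects per arm."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = delta * (n_per_group / 2) ** 0.5  # noncentrality of the test statistic
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# A nuisance-level effect (delta = 0.05) across increasing sample sizes:
for n in (100, 1_000, 100_000):
    print(f"n = {n:>7}: power = {two_sample_power(0.05, n):.3f}")
# n =     100: power ~ 0.064  (barely above the nominal alpha)
# n =    1000: power ~ 0.201
# n =  100000: power ~ 1.000  (a statistical true positive, but a
#                              theoretical false positive)
```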
4. Marmurek HHC, Rusyn R, Zgardau A, Zgardau AM. Verbal overshadowing at an immediate task-test delay is independent of video-task delay. Journal of Cognitive Psychology 2021. DOI: 10.1080/20445911.2021.1981916.
Affiliation(s)
- Richard Rusyn
- Department of Psychology, University of Guelph, Guelph, Canada
- Alina Zgardau
- Department of Psychology, University of Guelph, Guelph, Canada
5. Protzko J, Schooler JW. No relationship between researcher impact and replication effect: an analysis of five studies with 100 replications. PeerJ 2020; 8:e8014. PMID: 32231868; PMCID: PMC7100597; DOI: 10.7717/peerj.8014.
Abstract
What explains the failure of teams of researchers to replicate established 'canonical' findings? One suggestion that has been put forward, but left largely untested, is that the researchers who fail to replicate prior studies are of low 'expertise and diligence' and lack the skill necessary to recreate the conditions of the original experiment. Here we examine the replication success of 100 scientists of differing 'expertise and diligence' who attempted to replicate five different studies. Using a bibliometric tool (the h-index) as our indicator of researcher 'expertise and diligence', we examined whether it predicted replication success. Although there was substantial variability in replication success and in the h-index of the investigators, we found no relationship between these variables. The present results provide no evidence for the hypothesis that systematic replications fail because of low 'expertise and diligence' among replicators.
Affiliation(s)
- John Protzko
- Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States of America
- Jonathan W. Schooler
- Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States of America
6. Altoè G, Bertoldo G, Zandonella Callegher C, Toffalini E, Calcagnì A, Finos L, Pastore M. Enhancing Statistical Inference in Psychological Research via Prospective and Retrospective Design Analysis. Front Psychol 2020; 10:2893. PMID: 31993004; PMCID: PMC6970975; DOI: 10.3389/fpsyg.2019.02893.
Abstract
In the past two decades, psychological science has experienced an unprecedented replicability crisis, which has uncovered several issues. Among others, the use and misuse of statistical inference play a key role in this crisis. Indeed, statistical inference is too often viewed as an isolated procedure limited to the analysis of data that have already been collected. Instead, statistical reasoning is necessary both at the planning stage and when interpreting the results of a research project. Based on these considerations, we build on and further develop an idea proposed by Gelman and Carlin (2014), termed "prospective and retrospective design analysis." Rather than focusing only on the statistical significance of a result and on the classical control of Type I and Type II errors, a comprehensive design analysis involves reasoning about what can be considered a plausible effect size. Furthermore, it introduces two relevant inferential risks: the exaggeration ratio, or Type M error (i.e., the predictable average overestimation of an effect that emerges as statistically significant), and the sign error, or Type S error (i.e., the risk that a statistically significant effect is estimated in the wrong direction). Another important aspect of design analysis is that it can be usefully carried out both in the planning phase of a study and for the evaluation of studies that have already been conducted, thus increasing researchers' awareness during all phases of a research project. To illustrate the benefits of a design analysis to the widest possible audience, we use a familiar example in psychology where the researcher is interested in analyzing the differences between two independent groups, considering Cohen's d as an effect size measure. We examine the case in which the plausible effect size is formalized as a single value, and we propose a method in which uncertainty concerning the magnitude of the effect is formalized via probability distributions. Through several examples and an application to a real case study, we show that, even though a design analysis requires significant effort, it has the potential to contribute to planning more robust and replicable studies. Finally, future developments in the Bayesian framework are discussed.
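For readers who want to experiment with the idea, here is a minimal Monte Carlo sketch of a design analysis in the spirit of Gelman and Carlin's (2014) retrodesign, which the authors build on. The function name, the simulation-based approach, and the example's plausible effect size and group size are our illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

def design_analysis(true_d, se, alpha=0.05, n_sims=100_000, seed=0):
    """Monte Carlo design analysis: power, Type S (sign) error rate,
    and Type M (exaggeration) ratio, for an effect estimate that is
    approximately Normal(true_d, se)."""
    rng = np.random.default_rng(seed)
    z_crit = norm.ppf(1 - alpha / 2)
    est = rng.normal(true_d, se, n_sims)   # simulated effect estimates
    sig = np.abs(est) > z_crit * se        # which replicates reach significance
    power = sig.mean()
    type_s = np.mean(np.sign(est[sig]) != np.sign(true_d))  # wrong-sign discoveries
    type_m = np.mean(np.abs(est[sig])) / abs(true_d)        # mean exaggeration
    return power, type_s, type_m

# Example: a plausible Cohen's d of 0.2 with 30 participants per group,
# so se ~ sqrt(1/30 + 1/30) ~ 0.258.
power, type_s, type_m = design_analysis(true_d=0.2, se=(2 / 30) ** 0.5)
print(f"power = {power:.2f}, Type S = {type_s:.3f}, Type M = {type_m:.1f}")
# Roughly: power ~ 0.12, Type S ~ 0.03, Type M ~ 3 -- significant results
# from such a study overestimate the true effect about threefold.
```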
Affiliation(s)
- Gianmarco Altoè
- Department of Developmental Psychology and Socialisation, University of Padova, Padova, Italy
- Giulia Bertoldo
- Department of Developmental Psychology and Socialisation, University of Padova, Padova, Italy
- Enrico Toffalini
- Department of General Psychology, University of Padova, Padova, Italy
- Antonio Calcagnì
- Department of Developmental Psychology and Socialisation, University of Padova, Padova, Italy
- Livio Finos
- Department of Developmental Psychology and Socialisation, University of Padova, Padova, Italy
- Massimiliano Pastore
- Department of Developmental Psychology and Socialisation, University of Padova, Padova, Italy
7. Yeager DS, Krosnick JA, Visser PS, Holbrook AL, Tahk AM. Moderation of classic social psychological effects by demographics in the U.S. adult population: New opportunities for theoretical advancement. J Pers Soc Psychol 2019; 117:e84-e99. PMID: 31464480; PMCID: PMC6918461; DOI: 10.1037/pspa0000171.
Abstract
For decades, social psychologists have collected data primarily from college undergraduates and, more recently, from haphazard samples of adults. Yet researchers have routinely presumed that the treatment effects thus observed characterize "people" in general. Tests of seven highly cited social psychological phenomena (two involving opinion change resulting from social influence and five involving the use of heuristics in social judgments), using data collected from randomly sampled, representative groups of American adults, documented the generalizability of the six phenomena that had previously been replicated with undergraduate samples. The one phenomenon (a cross-over interaction revealing an ease-of-retrieval effect) that had not previously been replicated in undergraduate samples was also not observed here. However, the observed effect sizes for the replicated phenomena were notably smaller on average than the meta-analytic effect sizes documented by past studies of college students. Furthermore, the phenomena were strongest among participants with the demographic characteristics of the college students who typically provided data for past published studies, even after correcting for publication bias in past studies using a new method, called the behaviorally informed file-drawer adjustment. The six successful replications suggest that phenomena identified in traditional laboratory research also appear as expected in representative samples, but more weakly, so observed effect sizes should be generalized with caution. The evidence of demographic moderators suggests interesting opportunities for future research to better understand the mechanisms of the effects and their limiting conditions.
Affiliation(s)
- Jon A Krosnick
- Department of Communications, Political Science, and Psychology
8. Laraway S, Snycerski S, Pradhan S, Huitema BE. An Overview of Scientific Reproducibility: Consideration of Relevant Issues for Behavior Science/Analysis. Perspect Behav Sci 2019; 42:33-57. PMID: 31976420; PMCID: PMC6701706; DOI: 10.1007/s40614-019-00193-3.
Abstract
For over a decade, failures to reproduce findings in several disciplines, including the biomedical, behavioral, and social sciences, have led some authors to claim that there is a so-called "replication (or reproducibility) crisis" in those disciplines. The current article examines: (a) various aspects of the reproducibility of scientific studies, including definitions of reproducibility; (b) published concerns about reproducibility in the scientific literature and popular press; (c) variables involved in assessing the success of attempts to reproduce a study; (d) suggested factors responsible for reproducibility failures; (e) types of validity of experimental studies and threats to validity as they relate to reproducibility; and (f) evidence for threats to reproducibility in the behavior science/analysis literature. Suggestions for improving the reproducibility of studies in behavior science and analysis are described throughout.
Affiliation(s)
- Sean Laraway
- Department of Psychology, San José State University, San José, CA 95192-0120 USA
- Susan Snycerski
- Department of Psychology, San José State University, San José, CA 95192-0120 USA
9. Ondersma SJ, Martino S, Svikis DS, Yonkers KA. Commentary on Kim et al. (2017): Staying focused on non-treatment seekers. Addiction 2017; 112:828-829. PMID: 28378329; PMCID: PMC6552680; DOI: 10.1111/add.13736.
Abstract
Negative results from trials of screening, brief intervention, and referral to treatment (SBIRT) continue to accumulate. These findings should accelerate, rather than suppress, research on how best to identify and intervene proactively with non-treatment-seeking samples.
Affiliation(s)
- Steven J. Ondersma
- Wayne State University, Merrill-Palmer Skillman Institute and Department of Psychiatry and Behavioral Neurosciences, Detroit, MI, USA
- Steve Martino
- Yale University School of Medicine, Department of Psychiatry; VA Connecticut Health System West Haven Campus, New Haven, CT, USA
- Dace S. Svikis
- Virginia Commonwealth University, Department of Psychology and Institute for Women's Health, Richmond, VA, USA
- Kimberly A. Yonkers
- Yale University School of Medicine, Department of Psychiatry, New Haven, CT, USA
10. Yeager DS, Romero C, Paunesku D, Hulleman CS, Schneider B, Hinojosa C, Lee HY, O'Brien J, Flint K, Roberts A, Trott J, Greene D, Walton GM, Dweck CS. Using Design Thinking to Improve Psychological Interventions: The Case of the Growth Mindset During the Transition to High School. J Educ Psychol 2016; 108:374-391. PMID: 27524832; PMCID: PMC4981081; DOI: 10.1037/edu0000098.
Abstract
There are many promising psychological interventions on the horizon, but there is no clear methodology for preparing them to be scaled up. Drawing on design thinking, the present research formalizes a methodology for redesigning and tailoring initial interventions. We test the methodology using the case of fixed versus growth mindsets during the transition to high school. Qualitative inquiry and rapid, iterative, randomized "A/B" experiments were conducted with ~3,000 participants to inform intervention revisions for this population. Next, two experimental evaluations showed that the revised growth mindset intervention was an improvement over previous versions in terms of short-term proxy outcomes (Study 1, N=7,501), and that it improved 9th grade core-course GPA and reduced D/F GPAs for lower-achieving students when delivered via the Internet under routine conditions to ~95% of students at 10 schools (Study 2, N=3,676). Although the intervention could be improved still further, the current research provides a model for how to improve and scale interventions that begin to address pressing educational problems. It also provides insight into how to teach a growth mindset more effectively.