1
Zettersten M, Cox C, Bergmann C, Tsui ASM, Soderstrom M, Mayor J, Lundwall RA, Lewis M, Kosie JE, Kartushina N, Fusaroli R, Frank MC, Byers-Heinlein K, Black AK, Mathur MB. Evidence for Infant-directed Speech Preference Is Consistent Across Large-scale, Multi-site Replication and Meta-analysis. Open Mind (Camb) 2024;8:439-461. PMID: 38665547; PMCID: PMC11045035; DOI: 10.1162/opmi_a_00134. Received 2023-10-17; accepted 2024-02-19.
Abstract
There is substantial evidence that infants prefer infant-directed speech (IDS) to adult-directed speech (ADS). The strongest evidence for this claim has come from two large-scale investigations: i) a community-augmented meta-analysis of published behavioral studies and ii) a large-scale multi-lab replication study. In this paper, we aim to improve our understanding of the IDS preference and its boundary conditions by combining and comparing these two data sources across key population and design characteristics of the underlying studies. Our analyses reveal that both the meta-analysis and multi-lab replication show moderate effect sizes (d ≈ 0.35 for each estimate) and that both of these effects persist when relevant study-level moderators are added to the models (i.e., experimental methods, infant ages, and native languages). However, while the overall effect size estimates were similar, the two sources diverged in the effects of key moderators: both infant age and experimental method predicted IDS preference in the multi-lab replication study, but showed no effect in the meta-analysis. These results demonstrate that the IDS preference generalizes across a variety of experimental conditions and sampling characteristics, while simultaneously identifying key differences in the empirical picture offered by each source individually and pinpointing areas where substantial uncertainty remains about the influence of theoretically central moderators on IDS preference. Overall, our results show how meta-analyses and multi-lab replications can be used in tandem to understand the robustness and generalizability of developmental phenomena.
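Both evidence sources compared here rest on random-effects aggregation of study-level effect sizes. As a hedged illustration (not the authors' code, and with invented effect sizes), a minimal DerSimonian-Laird random-effects pooling, the kind of model behind estimates like the d ≈ 0.35 reported above, looks like:

```python
# Illustrative DerSimonian-Laird random-effects meta-analysis.
# The effect sizes and variances below are made up for the demo.
import numpy as np

def dersimonian_laird(yi, vi):
    """Pool effect sizes yi with sampling variances vi via the
    DerSimonian-Laird estimate of between-study variance tau^2."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                                   # fixed-effect weights
    mu_fe = np.sum(w * yi) / np.sum(w)             # fixed-effect mean
    q = np.sum(w * (yi - mu_fe) ** 2)              # Cochran's Q
    df = len(yi) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1.0 / (vi + tau2)                       # random-effects weights
    mu_re = np.sum(w_re * yi) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se_re, tau2

mu, se, tau2 = dersimonian_laird([0.5, 0.2, 0.4, 0.3], [0.04, 0.02, 0.05, 0.03])
print(f"pooled d = {mu:.3f} (SE = {se:.3f}), tau^2 = {tau2:.3f}")
```

With these toy inputs the studies are homogeneous (Q below its degrees of freedom), so tau^2 truncates to zero and the random-effects mean coincides with the fixed-effect mean.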
Affiliation(s)
- Christopher Cox
- Department of Linguistics, Cognitive Science and Semiotics, School of Communication and Culture, Aarhus University; Interacting Minds Center, School of Culture and Society, Aarhus University
- Julien Mayor
- Department of Linguistics and Scandinavian Studies, University of Oslo
- Molly Lewis
- Department of Psychology/Social and Decision Sciences, Carnegie Mellon University
- Riccardo Fusaroli
- Department of Linguistics, Cognitive Science and Semiotics, School of Communication and Culture, Aarhus University; Interacting Minds Center, School of Culture and Society, Aarhus University
- Alexis K. Black
- School of Audiology and Speech Sciences, University of British Columbia
2
Schäfer SK, Lüder CC, Porcheret K, Hu X, Margraf J, Michael T, Holmes EA, Werner GG, Wilhelm I, Woud ML, Zeng S, Friesen E, Haim-Nachum S, Lass-Hennemann J, Lieb K, Kunzler AM, Wirth BE, Sopp MR. To sleep or not to sleep, that is the question: A systematic review and meta-analysis on the effect of post-trauma sleep on intrusive memories of analog trauma. Behav Res Ther 2023;167:104359. PMID: 37422952; DOI: 10.1016/j.brat.2023.104359. Received 2022-12-23; revised 2023-06-05; accepted 2023-06-18.
Abstract
Distressing intrusive memories of a traumatic event are one of the hallmark symptoms of posttraumatic stress disorder. It is therefore crucial to identify early interventions that prevent the occurrence of intrusive memories. Both sleep and sleep deprivation have been discussed as such interventions, yet previous studies have yielded contradictory effects. Our systematic review evaluates the existing evidence by means of traditional and individual participant data (IPD) meta-analyses to overcome the power limitations of sleep research. Until May 16th, 2022, six databases were searched for experimental analog studies examining the effect of post-trauma sleep versus wakefulness on intrusive memories. Nine studies were included in our traditional meta-analysis (eight in the IPD meta-analysis). Our analysis provided evidence for a small effect favoring sleep over wakefulness, log-ROM = 0.25, p < .001, suggesting that sleep is associated with a lower number of intrusions but is unrelated to whether any intrusions occur at all. We found no evidence for an effect of sleep on intrusion distress. Heterogeneity was low, and the certainty of evidence for our primary analysis was moderate. Our findings suggest that post-trauma sleep has the potential to be protective by reducing intrusion frequency. More research is needed to determine its impact following real-world trauma and its potential clinical significance.
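The review's primary effect size, log-ROM, is a log ratio of means. As an illustrative sketch with made-up group summaries (not data from the review), the usual delta-method computation is:

```python
# Hedged sketch of the log ratio of means (log-ROM / response ratio)
# and its approximate sampling variance; inputs are invented.
from math import log, sqrt

def log_rom(m1, sd1, n1, m2, sd2, n2):
    """Log ratio of means comparing group 1 to group 2, with the
    delta-method approximation to its sampling variance."""
    if m1 <= 0 or m2 <= 0:
        raise ValueError("means must be positive for a ratio of means")
    yi = log(m1 / m2)
    vi = sd1**2 / (n1 * m1**2) + sd2**2 / (n2 * m2**2)
    return yi, vi

# Example: a wake group reporting more intrusions than a sleep group
yi, vi = log_rom(m1=5.0, sd1=2.0, n1=30, m2=4.0, sd2=2.0, n2=30)
print(f"log-ROM = {yi:.3f}, SE = {sqrt(vi):.3f}")
```

A positive log-ROM here means group 1 reports more intrusions; exponentiating it recovers the ratio of means (1.25 in this toy example).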
Affiliation(s)
- Sarah K Schäfer
- Division of Clinical Psychology and Psychotherapy, Department of Psychology, Saarland University, Saarbrücken, Germany; Leibniz Institute for Resilience Research (LIR), Mainz, Germany; Technische Universität Braunschweig, Department of Clinical Psychology, Psychotherapy and Psychodiagnostics, Brunswick, Germany.
- Charina C Lüder
- Division of Clinical Psychology and Psychotherapy, Department of Psychology, Saarland University, Saarbrücken, Germany.
- Kate Porcheret
- Norwegian Center for Violence and Traumatic Stress Studies, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway.
- Xiaoqing Hu
- Department of Psychology, The University of Hong Kong, Jockey Club Tower, Centennial Campus, Hong Kong, China; The State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Jockey Club Tower, Centennial Campus, Hong Kong, China; HKU-Shenzhen Institute of Research and Innovation, Shenzhen, China.
- Jürgen Margraf
- Mental Health Research and Treatment Center, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany; DZPG (German Center for Mental Health), Germany.
- Tanja Michael
- Division of Clinical Psychology and Psychotherapy, Department of Psychology, Saarland University, Saarbrücken, Germany.
- Emily A Holmes
- Department of Clinical Neuroscience, Division of Psychology, Karolinska Institutet, Stockholm, Sweden; Department of Psychology, Uppsala University, Uppsala, Sweden.
- Gabriela G Werner
- Department of Clinical Psychology & Psychotherapy, LMU Munich, Munich, Germany.
- Ines Wilhelm
- Division of Experimental Psychopathology and Psychotherapy, Department of Psychology, University of Zurich, Zurich, Switzerland; Department of Psychiatry, Psychotherapy and Psychosomatics, Psychiatric Hospital, University of Zurich, Zurich, Switzerland; Department of Psychiatry and Psychotherapy, University of Luebeck, Luebeck, Germany.
- Marcella L Woud
- Mental Health Research and Treatment Center, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany.
- Shengzi Zeng
- Department of Psychology, The University of Hong Kong, Jockey Club Tower, Centennial Campus, Hong Kong, China.
- Edith Friesen
- Division of Clinical Psychology and Psychotherapy, Department of Psychology, Saarland University, Saarbrücken, Germany.
- Shilat Haim-Nachum
- Department of Psychiatry, Columbia University Irving Medical Center, New York, NY, USA.
- Johanna Lass-Hennemann
- Division of Clinical Psychology and Psychotherapy, Department of Psychology, Saarland University, Saarbrücken, Germany.
- Klaus Lieb
- Leibniz Institute for Resilience Research (LIR), Mainz, Germany; Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany.
- Angela M Kunzler
- Leibniz Institute for Resilience Research (LIR), Mainz, Germany; Institute for Evidence in Medicine, Medical Center & Faculty of Medicine, University of Freiburg, Freiburg, Germany.
- Benedikt E Wirth
- Division of Cognition & Action, Department of Psychology, Saarland University, Saarbrücken, Germany; Department of Cognitive Assistants, German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany.
- M Roxanne Sopp
- Division of Clinical Psychology and Psychotherapy, Department of Psychology, Saarland University, Saarbrücken, Germany.
3
Wiechert S, Loewy L, Wessel I, Fawcett JM, Ben-Shakhar G, Pertzov Y, Verschuere B. Suppression-induced forgetting: a pre-registered replication of the think/no-think paradigm. Memory 2023;31:989-1002. PMID: 37165713; DOI: 10.1080/09658211.2023.2208791. Received 2021-06-28; accepted 2023-04-25.
Abstract
Post-traumatic stress disorder is characterised by recurring memories of a traumatic experience despite deliberate attempts to forget (i.e., suppression). The Think/No-Think (TNT) task has been widely used in the laboratory to study suppression-induced forgetting. During the task, participants learn a series of cue-target word pairs. Subsequently, they are presented with a subset of the cue words and are instructed to think (respond items) or not think about the corresponding target (suppression items). Baseline items are not shown during this phase. Successful suppression-induced forgetting is indicated by reduced recall of suppression items relative to baseline items in recall tests using either the same or different cues than originally studied (i.e., same- and independent-probe tests, respectively). The current replication was a pre-registered collaborative effort to evaluate an online, experimenter-present version of the paradigm in 150 English-speaking healthy individuals (89 females; mean age = 31.14 years, SD = 7.73). Overall, we did not replicate the suppression-induced forgetting effect (same-probe: BF01 = 7.84, d = 0.03 [95% CI: -0.13, 0.20]; independent-probe: BF01 = 5.71, d = 0.06 [95% CI: -0.12, 0.24]). These null results should be considered in light of our online implementation of the paradigm. Nevertheless, our findings call into question the robustness of suppression-induced forgetting.
Affiliation(s)
- Sera Wiechert
- Department of Clinical Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Leonie Loewy
- Department of Clinical Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Ineke Wessel
- Department of Clinical Psychology and Experimental Psychopathology, University of Groningen, Groningen, The Netherlands
- Jonathan M Fawcett
- Department of Psychology, Memorial University of Newfoundland, St. John's, Canada
- Gershon Ben-Shakhar
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yoni Pertzov
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Bruno Verschuere
- Department of Clinical Psychology, University of Amsterdam, Amsterdam, The Netherlands
4
Mathur MB, Fox MP. Toward Open and Reproducible Epidemiology. Am J Epidemiol 2023;192:658-664. PMID: 36627249; PMCID: PMC10089067; DOI: 10.1093/aje/kwad007. Received 2022-08-25; revised 2022-12-08; accepted 2023-01-09.
Abstract
Starting in the 2010s, researchers in the experimental social sciences rapidly began to adopt increasingly open and reproducible scientific practices. These practices include publicly sharing deidentified data when possible, sharing analytical code, and preregistering study protocols. Empirical evidence from the social sciences suggests such practices are feasible, can improve analytical reproducibility, and can reduce selective reporting. In academic epidemiology, adoption of open-science practices has been slower than in the social sciences (with some notable exceptions, such as registering clinical trials). Epidemiologic studies are often large, complex, conceived after data have already been collected, and difficult to replicate directly by collecting new data. These characteristics make it especially important to ensure their integrity and analytical reproducibility. Open-science practices can also pay immediate dividends to researchers' own work by clarifying scientific reasoning and encouraging well-documented, organized workflows. We consider how established epidemiologists and early-career researchers alike can help midwife a culture of open science in epidemiology through their research practices, mentorship, and editorial activities.
Affiliation(s)
- Maya B Mathur
- Correspondence to Dr. Maya B. Mathur, Quantitative Sciences Unit, 3180 Porter Drive, Palo Alto, CA 94304 (e-mail: )
5
Maier M, VanderWeele TJ, Mathur MB. Using selection models to assess sensitivity to publication bias: A tutorial and call for more routine use. Campbell Syst Rev 2022;18:e1256. PMID: 36909879; PMCID: PMC9247867; DOI: 10.1002/cl2.1256.
Abstract
In meta-analyses, it is critical to assess the extent to which publication bias might have compromised the results. Classical methods based on the funnel plot, including Egger's test and Trim-and-Fill, have become the de facto default methods for doing so, with a large majority of recent meta-analyses in top medical journals (85%) assessing publication bias exclusively with these methods. However, these classical funnel plot methods have important limitations when used as the sole means of assessing publication bias: they essentially assume that the publication process favors large point estimates for small studies and does not affect the largest studies, and they can perform poorly when effects are heterogeneous. In light of these limitations, we recommend that meta-analyses routinely apply other publication bias methods in addition to, or instead of, classical funnel plot methods. To this end, we describe how to use and interpret selection models. These methods make the often more realistic assumption that publication bias favors "statistically significant" results, and they directly accommodate effect heterogeneity. Selection models have been established for decades in the statistics literature and are supported by user-friendly software, yet they remain rarely reported in many disciplines. We use a previously published meta-analysis to demonstrate that selection models can yield insights beyond those provided by funnel plot methods, underscoring the importance of more comprehensive reporting practices for publication bias assessment.
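As an illustrative sketch of the idea (not the tutorial's own code or the software it recommends), a one-step selection model assumes nonsignificant results are published with some relative probability delta and corrects the pooled mean by maximum likelihood; the simulation parameters below are invented:

```python
# Hedged sketch of a one-step (Vevea-Hedges style) selection model,
# assuming publication favors one-sided p < .025. Not a production tool.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate a biased literature: true mean 0.2; significant results always
# published, nonsignificant ones published only 10% of the time.
true_mu, n_studies = 0.2, 2000
se_all = rng.uniform(0.1, 0.4, n_studies)
y_all = rng.normal(true_mu, se_all)
significant = y_all / se_all > norm.ppf(0.975)
published = significant | (rng.random(n_studies) < 0.10)
y, se = y_all[published], se_all[published]

def neg_loglik(theta):
    """Negative log-likelihood of the published estimates under a
    step selection function: weight 1 if significant, delta otherwise."""
    mu, log_delta = theta
    delta = np.exp(log_delta)
    crit = norm.ppf(0.975) * se              # significance threshold on y
    w = np.where(y > crit, 1.0, delta)       # selection weight per study
    p_sig = 1.0 - norm.cdf((crit - mu) / se)
    A = p_sig + delta * (1.0 - p_sig)        # normalizing constant
    return -np.sum(np.log(w) + norm.logpdf(y, mu, se) - np.log(A))

fit = minimize(neg_loglik, x0=[float(np.mean(y)), 0.0], method="Nelder-Mead")
naive_mu, corrected_mu = float(np.mean(y)), float(fit.x[0])
print(f"naive mean: {naive_mu:.3f}, selection-model estimate: {corrected_mu:.3f}")
```

The naive mean of the published studies is inflated well above the true 0.2, while the selection-model estimate pulls it back toward the truth, which is the core behavior the tutorial advocates checking for.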
Affiliation(s)
- Maximilian Maier
- Department of Experimental Psychology, University College London, London, UK
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Maya B. Mathur
- Quantitative Sciences Unit, Department of Pediatrics, Stanford University, Stanford, California, USA
6
Mathur MB, VanderWeele TJ. How to report E-values for meta-analyses: Recommended improvements and additions to the new GRADE approach. Environ Int 2022;160:107032. PMID: 34954645; PMCID: PMC8959014; DOI: 10.1016/j.envint.2021.107032. Received 2021-10-14; accepted 2021-12-03.
Abstract
In a recent concept paper (Verbeek et al., 2021), the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group provides a preliminary proposal to improve its existing guidelines for assessing sensitivity to uncontrolled confounding in meta-analyses of nonrandomized studies. The new proposal centers on reporting the E-value for the meta-analytic mean and on comparing this E-value to a measured "reference confounder" to determine whether residual uncontrolled confounding in the meta-analyzed studies could or could not plausibly explain away the meta-analytic mean. Although we agree that E-value analogs for meta-analyses could be an informative addition to future GRADE guidelines, we suggest improvements to Verbeek et al.'s (2021) specific proposal regarding: (1) their interpretation of comparisons between the E-value and the strength of association of a reference confounder; (2) their characterization of evidence strength in meta-analyses in terms of only the meta-analytic mean; and (3) the possibility of confounding bias that is heterogeneous across studies.
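For context, the point-estimate E-value the proposal centers on has a closed form for a risk ratio: RR + sqrt(RR·(RR − 1)) for RR ≥ 1, inverting protective ratios first. A minimal sketch of that formula, not the GRADE procedure itself:

```python
# Minimal sketch of VanderWeele & Ding's point-estimate E-value for a
# risk ratio; meta-analytic extensions add more machinery than this.
from math import sqrt

def e_value(rr: float) -> float:
    """E-value for a risk ratio: the minimum strength of association
    (on the risk-ratio scale) an unmeasured confounder must have with
    both exposure and outcome to fully explain away the estimate."""
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:                     # apparently protective: invert first
        rr = 1.0 / rr
    return rr + sqrt(rr * (rr - 1.0))

print(e_value(2.0))  # -> 3.414..., i.e., an RR of 2 needs confounding of ~3.41
```

An observed RR of exactly 1 gives an E-value of 1 (no confounding needed), and the E-value grows faster than the RR itself as estimates move away from the null.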
Affiliation(s)
- Maya B Mathur
- Quantitative Sciences Unit and Department of Pediatrics, Stanford University, United States.
7
Lewis M, Mathur MB, VanderWeele TJ, Frank MC. The puzzling relationship between multi-laboratory replications and meta-analyses of the published literature. R Soc Open Sci 2022;9:211499. PMID: 35223059; PMCID: PMC8864345; DOI: 10.1098/rsos.211499. Received 2021-09-15; accepted 2022-01-10.
Abstract
What is the best way to estimate the size of important effects? Should we aggregate across disparate findings using statistical meta-analysis, or instead run large, multi-laboratory replications (MLRs)? A recent paper by Kvarven, Strømland and Johannesson (Kvarven et al. 2020, Nat. Hum. Behav. 4, 423-434; doi:10.1038/s41562-019-0787-z) compared effect size estimates derived from these two methods for 15 different psychological phenomena. The authors reported that, for the same phenomenon, the meta-analytic estimate tended to be about three times larger than the MLR estimate. These results are a specific example of a broader question: what is the relationship between meta-analytic and MLR estimates? Kvarven et al. suggested that their results undermine the value of meta-analysis. By contrast, we argue that both meta-analysis and MLRs are informative, and that the discrepancy between the two estimates that they observed remains largely unexplained. Informed by re-analyses of Kvarven et al.'s data and by other empirical evidence, we discuss possible sources of this discrepancy and argue that understanding the relationship between estimates obtained from these two methods is an important puzzle for future meta-scientific research.
Affiliation(s)
- Molly Lewis
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Michael C. Frank
- Department of Psychology, Stanford University, Palo Alto, CA, USA