1
Kristensen MWP, Biuk B, Nielsen J, Bojesen KB, Nielsen MØ. Glutamate, GABA and NAA in treatment-resistant schizophrenia: A systematic review of the effect of clozapine and group differences between clozapine-responders and non-responders. Behav Brain Res 2025; 479:115338. PMID: 39566584; DOI: 10.1016/j.bbr.2024.115338.
Abstract
Treatment resistance in schizophrenia is a major obstacle to improving patient outcomes, especially in patients who do not benefit from clozapine. Recent research suggests that glutamatergic and GABAergic abnormalities may be present in treatment-resistant patients, and preclinical work indicates that clozapine affects the GABAergic system. Moreover, clozapine may have a neuroprotective role. To investigate these issues, we conducted a systematic review of the relationship between clozapine and in vivo brain levels of gamma-aminobutyric acid (GABA), glutamate (Glu), and N-acetylaspartate (NAA) in patients with treatment-resistant and ultra-treatment-resistant schizophrenia (TRS and UTRS). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we included three longitudinal and six cross-sectional studies that used proton magnetic resonance spectroscopy (¹H-MRS) to measure brain metabolite levels in clozapine-treated patients. Findings were limited by the small number of studies, and definite conclusions cannot be drawn, but the available studies suggest that clozapine reduces glutamate levels in striatal but not cortical areas, whereas glutamatergic metabolites and GABA levels may be increased in the anterior cingulate cortex (ACC) in the combined group of TRS and UTRS patients. Clozapine may also increase NAA in cortical areas. Importantly, this review highlights the need for further clinical studies investigating the effect of clozapine on brain levels of glutamate, GABA, and NAA, as well as metabolite differences between patients with UTRS and TRS.
Affiliation(s)
- Milo Wolfgang Pilgaard Kristensen
- Mental Health Centre Glostrup, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Blegdamsvej 3B, Copenhagen 2200, Denmark.
- Bahast Biuk
- Mental Health Centre Glostrup, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Blegdamsvej 3B, Copenhagen 2200, Denmark
- Jimmi Nielsen
- Mental Health Centre Glostrup, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Blegdamsvej 3B, Copenhagen 2200, Denmark
- Kirsten Borup Bojesen
- Center for Neuropsychiatric Schizophrenia Research (CNSR), Mental Health Center Glostrup, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark
- Mette Ødegaard Nielsen
- Mental Health Centre Glostrup, Copenhagen University Hospital - Mental Health Services CPH, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Blegdamsvej 3B, Copenhagen 2200, Denmark
2
Lepauvre A, Hirschhorn R, Bendtz K, Mudrik L, Melloni L. A standardized framework to test event-based experiments. Behav Res Methods 2024; 56:8852-8868. PMID: 39285141; PMCID: PMC11525435; DOI: 10.3758/s13428-024-02508-y.
Abstract
The replication crisis in experimental psychology and neuroscience has received much attention recently. This has led to wide acceptance of measures to improve scientific practices, such as preregistration and registered reports. Less effort has been devoted to performing and reporting systematic tests of the functioning of the experimental setup itself. Yet inaccuracies in the performance of the experimental setup may affect the results of a study, lead to replication failures, and, importantly, impede the ability to integrate results across studies. Prompted by challenges we experienced when deploying studies across six laboratories collecting electroencephalography (EEG)/magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and intracranial EEG (iEEG) data, here we describe a framework for both testing and reporting the performance of the experimental setup. In addition, we surveyed 100 researchers to provide a snapshot of current practices and community standards for testing the setups of published experiments. Most researchers reported testing their experimental setups; almost none, however, published the tests performed or their results. Tests were diverse, targeting different aspects of the setup. Through simulations, we demonstrate how even slight inaccuracies can affect final results. We end with a standardized, open-source, step-by-step protocol for testing (visual) event-related experiments, shared via protocols.io. The protocol aims to provide researchers with a benchmark for future replications and insights into research quality, helping to improve the reproducibility of results, accelerate multicenter studies, increase robustness, and enable integration across studies.
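As a concrete illustration of why such tests matter, the following minimal Python simulation is offered in the spirit of the simulations the authors describe; the waveform, sampling rate, noise level, and jitter magnitudes are hypothetical choices for illustration, not values from the paper. It shows how unmeasured trigger-to-display jitter attenuates and smears an averaged evoked response:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1000                         # sampling rate in Hz (assumed)
    t = np.arange(-0.1, 0.5, 1 / fs)  # epoch from -100 ms to 500 ms
    # Idealized evoked response: Gaussian peaking at 100 ms with 10 ms width
    true_erp = np.exp(-((t - 0.10) ** 2) / (2 * 0.01 ** 2))

    def average_erp(jitter_sd_ms, n_trials=200):
        """Average n_trials noisy epochs whose onsets carry Gaussian jitter."""
        trials = []
        for _ in range(n_trials):
            shift = int(round(rng.normal(0, jitter_sd_ms) * fs / 1000))
            trials.append(np.roll(true_erp, shift) + rng.normal(0, 0.5, t.size))
        return np.mean(trials, axis=0)

    for sd in (0, 10, 30):  # milliseconds of undetected onset jitter
        print(f"jitter sd {sd:2d} ms -> apparent peak {average_erp(sd).max():.2f}")

With no jitter the averaged peak sits close to its true amplitude of 1.0; as the jitter grows the peak shrinks and broadens, even though every single trial contains the identical response, which is exactly the kind of distortion that setup tests aim to catch.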
Affiliation(s)
- Alex Lepauvre
- Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute of Empirical Aesthetics, Frankfurt am Main, Germany.
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, 6500 HB, the Netherlands.
- Rony Hirschhorn
- Sagol School of Neuroscience, Tel-Aviv University, Tel Aviv, Israel
- Katarina Bendtz
- Boston Children's Hospital, Harvard Medical School, Boston, USA
- Liad Mudrik
- Sagol School of Neuroscience, Tel-Aviv University, Tel Aviv, Israel
- School of Psychological Sciences, Tel-Aviv University, Tel Aviv, Israel
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada
- Lucia Melloni
- Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute of Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Neurology, NYU Grossman School of Medicine, New York, USA
- Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada
3
Else H. 'Doing good science is hard': retraction of high-profile reproducibility study prompts soul-searching. Nature 2024. PMID: 39402293; DOI: 10.1038/d41586-024-03178-8.
4
Bak-Coleman J, Devezer B. Claims about scientific rigour require rigour. Nat Hum Behav 2024; 8:1890-1891. PMID: 39317793; DOI: 10.1038/s41562-024-01982-w.
Affiliation(s)
- Joseph Bak-Coleman
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany.
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany.
- Berna Devezer
- Department of Business, University of Idaho, Moscow, ID, USA
- Department of Mathematics and Statistical Science, University of Idaho, Moscow, ID, USA
5
Jarvis MF. Decatastrophizing research irreproducibility. Biochem Pharmacol 2024; 228:116090. PMID: 38408680; DOI: 10.1016/j.bcp.2024.116090.
Abstract
The reported inability to replicate research findings from the published literature has precipitated extensive efforts to identify and correct perceived deficiencies in the execution and reporting of biomedical research. Despite these efforts, little progress has been made over the last decade in quantifying the magnitude of irreproducible research, or the effectiveness of the associated remediation initiatives, across diverse biomedical disciplines. The idea that science is self-correcting has been further challenged in recent years by the proliferation of unverified or fraudulent scientific content generated by predatory journals, paper mills, preprint-server postings, and the inappropriate use of artificial intelligence technologies. The degree to which the field of pharmacology has been negatively affected by these evolving pressures is unknown. Regardless of these ambiguities, pharmacology societies and their associated journals have championed best practices to enhance the experimental rigor and reporting of pharmacological research. The value of transparent and independent validation of raw data generation and analysis in basic and clinical research is exemplified by the discovery, development, and approval of Highly Effective Modulator Therapy (HEMT) for patients with cystic fibrosis (CF), which provides a didactic counterpoint to concerns regarding the current state of biomedical research. Key features of this therapeutic advance include the objective construction of basic and translational research hypotheses and their associated experimental designs, and the validation of experimental effect sizes quantitatively aligned to meaningful clinical endpoints with input from the FDA. These practices enhanced scientific rigor and transparency and produced real-world deliverables for patients in need.
Affiliation(s)
- Michael F Jarvis
- Department of Pharmaceutical Sciences, University of Illinois-Chicago, USA.
6
da Costa GG, Neves K, Amaral O. Estimating the replicability of highly cited clinical research (2004-2018). PLoS One 2024; 19:e0307145. PMID: 39110675; PMCID: PMC11305584; DOI: 10.1371/journal.pone.0307145.
Abstract
INTRODUCTION: Previous studies of the replicability of clinical research based on the published literature have suggested that highly cited articles are often contradicted or found to have inflated effects. However, there are no recent updates of such efforts, and the situation may have changed over time.
METHODS: We searched the Web of Science database for articles studying medical interventions with more than 2000 citations, published between 2004 and 2018 in high-impact medical journals. We then searched for replications of these studies in PubMed using the PICO (Population, Intervention, Comparator and Outcome) framework. Replication success was evaluated by the presence of a statistically significant effect in the same direction and by overlap of the replication's effect size confidence interval (CI) with that of the original study. Evidence of effect size inflation and potential predictors of replicability were also analyzed.
RESULTS: We found 89 eligible studies, of which 24 had valid replications (17 meta-analyses and 7 primary studies). Of these, 21 (88%) had effect sizes with overlapping CIs. Of 15 highly cited studies with a statistically significant difference in the primary outcome, 13 (87%) also had a significant effect in the replication. When both criteria were considered together, the replicability rate in our sample was 20 out of 24 (83%). There was no evidence of systematic inflation in these highly cited studies, with a mean effect size ratio of 1.03 [95% CI (0.88, 1.21)] between initial and subsequent effects. Due to the small number of contradicted results, our analysis had low statistical power to detect predictors of replicability.
CONCLUSION: Although most studies did not have eligible replications, the replicability rate of highly cited clinical studies in our sample was higher than in previous estimates, with little evidence of systematic effect size inflation. This estimate is based on a very select sample of studies and may not be generalizable to clinical research in general.
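The two success criteria are straightforward to operationalize. The Python sketch below (with hypothetical field names and example numbers, not data from the study) checks both: a statistically significant replication effect in the original's direction, and overlapping confidence intervals:

    from dataclasses import dataclass

    @dataclass
    class Result:
        effect: float    # point estimate, e.g., a log odds ratio
        ci_low: float    # lower bound of the 95% CI
        ci_high: float   # upper bound of the 95% CI
        p: float         # p-value for the primary outcome

    def significant_same_direction(orig, rep, alpha=0.05):
        """Criterion 1: replication is significant and signed like the original."""
        return rep.p < alpha and orig.effect * rep.effect > 0

    def cis_overlap(orig, rep):
        """Criterion 2: the two confidence intervals overlap."""
        return orig.ci_low <= rep.ci_high and rep.ci_low <= orig.ci_high

    original = Result(effect=0.40, ci_low=0.15, ci_high=0.65, p=0.002)
    replication = Result(effect=0.30, ci_low=0.05, ci_high=0.55, p=0.020)
    replicated = (significant_same_direction(original, replication)
                  and cis_overlap(original, replication))
    print(replicated)  # True for this hypothetical pair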
Affiliation(s)
- Gabriel Gonçalves da Costa
- Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro, Rio de Janeiro, Brazil
- Kleber Neves
- Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro, Rio de Janeiro, Brazil
- Olavo Amaral
- Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro, Rio de Janeiro, Brazil
7
Held L, Pawel S, Micheloud C. The assessment of replicability using the sum of p-values. R Soc Open Sci 2024; 11:240149. PMID: 39205991; PMCID: PMC11349439; DOI: 10.1098/rsos.240149.
Abstract
Statistical significance of both the original and the replication study is a commonly used criterion to assess replication attempts, also known as the two-trials rule in drug development. However, replication studies are sometimes conducted although the original study is non-significant, in which case Type-I error rate control across both studies is no longer guaranteed. We propose an alternative method to assess replicability using the sum of p-values from the two studies. The approach provides a combined p-value and can be calibrated to control the overall Type-I error rate at the same level as the two-trials rule, but allows for replication success even if the original study is non-significant. The unweighted version requires a less restrictive level of significance at replication if the original study is already convincing, which facilitates sample size reductions of up to 10%. Downweighting the original study accounts for possible bias and requires a more stringent significance level and larger sample sizes at replication. Data from four large-scale replication projects are used to illustrate and compare the proposed method with the two-trials rule, meta-analysis and Fisher's combination method.
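For intuition, a minimal sketch under stated assumptions: if the two one-sided p-values are independent and uniform under the null, their sum s = p1 + p2 has CDF F(s) = s^2/2 for s <= 1, so F(s) serves as the combined p-value, and solving s^2/2 = alpha^2 gives the cutoff s = alpha*sqrt(2) that matches the two-trials rule's overall Type-I error of alpha^2. The Python below illustrates the unweighted version only and is not the authors' implementation:

    import math

    def combined_p(p1, p2):
        """CDF of the sum of two independent Uniform(0,1) p-values."""
        s = p1 + p2
        return s ** 2 / 2 if s <= 1 else 1 - (2 - s) ** 2 / 2

    def calibrated_cutoff(alpha=0.025):
        """Sum cutoff with the same overall Type-I error (alpha^2) as the
        two-trials rule: solve s^2/2 = alpha^2 for s."""
        return alpha * math.sqrt(2)

    alpha = 0.025                            # conventional one-sided level
    print(f"reject if p1 + p2 <= {calibrated_cutoff(alpha):.4f}")  # ~0.0354
    # Replication success despite a non-significant original (p1 > alpha):
    print(combined_p(0.030, 0.004) <= alpha ** 2)  # True

Note how the cutoff of about 0.035 lets a convincing replication rescue an original p-value slightly above 0.025, which is exactly the flexibility the two-trials rule lacks.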
Affiliation(s)
- Leonhard Held
- Epidemiology, Biostatistics and Prevention Institute (EBPI) and Center for Reproducible Science (CRS), University of Zurich, Hirschengraben 84, 8001 Zurich, Switzerland
- Samuel Pawel
- Epidemiology, Biostatistics and Prevention Institute (EBPI) and Center for Reproducible Science (CRS), University of Zurich, Hirschengraben 84, 8001 Zurich, Switzerland
- Charlotte Micheloud
- Epidemiology, Biostatistics and Prevention Institute (EBPI) and Center for Reproducible Science (CRS), University of Zurich, Hirschengraben 84, 8001 Zurich, Switzerland
8
DeKay ML, Dou S. Risky-Choice Framing Effects Result Partly From Mismatched Option Descriptions in Gains and Losses. Psychol Sci 2024; 35:918-932. PMID: 38889328; DOI: 10.1177/09567976241249183.
Abstract
Textbook psychology holds that people usually prefer a certain option over a risky one when options are framed as gains but prefer the opposite when options are framed as losses. However, this pattern can be amplified, eliminated, or reversed depending on whether option descriptions include only positive information (e.g., "200 people will be saved"), only negative information (e.g., "400 people will not be saved"), or both. Previous studies suggest that framing effects arise only when option descriptions are mismatched across frames. Using online and student samples (Ns = 906 and 521), we investigated 81 framing-effect variants created from matched and mismatched pairs of 18 option descriptions (nine in each frame). Description valence or gist explained substantial variation in risk preferences (prospect theory does not predict such variation), but a considerable framing effect remained in our balanced design. Risky-choice framing effects appear to be partly, but not completely, the result of mismatched comparisons.
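For readers unfamiliar with the paradigm, the sketch below reconstructs the arithmetic of the classic 600-lives disease problem from which these example descriptions come; the wordings and equivalence checks are illustrative textbook assumptions, not the paper's 18 stimuli:

    # Illustrative reconstruction of the classic disease problem: 600 lives at stake.
    N = 600

    # Certain options: the same outcome under positive or negative descriptions.
    gain_certain = "200 people will be saved"        # positive information only
    loss_certain = "400 people will not be saved"    # negative information only
    assert 200 == N - 400  # both sentences describe the identical outcome

    # Risky option: one-third chance all are saved, two-thirds chance none are.
    ev_risky = (1 / 3) * N + (2 / 3) * 0
    assert ev_risky == 200  # equal in expectation to the certain option

    # A "matched" description carries both valences in both frames, e.g.
    matched = "200 people will be saved and 400 people will not be saved"
    # The classic comparison pairs gain_certain with loss_certain, which
    # confounds the gain/loss frame with description valence - the mismatch
    # the authors manipulate.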
Affiliation(s)
- Shiyu Dou
- Department of Psychology, The Ohio State University
- Nationwide Mutual Insurance Company