1. Romantsik O, Bank M, Menon JML, Malhotra A, Bruschettini M. Value of preclinical systematic reviews and meta-analyses in pediatric research. Pediatr Res 2024;96:643-653. PMID: 38615075; PMCID: PMC11499280; DOI: 10.1038/s41390-024-03197-1.
Abstract
Similar to systematic reviews (SRs) in clinical fields, preclinical SRs address a specific research area, furnishing information on current knowledge, possible gaps, and potential methodological flaws in study design, conduct, and reporting. One of the main goals of preclinical SRs is to identify promising treatment strategies and to evaluate whether the currently available data are solid enough to translate to clinical trials, or to highlight gaps that justify new studies. It is imperative to rigorously follow the widely available methodological standards, including registration of the protocol and adherence to guidelines for assessing the risk of bias, study quality, and certainty of evidence. Special consideration should be given to pediatric SRs, clinical and preclinical, because of the unique characteristics of this age group; these include the rationale for the intervention and the comparison of primary and secondary outcomes. The outcomes measured should acknowledge age-related physiological changes and the maturational processes of different organ systems, and it is crucial to choose the age of the animals appropriately and to consider how it corresponds to specific pediatric age groups. The findings of well-conducted SRs of preclinical studies have the potential to provide a reliable evidence synthesis to guide the design of future preclinical and clinical studies.

IMPACT: This narrative review highlights the importance of rigorous design, conduct, and reporting of preclinical primary studies and systematic reviews. Special consideration should be given to pediatric systematic reviews of preclinical studies because of the unique characteristics of this age group.
Affiliation(s)
- Olga Romantsik: Department of Clinical Sciences Lund, Division of Pediatrics, Lund University, Skåne University Hospital, Lund, 21185, Sweden
- Matthias Bank: Library and ICT, Faculty of Medicine, Lund University, Lund, Sweden
- Julia M L Menon: Preclinicaltrials.eu, Netherlands Heart Institute, Utrecht, The Netherlands
- Atul Malhotra: Department of Pediatrics, Monash University, Melbourne, Australia; Monash Newborn, Monash Children's Hospital, Melbourne, Australia; The Ritchie Centre, Hudson Institute of Medical Research, Melbourne, Australia
- Matteo Bruschettini: Department of Clinical Sciences Lund, Division of Pediatrics, Lund University, Skåne University Hospital, Lund, 21185, Sweden
2. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Br J Pharmacol 2024;181:180-210. PMID: 37282770; DOI: 10.1111/bph.16100.
Abstract
Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although these issues are extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of them and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses and those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize that their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.
Affiliation(s)
- Kat Kolaski: Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Lynne Romeiser Logan: Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA
- John P A Ioannidis: Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA, USA
3. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Acta Anaesthesiol Scand 2023;67:1148-1177. PMID: 37288997; DOI: 10.1111/aas.14295. (Co-publication of the article in entry 2; abstract and author affiliations identical.)
4. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. (Co-publication of the article in entry 2, journal details not given in the record; abstract and author affiliations identical.)
5. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. BMC Infect Dis 2023;23:383. PMID: 37286949; DOI: 10.1186/s12879-023-08304-x. (Co-publication of the article in entry 2; abstract and author affiliations identical.)
6. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Syst Rev 2023;12:96. PMID: 37291658; DOI: 10.1186/s13643-023-02255-9. (Co-publication of the article in entry 2; abstract and author affiliations identical.)
7. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. JBJS Rev 2023;11(6):e23.00077. PMID: 37285444; DOI: 10.2106/jbjs.rvw.23.00077. (Co-publication of the article in entry 2; abstract and author affiliations identical.)
8. Kolaski K, Romeiser Logan L, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. J Pediatr Rehabil Med 2023;16:241-273. PMID: 37302044; DOI: 10.3233/prm-230019. (Co-publication of the article in entry 2; abstract and author affiliations identical.)
9. Kolaski K, Romeiser Logan L, Goss KD, Butler C. Quality appraisal of systematic reviews of interventions for children with cerebral palsy reveals critically low confidence. Dev Med Child Neurol 2021;63:1316-1326. PMID: 34091900; DOI: 10.1111/dmcn.14949.
Abstract
AIM: To evaluate the methodological quality of recent systematic reviews of interventions for children with cerebral palsy (CP) in order to determine the level of confidence in the reviews' conclusions.

METHOD: A comprehensive search of 22 databases identified eligible systematic reviews, with and without meta-analysis, published worldwide from 2015 to 2019. We independently extracted data and used A Measurement Tool to Assess Systematic Reviews-2 (AMSTAR-2) to appraise methodological quality.

RESULTS: Eighty-three systematic reviews met strict eligibility criteria. Most were from Europe and Latin America and reported on rehabilitative interventions. AMSTAR-2 appraisal found critically low confidence in 88% (n=73) because of multiple and varied deficiencies. Only 7% (n=6) had no AMSTAR-2 critical domain deficiency. The number of systematic reviews increased fivefold from 2015 to 2019; however, quality did not improve over time.

INTERPRETATION: Most of these systematic reviews are considered unreliable according to AMSTAR-2. Current recommendations for treating children with CP based on these flawed systematic reviews need re-evaluation. Findings are comparable to reports from other areas of medicine, despite the general perception that systematic reviews are high-level evidence. The required use of current, widely accepted guidance for conducting and reporting systematic reviews by authors, peer reviewers, and editors is critical to ensure reliable, unbiased, and transparent systematic reviews.

What this paper adds:
- Confidence was critically low in the conclusions of 88% of systematic reviews about interventions for children with CP.
- Quality issues in the sample were not limited to systematic reviews of non-randomized trials, or to those about certain populations of CP or interventions.
- The inclusion of meta-analysis did not improve the level of confidence in these systematic reviews.
- The number of systematic reviews on this topic increased over the 5 search years, but their methodological quality did not improve.
Affiliation(s)
- Kat Kolaski: Department of Orthopedics and Department of Pediatrics, Wake Forest University, Winston-Salem, NC, USA
- Lynne Romeiser Logan: Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA
- Katherine D Goss: Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA
10. Romeiser Logan L. Can a systematic review of case reports be useful? Quality matters. Dev Med Child Neurol 2021;63:1135. PMID: 34101830; DOI: 10.1111/dmcn.14952.
Affiliation(s)
- Lynne Romeiser Logan: Physical Medicine and Rehabilitation, Upstate Medical University, Syracuse, NY, USA
11. A new tool to assess Clinical Diversity In Meta-analyses (CDIM) of interventions. J Clin Epidemiol 2021;135:29-41. DOI: 10.1016/j.jclinepi.2021.01.023.
12. Apparently conclusive meta-analyses on interventions in critical care may be inconclusive: a meta-epidemiological study. J Clin Epidemiol 2019;114:1-10. PMID: 31200004; DOI: 10.1016/j.jclinepi.2019.05.011.
Abstract
OBJECTIVES: Risks of random type I and II errors are associated with false positive and false negative findings. In conventional meta-analyses, the risks of random errors are insufficiently evaluated; many meta-analyses that appear conclusive might, in fact, be inconclusive because of risks of random errors. We hypothesize that, for interventions in critical care, false positive and false negative findings frequently become apparent when accounting for the risks of random error. We aim to investigate to what extent apparently conclusive conventional meta-analyses remain conclusive when statistical significance levels and confidence intervals are adjusted for sparse data and repeated testing through Trial Sequential Analysis (TSA).

METHODS: We searched The Cochrane Library, MEDLINE, and EMBASE for reviews of interventions in critical care. We used TSA with the relative risk reduction from the estimated meta-analyzed intervention effects, adjusted for heterogeneity based on the observed diversity. We report the proportions of meta-analyses with potentially inconclusive findings among positive, neutral, and negative conclusions based on conventional naïve meta-analyses, which use an alpha of 5% and 95% confidence intervals. In TSA-controlled meta-analyses showing a beneficial or harmful intervention effect, we assessed the risk of bias across six Cochrane domains.

RESULTS: A total of 464 reviews containing 1,080 meta-analyses of (co-)primary outcomes were analyzed. Of the 266 conventional meta-analyses suggesting a beneficial effect, 133 (50%) were true positive and 133 (50%) were potentially false positive according to TSA. Of the 755 conventional meta-analyses suggesting a neutral effect, 214 (28%) were true neutral and 541 (72%) were potentially false neutral according to TSA. Of the 59 conventional meta-analyses suggesting a harmful effect, 17 (29%) were true negative and 42 (71%) were potentially false negative according to TSA. When the true beneficial and true harmful meta-analyses according to TSA were evaluated for risk of bias, new TSAs conducted only on trials with overall low risk of bias showed firm evidence of a beneficial effect on only one outcome and of a harmful effect on only one outcome.

CONCLUSIONS: A large proportion of all meta-analyses in critical care may reach false conclusions because of unaccounted risks of random type I or type II errors. Future critical care meta-analyses should aim to establish intervention effects while accounting for risks of bias and random errors.
13. Butler E, Granholm A, Aneman A. Trustworthy systematic reviews: can journals do more? Acta Anaesthesiol Scand 2019;63:558-559. PMID: 30676644; DOI: 10.1111/aas.13330.
Affiliation(s)
- Ethan Butler: University of New South Wales, Sydney, New South Wales, Australia
- Anders Granholm: Department of Intensive Care 4131, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark
- Anders Aneman: Intensive Care Unit, Liverpool Hospital, Liverpool, New South Wales, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
14. Innovation and safety in critical care: should we collaborate with the industry? Con. Intensive Care Med 2018;44:2279-2281. PMID: 30406807; DOI: 10.1007/s00134-018-5295-5.
15. Afshari A, Møller MH. Broken science and the failure of academics: resignation or reaction? Acta Anaesthesiol Scand 2018;62:1038-1040. PMID: 29943406; DOI: 10.1111/aas.13167.
Affiliation(s)
- A Afshari: Department of Anesthesia, Mother and Children's Centre, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
- M H Møller: Department of Intensive Care 4131, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark