1. Amen J, Kamel SA, El-Sobky TA. Low prevalence of spin in conclusions of interventional pediatric orthopedic studies. Journal of Musculoskeletal Surgery and Research 2024;8:326-334. DOI: 10.25259/jmsr_249_2024.
Abstract
Interpretation bias as a factor of research reporting quality has not been thoroughly investigated in the conclusions of pediatric orthopedic publications. Our objective was to investigate the prevalence, subtypes, and severity of reporting bias, or spin, in the conclusions of the full texts and abstracts of published studies investigating the effects of treatment/intervention in the pediatric orthopedic literature. We systematically searched ten high-ranking orthopedic journals on MEDLINE/PubMed. Inclusion criteria were pediatric orthopedic studies investigating the effects of treatment/intervention. We used descriptive statistics to report the prevalence, subtype, and severity of reporting bias in the studies' conclusions according to validated classification criteria. We checked the results to ensure that data were neither misreported nor misinterpreted/extrapolated in the conclusions of the full texts and their abstracts. Of the 93 studies included in the final analysis, 17 (18%) had at least one count of bias. Nine (10%) studies had bias in both the full-text and the corresponding abstract conclusions. In four (4%) studies, bias was restricted to the abstract conclusions only, and in another four (4%), bias was restricted to the full-text conclusions only. Overall, we analyzed 2511 spin/bias items across the 93 studies and recorded 30 (1%) counts of bias in the conclusions of full texts and/or abstracts. The intervention was surgical in 71% of studies. Interventional pediatric orthopedic studies published in high-ranking journals showed a low prevalence of reporting bias, namely misleading reporting, misinterpretation, and inadequate extrapolation of conclusions. A comparative analysis with lower-ranking journals as a control group may reveal whether our favorable results are an attribute of journal rank/quality. In general, editorial policies should emphasize skilled interpretation and extrapolation of research results.
Affiliation(s)
- John Amen, Sherif Ahmed Kamel, Tamer A. El-Sobky: Department of Orthopedic Surgery, Faculty of Medicine, Ain Shams University, Cairo, Egypt
2. Kolaski K, Clarke M, Rathnayake D, Romeiser Logan L. Analysis of risk of bias assessments in a sample of intervention systematic reviews, part I: many aspects of conduct and reporting need improvement. J Clin Epidemiol 2024;174:111480. PMID: 39047919. DOI: 10.1016/j.jclinepi.2024.111480.
Abstract
OBJECTIVES: Current standards for systematic reviews (SRs) require adequate conduct and complete reporting of risk of bias (RoB) assessments of the individual studies included in the review. We investigated the conduct and reporting of RoB assessments in a sample of SRs of interventions for persons with cerebral palsy (CP).
STUDY DESIGN AND SETTING: We included SRs published from 2014 to 2021. Authors worked in pairs to independently extract data on the characteristics of the SRs and to rate their conduct and reporting. The conduct of RoB assessment was appraised with the three AMSTAR-2 items related to RoB assessment. Reporting completeness was evaluated using the two items related to RoB assessment within studies in the PRISMA 2020 guidelines. We used descriptive statistics to report the consensus data, in accordance with our protocol.
RESULTS: We included 145 SRs. Among the 128 (88.3%) SRs that assessed RoB, the standards for AMSTAR-2 item 9 (use of an adequate RoB tool) were partially or fully satisfied in 73 (57.0%). Across the 128 SRs that assessed RoB, 46 (35.9%) accounted for RoB in interpreting the SR's findings and, of the 49 that included a meta-analysis, 11 (22.4%) discussed the impact of RoB on its results. Of the 128 SRs, 123 (96.1%) named the RoB tool used for at least one of the included study designs, 96 (75.0%) specified the RoB items assessed, 89 (69.5%) reported the findings for each item, 81 (63.2%) fully reported the processes for RoB assessment, 68 (53.1%) reported how an overall RoB judgment was reached, and 74 (57.8%) reported an overall RoB assessment for every study.
CONCLUSION: The selection and application of RoB tools in this sample of SRs about interventions for CP are comparable to those reported in other recent studies. However, most SRs in this sample did not fully meet the appraisal standards of AMSTAR-2 regarding the adequacy of the RoB tool applied and other aspects of RoB assessment conduct; Cochrane SRs were a notable exception. Overall, reporting of RoB assessments was somewhat better than conduct, perhaps reflecting the more widespread uptake of the PRISMA guidelines. Our findings may be generalizable to some extent, considering the extensive literature reporting widespread inadequacies in health care-related intervention SRs and reports from other specialties documenting similar RoB assessment deficiencies. As such, this study should remind authors, peer reviewers, and journal editors to follow the RoB assessment reporting guidelines of PRISMA 2020 and to understand the corresponding critical appraisal standards of AMSTAR-2. We recommend a shift of focus from documenting inadequate RoB assessments and well-known deficiencies in other components of SRs toward implementing changes to address these problems, along with plans to evaluate their effectiveness.
Affiliation(s)
- Kat Kolaski: Departments of Orthopaedic Surgery and Rehabilitation, Neurology, Pediatrics, and Epidemiology and Prevention, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Mike Clarke: Director of Northern Ireland Methodology Hub, School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, UK
- Dimuthu Rathnayake: School of Nursing and Midwifery, University College Dublin, Dublin, Ireland
- Lynne Romeiser Logan: Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA
3. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Br J Pharmacol 2024;181:180-210. PMID: 37282770. DOI: 10.1111/bph.16100.
Abstract
Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years, based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although these issues are extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of them and may automatically accept evidence syntheses (and the clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess the reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining the overall certainty of a body of evidence. Another important distinction is made between the tools authors use to develop their syntheses and those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses; the latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these resources is encouraged, but we caution against their superficial application and emphasize that their endorsement does not substitute for in-depth methodological training. By highlighting best practices along with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.
Affiliation(s)
- Kat Kolaski: Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Lynne Romeiser Logan: Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, New York, USA
- John P. A. Ioannidis: Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, California, USA
4. Sewell KA, Schellinger J, Bloss JE. Effect of PRISMA 2009 on reporting quality in systematic reviews and meta-analyses in high-impact dental medicine journals between 1993-2018. PLoS One 2023;18:e0295864. PMID: 38096136. PMCID: PMC10721095. DOI: 10.1371/journal.pone.0295864.
Abstract
INTRODUCTION: The PRISMA guidelines were published in 2009 to address inadequate reporting of key methodological details in systematic reviews and meta-analyses (SRs/MAs). This study sought to assess the impact of PRISMA on the quality of reporting in the full text of dental medicine journals.
METHODS: This study assessed the impact of PRISMA (2009) on the reporting of thirteen methodological details in SRs/MAs published in the highest-impact dental medicine journals between 1993-2009 (n = 211) and 2012-2018 (n = 618). The study further examined the rate of described use of PRISMA in the abstract or full text of included studies published post-PRISMA and the impact of described use of PRISMA on the level of reporting. It also examined the potential effects of the inclusion of PRISMA in journals' Instructions for Authors, along with study team characteristics.
RESULTS: The number of items reported in SRs/MAs increased following the publication of PRISMA (pre-PRISMA: M = 7.83, SD = 3.267; post-PRISMA: M = 10.55, SD = 1.4). Post-PRISMA, authors rarely mention PRISMA in abstracts (8.9%) but describe the use of PRISMA in the full text of 59.87% of SRs/MAs. The described use of PRISMA within the full text indicates that its intent (guidance for reporting) is not well understood, with over a third of SRs/MAs (35.6%) describing PRISMA as guiding the conduct of the review. However, any described use of PRISMA was associated with improved reporting. Among the author team characteristics examined, only author team size had a positive relationship with improved reporting.
CONCLUSION: Following the 2009 publication of PRISMA, the level of reporting of key methodological details improved for systematic reviews/meta-analyses published in the highest-impact dental medicine journals. The positive relationship between reference to PRISMA in the full text and the level of reporting provides further evidence of the impact of PRISMA on improving transparent reporting in dental medicine SRs/MAs.
Affiliation(s)
- Kerry A. Sewell: William E. Laupus Health Sciences Library, East Carolina University, Greenville, North Carolina, USA
- Jana Schellinger: Center for Evidence-Based Policy, Oregon Health & Science University, Portland, Oregon, USA
- Jamie E. Bloss: William E. Laupus Health Sciences Library, East Carolina University, Greenville, North Carolina, USA
5. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Acta Anaesthesiol Scand 2023;67:1148-1177. PMID: 37288997. DOI: 10.1111/aas.14295.
7. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. BMC Infect Dis 2023;23:383. PMID: 37286949. DOI: 10.1186/s12879-023-08304-x.
8. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. Syst Rev 2023;12:96. PMID: 37291658. DOI: 10.1186/s13643-023-02255-9.
9. Kolaski K, Logan LR, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. JBJS Rev 2023;11:01874474-202306000-00009. PMID: 37285444. DOI: 10.2106/jbjs.rvw.23.00077.
10. Robert C, Wilson CS. Thirty-year survey of bibliometrics used in the research literature of pain: analysis, evolution, and pitfalls. Frontiers in Pain Research 2023;4:1071453. PMID: 36937565. PMCID: PMC10017016. DOI: 10.3389/fpain.2023.1071453.
Abstract
During the last decades, the emergence of bibliometrics and progress in pain research have led to a proliferation of bibliometric studies of the medical and scientific literature of pain (B/P). This study charts the evolution of the B/P literature published during the last 30 years. Using various search techniques, 189 B/P studies published from 1993 to August 2022 were collected for analysis; half were published since 2018. Most of the selected B/P publications use classic bibliometric analysis of pain in toto, while some focus on specific types of pain, with headache/migraine, low back pain, chronic pain, and cancer pain dominating. Each study is characterized by its origin (geographical, economic, institutional, etc.) and the medical/scientific context over a specified time span to provide a detailed landscape of the pain research literature. Some B/P studies have been developed to pinpoint difficulties in appropriately identifying the pain literature or to highlight general publishing pitfalls. Having observed that most of the recent B/P studies have integrated newly emergent software visualization tools (SVTs), we found an increase in anomalies and suggest that readers exercise caution when interpreting results in the B/P literature.
Collapse
Affiliation(s)
| | - Concepción Shimizu Wilson
- School of Information Systems, Technology and Management, University of New South Wales, UNSW Sydney, Sydney, NSW, Australia
11
Kolaski K, Romeiser Logan L, Ioannidis JPA. Guidance to best tools and practices for systematic reviews. J Pediatr Rehabil Med 2023; 16:241-273. [PMID: 37302044] [DOI: 10.3233/prm-230019]
Abstract
Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although these issues are extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of them and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining the overall certainty of a body of evidence. Another important distinction is made between the tools authors use to develop their syntheses and those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence.
We organize best-practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these resources is encouraged, but we caution against their superficial application and emphasize that their endorsement does not substitute for in-depth methodological training. By highlighting best practices along with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.
Affiliation(s)
- Kat Kolaski
- Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Lynne Romeiser Logan
- Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY, USA
- John P A Ioannidis
- Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA, USA
12
Helliwell JA, Thompson J, Smart N, Jayne DG, Chapman SJ. Duplication and nonregistration of COVID-19 systematic reviews: Bibliometric review. Health Sci Rep 2022; 5:e541. [PMID: 35509384] [PMCID: PMC9059200] [DOI: 10.1002/hsr2.541]
Abstract
Objectives This study examines the conduct of systematic reviews during the early stages of the COVID‐19 pandemic, including compliance to protocol registration and duplication of reviews on similar topics. The methodological and reporting quality were also explored. Methods A cross‐sectional, bibliometric study was undertaken of all systematic review manuscripts on a COVID‐19 intervention published between January 1st and June 30th, 2020. Protocol registration on a publicly accessible database was recorded. Duplication was determined by systematically recording the number of reviews published on each topic of analysis. Methodological quality and reporting quality were assessed using the AMSTAR‐2 and PRISMA 2009 instruments, respectively. Results Thirty‐one eligible systematic reviews were identified during the inclusion period. The protocol of only four (12.9%) studies was registered on a publicly accessible database. Duplication was frequent, with 15 (48.4%) of the 31 included studies focusing on either hydroxychloroquine (and/or chloroquine) or corticosteroids. Only one study (3.2%) was of “high” methodological quality, four (12.9%) were “low” quality, and the remainder (n = 26, 83.9%) were of “critically low” quality. The median completeness of reporting was 20 out of 27 items (74.1%) with a range of 5–26 (interquartile range: 14–23). Conclusion Systematic reviews during the early stages of the COVID‐19 pandemic were uncommonly registered, frequently duplicated, and mostly of low methodological quality. In contrast, the reporting quality of manuscripts was generally good but varied substantially across published reports. There is a need for heightened stewardship of systematic review research, particularly during times of medical crisis where the generation of primary evidence may be rapid and unstable.
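The completeness-of-reporting scores above (e.g., a median of 20 of 27 PRISMA items, 74.1%) are simple proportions over a checklist. A minimal sketch of that scoring, where the `completeness` helper and the item vector are hypothetical illustrations, not data or code from the study:

```python
# Hypothetical sketch: scoring completeness of reporting against a
# 27-item PRISMA-style checklist. Each entry is 1 if the item was
# reported in the manuscript, 0 if not.

def completeness(reported_items, total_items=27):
    """Return (count of items reported, percentage of the checklist)."""
    count = sum(reported_items)
    return count, round(100 * count / total_items, 1)

# A review reporting 20 of the 27 items, as in the study's median:
example = [1] * 20 + [0] * 7
count, pct = completeness(example)
print(count, pct)  # 20 74.1
```

Repeating this per review and taking the median and interquartile range over the resulting counts yields summaries like those reported above.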
Affiliation(s)
- Jack A. Helliwell
- Leeds Institute of Medical Research at St. James's, University of Leeds, Leeds, UK
- Joe Thompson
- Leeds Institute of Medical Research at St. James's, University of Leeds, Leeds, UK
- Neil Smart
- Department of General Surgery, Royal Devon and Exeter Hospital, Exeter, UK
- David G. Jayne
- Leeds Institute of Medical Research at St. James's, University of Leeds, Leeds, UK
- Stephen J. Chapman
- Leeds Institute of Medical Research at St. James's, University of Leeds, Leeds, UK
13
de Kock S, Stirk L, Ross J, Duffy S, Noake C, Misso K. Systematic review search methods evaluated using the Preferred Reporting of Items for Systematic Reviews and Meta-Analyses and the Risk Of Bias In Systematic reviews tool. Int J Technol Assess Health Care 2020; 37:e18. [PMID: 33280626] [DOI: 10.1017/s0266462320002135]
Abstract
OBJECTIVES To evaluate the methodological and reporting characteristics of the search methods of systematic reviews (SRs) using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist and the Risk Of Bias In Systematic reviews (ROBIS) tool. METHODS A sample of 505 SRs published in 2016 was taken from KSR Evidence, a database of SRs, and analyzed to assess compliance with the Information sources and Search items of the PRISMA checklist. Domain 2 (D2) (Identification and Selection of Studies) of the ROBIS tool was used to judge the risk of bias in search methods. RESULTS Regarding the Information sources and Search items of PRISMA, twenty percent of SRs that claimed to be PRISMA-compliant in their methods were compliant; twenty-four percent of SRs published in journals that require PRISMA reporting were compliant; nineteen percent in total were found to be compliant. Twenty-eight percent of SRs were judged to be at low risk of bias in D2, meaning they searched widely with an effective strategy. Finally, ten percent were both compliant with the reporting of the Information sources and Search items of PRISMA and judged to be at low risk of bias in D2. CONCLUSIONS Ninety percent of SRs fail to report search methods adequately and to conduct comprehensive searches using a wide range of resources. Journal editors and peer reviewers need to ensure that they understand the requirements of PRISMA and that compliance is enforced. Additionally, the comprehensiveness of search methods for SRs needs to be given more critical consideration.
Affiliation(s)
- Lisa Stirk
- Kleijnen Systematic Reviews Ltd, York, UK
- Caro Noake
- Kleijnen Systematic Reviews Ltd, York, UK
- Kate Misso
- Kleijnen Systematic Reviews Ltd, York, UK
14
Mbuagbaw L, Lawson DO, Puljak L, Allison DB, Thabane L. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 2020; 20:226. [PMID: 32894052] [PMCID: PMC7487909] [DOI: 10.1186/s12874-020-01107-7]
Abstract
BACKGROUND Methodological studies - studies that evaluate the design, analysis, or reporting of other research-related reports - play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology and ultimately reducing research waste. MAIN BODY We provide an overview of some key aspects of methodological studies, such as what they are and when, how, and why they are done. We adopt a "frequently asked questions" format to facilitate reading and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: Is it necessary to publish a study protocol? How should relevant research reports and databases be selected for a methodological study? What approaches to data extraction and statistical analysis should be considered? What are potential threats to validity, and is there a way to appraise the quality of methodological studies? CONCLUSION Appropriate reflection on and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.
Affiliation(s)
- Lawrence Mbuagbaw
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph's Healthcare-Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada
- Centre for the Development of Best Practices in Health, Yaoundé, Cameroon
- Daeria O Lawson
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Livia Puljak
- Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia
- David B Allison
- Department of Epidemiology and Biostatistics, School of Public Health - Bloomington, Indiana University, Bloomington, IN, 47405, USA
- Lehana Thabane
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
- Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph's Healthcare-Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada
- Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada
- Centre for Evaluation of Medicine, St. Joseph's Healthcare-Hamilton, Hamilton, ON, Canada
- Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada
15
Statistical analyses and quality of individual participant data network meta-analyses were suboptimal: a cross-sectional study. BMC Med 2020; 18:120. [PMID: 32475340] [PMCID: PMC7262764] [DOI: 10.1186/s12916-020-01591-0]
Abstract
BACKGROUND Network meta-analyses using individual participant data (IPD-NMAs) have been increasingly used to compare the effects of multiple interventions. Although there have been many studies on statistical methods for IPD-NMAs, it is unclear whether there are statistical defects in published IPD-NMAs and whether the reporting of statistical analyses has improved. This study aimed to investigate the statistical methods used and to assess the reporting and methodological quality of IPD-NMAs. METHODS We searched four bibliographic databases to identify published IPD-NMAs. Methodological quality was assessed using AMSTAR-2, and reporting quality was assessed against PRISMA-IPD and PRISMA-NMA. We performed stratified analyses and correlation analyses to explore factors that might affect quality. RESULTS We identified 21 IPD-NMAs. Only 23.8% of the included IPD-NMAs reported the statistical techniques used for missing participant data, 42.9% assessed consistency, and none assessed transitivity. None of the included IPD-NMAs reported sources of funding for the included trials, only 9.5% stated pre-registration of protocols, and 28.6% assessed the risk of bias in individual studies. For reporting quality, compliance rates were below 50.0% for more than half of the items. Less than 15.0% of the IPD-NMAs reported data integrity, presented the network geometry, or clarified the risk of bias across studies. IPD-NMAs with statistical or epidemiological authors more often assessed inconsistency (P = 0.017). IPD-NMAs with an a priori protocol were associated with higher reporting quality for the search (P = 0.046), data collection process (P = 0.031), and syntheses of results (P = 0.006). CONCLUSIONS The reporting of statistical methods and the compliance rates for methodological and reporting items of IPD-NMAs were suboptimal. Authors of future IPD-NMAs should address the identified flaws and strictly adhere to methodological and reporting guidelines.
16
Is Percutaneous Adhesiolysis Effective in Managing Chronic Low Back and Lower Extremity Pain in Post-surgery Syndrome: a Systematic Review and Meta-analysis. Curr Pain Headache Rep 2020; 24:30. [PMID: 32468418] [DOI: 10.1007/s11916-020-00862-y]
Abstract
PURPOSE OF REVIEW The growing prevalence of spinal pain in the USA continues to produce substantial economic impact and strain on health-related quality of life. Percutaneous adhesiolysis is utilized for recalcitrant, resistant conditions involving spinal pain when epidural injections have failed to provide adequate improvement, especially for low back and lower extremity pain in post-lumbar surgery syndrome. Despite multiple publications and systematic reviews, debate continues regarding the effectiveness, safety, appropriate utilization, and medical necessity of percutaneous adhesiolysis in chronic pain. This systematic review was therefore undertaken to evaluate and update the evidence for the effectiveness of percutaneous adhesiolysis in treating chronic refractory low back and lower extremity pain in post-surgical patients of the lumbar spine. RECENT FINDINGS From 2009 to 2016, utilization of percutaneous adhesiolysis declined 53.2%, an annual decline of 10.3%, per 100,000 fee-for-service (FFS) Medicare population. Multiple insurers, including Medicare, with Medicare area contractors Noridian and Palmetto, have issued noncoverage policies for percutaneous adhesiolysis, resulting in these steep declines and continued noncoverage by Medicare Advantage plans, Medicaid managed care plans, and other insurers. Since 2005, 4 systematic reviews of percutaneous adhesiolysis have been published; 3 of them used proper methodology and reported appropriate results supporting the effectiveness of adhesiolysis, whereas one poorly performed systematic review showed negative results. In addition, only 4 randomized controlled trials (RCTs) were available for inclusion in the previous systematic reviews of post-surgery syndrome, whereas the number of RCTs and other studies has since increased. This systematic review shows level I, or strong, evidence for the effectiveness of percutaneous adhesiolysis in managing chronic low back and lower extremity pain related to post-lumbar surgery syndrome.
17
Marušić MF, Fidahić M, Cepeha CM, Farcaș LG, Tseke A, Puljak L. Methodological tools and sensitivity analysis for assessing quality or risk of bias used in systematic reviews published in the high-impact anesthesiology journals. BMC Med Res Methodol 2020; 20:121. [PMID: 32423382] [PMCID: PMC7236513] [DOI: 10.1186/s12874-020-00966-4]
Abstract
Background A crucial element of systematic review (SR) methodology is the appraisal of included primary studies, using tools for assessment of methodological quality or risk of bias (RoB). SR authors can conduct sensitivity analyses to explore whether their results are sensitive to the exclusion of low-quality or high-RoB studies. However, it is unknown which tools SR authors use for assessing quality/RoB, and how they set quality/RoB thresholds in sensitivity analyses. The aim of this study was to assess the quality/RoB assessment tools, the types of sensitivity analyses, and the quality/RoB thresholds for sensitivity analyses used within SRs published in high-impact pain/anesthesiology journals. Methods This was a methodological study. We analyzed SRs published from January 2005 to June 2018 in the 25% highest-ranking journals within the Journal Citation Reports (JCR) "Anesthesiology" category. We retrieved the SRs from PubMed. Two authors independently screened records and full texts and extracted data on the quality/RoB tools, the types of sensitivity analyses, and the quality/RoB thresholds used in them. Results Of the 678 analyzed SRs, 513 (76%) reported the use of quality/RoB assessments. The tools most commonly reported for assessing quality/RoB were the Cochrane tool for risk of bias assessment (N = 251; 37%) and the Jadad scale (N = 99; 15%). Meta-analysis was conducted in 451 (66%) of the SRs and sensitivity analysis in 219/451 (49%). Most commonly, sensitivity analysis was conducted to explore the influence of study quality/RoB on the results (90/219; 41%). The quality/RoB thresholds used for such sensitivity analyses were clearly reported in 47 (52%) of the articles that used them, and the thresholds were highly heterogeneous and inconsistent, even when the same tool was used.
Conclusions A quarter of SRs did not report using quality/RoB assessments, and some of those that did cited tools that are not meant for assessing quality/RoB. Authors who use quality/RoB to explore the robustness of their results in meta-analyses use highly heterogeneous quality/RoB thresholds in sensitivity analyses. Better methodological consistency for quality/RoB sensitivity analyses is needed.
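The quality/RoB sensitivity analyses surveyed above typically re-pool a meta-analysis after excluding studies beyond some RoB threshold. A minimal sketch of that idea using fixed-effect inverse-variance pooling of log effect estimates; the study estimates, standard errors, and RoB labels below are invented for illustration and do not come from the surveyed SRs:

```python
import math

# Sketch of a RoB-threshold sensitivity analysis: pool all studies,
# then re-pool after excluding those judged at high risk of bias.

def pool(studies):
    """Fixed-effect inverse-variance pooled estimate and 95% CI."""
    weights = [1 / s["se"] ** 2 for s in studies]
    est = sum(w * s["yi"] for w, s in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, (est - 1.96 * se, est + 1.96 * se)

studies = [
    {"yi": -0.40, "se": 0.15, "rob": "low"},
    {"yi": -0.25, "se": 0.20, "rob": "low"},
    {"yi": -0.90, "se": 0.30, "rob": "high"},  # possibly inflated effect
]

full, _ = pool(studies)
restricted, _ = pool([s for s in studies if s["rob"] != "high"])
print(round(full, 3), round(restricted, 3))  # -0.422 -0.346
```

Comparing the two pooled estimates shows whether the conclusion is robust to the exclusion; the heterogeneity the study documents lies in where authors draw the "high RoB" line.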
Affiliation(s)
- Mahir Fidahić
- Medical Faculty, University of Tuzla, Tuzla, Canton Tuzla, Bosnia and Herzegovina
- Alexandra Tseke
- Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Livia Puljak
- Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia
18
Torp-Pedersen C, Goette A, Nielsen PB, Potpara T, Fauchier L, John Camm A, Arbelo E, Boriani G, Skjoeth F, Rumsfeld J, Masoudi F, Guo Y, Joung B, Refaat MM, Kim YH, Albert CM, Piccini J, Avezum A, Lip GYH. 'Real-world' observational studies in arrhythmia research: data sources, methodology, and interpretation. A position document from European Heart Rhythm Association (EHRA), endorsed by Heart Rhythm Society (HRS), Asia-Pacific HRS (APHRS), and Latin America HRS (LAHRS). Europace 2020; 22:831-832. [PMID: 31725156] [DOI: 10.1093/europace/euz210]
Abstract
The field of observational, or "real-world," studies is developing rapidly, with many new techniques introduced and an increased understanding of traditional methods. For this reason, the current paper provides an overview of current methods, with a focus on new techniques. Some highlights can be emphasized: we provide an overview of sources of data for observational studies, of sources of bias and confounding, and of the causal inference techniques that are increasingly used. The most commonly used techniques for statistical modelling are reviewed, with a focus on the important distinction between risk and prediction. The final section provides examples of common problems in reporting observational data.
Affiliation(s)
- Tatjana Potpara
- School of Medicine, Belgrade University, Belgrade, Serbia
- Cardiology Clinic, Clinical Center of Serbia, Belgrade, Serbia
- Laurent Fauchier
- Service de Cardiologie, Centre Hospitalier Universitaire Trousseau et Université de Tours, Faculté de Médecine, Tours, France
- Alan John Camm
- Molecular and Clinical Sciences Research Institute, St George's, University of London, London, UK
- Elena Arbelo
- Arrhythmia Section, Cardiology Department, Hospital Clínic, Universitat de Barcelona, Barcelona, Spain
- Institut d'Investigació August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Cardiovasculares (CIBERCV), Madrid, Spain
- Giuseppe Boriani
- Cardiology Division, Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Policlinico di Modena, Modena, Italy
- Flemming Skjoeth
- Aalborg University, Health Science and Technology, Aalborg, Denmark
- John Rumsfeld
- University of Colorado School of Medicine, Aurora, CO, USA
- Frederick Masoudi
- Division of Cardiology, Department of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Yutao Guo
- Cardiology, Chinese PLA General Hospital, Beijing, People's Republic of China
- Boyoung Joung
- Cardiology Department, Yonsei University College of Medicine, Seoul, Republic of Korea
- Marwan M Refaat
- Department of Internal Medicine, American University of Beirut Medical Center, Beirut, Lebanon
- Young-Hoon Kim
- Cardiology Department, Korea University Medical Center, Seoul, Republic of Korea
- Jonathan Piccini
- Duke Center for Atrial Fibrillation, Duke University Medical Center, Duke Clinical Research Institute, Durham, NC, USA
- Alvaro Avezum
- Dante Pazzanese Institute of Cardiology, Sao Paulo, Brazil
- Gregory Y H Lip
- Liverpool Centre for Cardiovascular Science, University of Liverpool, Liverpool Heart & Chest Hospital, Liverpool, UK
- Aalborg Thrombosis Research Unit, Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
19
Li L, Deng K, Busse JW, Zhou X, Xu C, Liu Z, Ren Y, Zou K, Sun X. A systematic survey showed important limitations in the methods for assessing drug safety among systematic reviews. J Clin Epidemiol 2020; 123:80-90. [PMID: 32247024] [DOI: 10.1016/j.jclinepi.2020.03.017]
Abstract
OBJECTIVES This study aimed to examine the design, conduct, and analysis of systematic reviews assessing drug safety through a cross-sectional survey. STUDY DESIGN AND SETTING We searched PubMed to identify systematic reviews published in the Cochrane Database of Systematic Reviews and Core Clinical Journals indexed in 2015 and randomly sampled systematic reviews assessing drug effects at a 1:1 ratio of Cochrane and non-Cochrane reviews. Teams of two investigators independently conducted study screening and collected data using prespecified, standardized questionnaires. In addition to general information, we collected details about the planning and analysis of safety outcomes. RESULTS We included 120 systematic reviews: 60 Cochrane and 60 non-Cochrane reviews. Most reviews searched PubMed/MEDLINE (n = 117, 97.5%), EMBASE (n = 105, 87.5%), and Cochrane CENTRAL (n = 110, 91.7%) and conducted independent and duplicate study selection (n = 98, 81.7%), risk of bias assessment (n = 105, 87.5%), and data collection (n = 105, 87.5%). Only nine (7.5%) reviews clearly defined safety outcomes, and seven (5.8%) defined a primary safety outcome; none stated whether the primary safety outcome was predefined. Among the 80 reviews that pooled the primary dichotomous safety data across studies, less than half (n = 33, 41%) conducted subgroup analyses to explore sources of heterogeneity or reported a GRADE assessment for the overall quality of evidence. Cochrane reviews were more likely to provide a study protocol (100% vs. 23.3%; P < 0.001), involve methodologists (53.3% vs. 20.0%; P < 0.001), and report a GRADE assessment for the primary safety outcome (70.6% vs. 19.6%; P < 0.001). CONCLUSION Our findings highlight areas for improved planning and analysis in the assessment of drug safety among systematic reviews.
Cochrane reviews were superior to non-Cochrane reviews; however, most reviews did not prespecify their safety outcomes or methods for analysis, explore sources of heterogeneity among pooled effects, or assess the overall quality of evidence with the GRADE approach.
Affiliation(s)
- Ling Li
- Chinese Evidence-Based Medicine Center, Cochrane China Center and National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Ke Deng
- Chinese Evidence-Based Medicine Center, Cochrane China Center and National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Jason W Busse
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON L8S 4K1, Canada; Department of Anesthesia, McMaster University, Hamilton, Ontario L8S 4K1, Canada; The Michael G. DeGroote Institute for Pain Research and Care, McMaster University, Hamilton, Ontario L8S 4K1, Canada; The Michael G. DeGroote Centre for Medicinal Cannabis Research, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Xu Zhou
- Evidence-Based Medicine Research Center, School of Basic Science, Jiangxi University of Traditional Chinese Medicine, Nanchang 330004, Jiangxi, China
- Chang Xu
- Chinese Evidence-Based Medicine Center, Cochrane China Center and National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Zhibin Liu
- Chinese Evidence-Based Medicine Center, Cochrane China Center and National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Yan Ren
- Chinese Evidence-Based Medicine Center, Cochrane China Center and National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Kang Zou
- Chinese Evidence-Based Medicine Center, Cochrane China Center and National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Xin Sun
- Chinese Evidence-Based Medicine Center, Cochrane China Center and National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China; Evidence-Based Medicine Research Center, School of Basic Science, Jiangxi University of Traditional Chinese Medicine, Nanchang 330004, Jiangxi, China
20
Gao Y, Cai Y, Yang K, Liu M, Shi S, Chen J, Sun Y, Song F, Zhang J, Tian J. Methodological and reporting quality in non-Cochrane systematic review updates could be improved: a comparative study. J Clin Epidemiol 2020; 119:36-46. [PMID: 31759063] [DOI: 10.1016/j.jclinepi.2019.11.012]
Abstract
OBJECTIVES The aim of the study was to compare the methodological and reporting quality of updated systematic reviews (SRs) and the original SRs. STUDY DESIGN AND SETTING We included 30 pairs of non-Cochrane updated and original SRs, identified from a search of PubMed and Embase.com. We used Assessment of Multiple Systematic Reviews-2 (AMSTAR-2) to assess methodological quality and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for reporting quality. Stratified analyses were conducted to compare the differences between updated and original SRs and to explore factors that might affect the degree of quality change. RESULTS Of the 60 non-Cochrane SRs, only two (3.3%) were of low quality, and the remaining 58 (96.7%) were of critically low quality. There were no statistically significant differences in methodological quality between the updated and original SRs, although the compliance rates for eight AMSTAR-2 items were higher in updated SRs than in original SRs. Updated SRs showed improvement on 15 PRISMA items, but no item reached a statistically significant difference. The differences in fully reported AMSTAR-2 and PRISMA items between original and updated SRs also remained statistically nonsignificant after adjusting for multiple review characteristics. CONCLUSION The methodological and reporting quality of updated SRs was not improved compared with the original SRs, and quality could be further improved for both.
Affiliation(s)
- Ya Gao
- Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou, China
- Yitong Cai
- Evidence-Based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China
- Kelu Yang
- Evidence-Based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China
- Ming Liu
- Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou, China
- Shuzhen Shi
- Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou, China
- Ji Chen
- Evidence-Based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China
- Yue Sun
- Evidence-Based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China
- Fujian Song
- Public Health and Health Services Research, Norwich Medical School, University of East Anglia, Norwich, UK
- Junhua Zhang
- Evidence-Based Medicine Center, Tianjin University of Traditional Chinese Medicine, Tianjin, China
- Jinhui Tian
- Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou, China; Evidence-Based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China; Key Laboratory of Evidence-Based Medicine and Knowledge Translation of Gansu Province, Lanzhou, China
| |
21
Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Stud 2020; 6:13. [PMID: 32699641 PMCID: PMC7003412 DOI: 10.1186/s40814-019-0544-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2019] [Accepted: 12/20/2019] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Methodological reviews (MRs), a relatively novel method of appraisal, are used to synthesize information on the methods used in health research. There are currently no guidelines available to inform the reporting of MRs. OBJECTIVES This pilot review aimed to determine the feasibility of a full review and the need for reporting guidance for methodological reviews. METHODS Search strategy: We conducted a search of PubMed, restricted to 2017 to include the most recently published studies, using different search terms often used to describe methodological reviews: "literature survey" OR "meta-epidemiologic* review" OR "meta-epidemiologic* survey" OR "methodologic* review" OR "methodologic* survey" OR "systematic survey." Data extraction: Study characteristics, including country, nomenclature, number of included studies, search strategy, a priori protocol use, and sampling methods, were extracted in duplicate and summarized. Outcomes: The primary feasibility outcomes were the sensitivity and specificity of the search terms (criterion for feasibility success set at sensitivity and specificity ≥ 70%). Analysis: Estimates are reported as point estimates (95% confidence intervals). RESULTS Two hundred thirty-six articles were retrieved, and 31 were included in the final analysis. The most accurate search term was "meta-epidemiological" (sensitivity [Sn] 48.39%, 95% CI 31.97-65.16; specificity [Sp] 97.56%, 95% CI 94.42-98.95). The majority of studies were published by authors from Canada (n = 12, 38.7%), followed by Japan and the USA (n = 4, 12.9% each). The median (interquartile range [IQR]) number of studies included in the MRs was 77 (13-1127). A search strategy was reported in most studies (n = 23, 74.2%); use of a pre-published protocol (n = 7, 22.6%) or a justifiable sampling method (n = 5, 16.1%) was rare. CONCLUSIONS Using the MR nomenclature identified, it is feasible to build a comprehensive search strategy and conduct a full review. Given the variation in reporting practices and nomenclature attributed to MRs, there is a need for guidance on standardized and transparent reporting of MRs. Future guideline development would likely include stakeholders from Canada, the USA, and Japan.
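The accuracy figures above are consistent with 15 of the 31 included MRs being retrieved by "meta-epidemiological" (sensitivity) and 200 of the 205 non-included records being correctly excluded (specificity), with Wilson score confidence intervals. A minimal sketch; the raw counts are reconstructed from the reported percentages, not taken from the paper's data:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Point estimate and Wilson score interval for a proportion, in percent."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return 100 * p, 100 * (center - half), 100 * (center + half)

# "meta-epidemiological": 15/31 included MRs retrieved (sensitivity),
# 200/205 non-included records correctly excluded (specificity).
sn, sn_lo, sn_hi = wilson_ci(15, 31)    # ~48.39 (31.97-65.16)
sp, sp_lo, sp_hi = wilson_ci(200, 205)  # ~97.56 (94.42-98.95)
```

Both intervals reproduce the published bounds to two decimal places, which is why the Wilson method is assumed here.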
Affiliation(s)
- Daeria O. Lawson: Department of Health Research Methods, Evidence, and Impact, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
- Alvin Leenus: Faculty of Health Sciences, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
- Lawrence Mbuagbaw: Department of Health Research Methods, Evidence, and Impact, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada; Biostatistics Unit, Father Sean O’Sullivan Research Centre, St. Joseph’s Healthcare Hamilton, Hamilton, ON L8N 4A6, Canada
22
Manchikanti L, Pampati V, Sanapati SP, Sanapati MR, Kaye AD, Hirsch JA. Evaluation of Cost-Utility of Thoracic Interlaminar Epidural Injections. Curr Pain Headache Rep 2020; 24:5. [PMID: 32002687 DOI: 10.1007/s11916-020-0838-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
PURPOSE OF REVIEW Chronic thoracic pain, although not as prevalent as low back and neck pain, occurs in approximately 30% of the general population. The severity of thoracic pain and the degree of disability appear similar to those of other painful conditions. Despite this severity, interventions for managing chronic thoracic pain are less frequent, and there is a paucity of literature regarding epidural injections and facet joint interventions. RECENT FINDINGS As with the lumbar and cervical spine, a multitude of interventions is offered for managing chronic thoracic pain, including interventional techniques with epidural injections and facet joint interventions. A single randomized controlled trial (RCT) has been published with a 2-year follow-up of clinical effectiveness. However, no cost-utility analysis studies pertaining to either epidural injections or facet joint interventions in thoracic pain have been published. Based on the results of the RCT, a cost-utility analysis of thoracic interlaminar epidural injections was undertaken. The cost-utility of thoracic interlaminar epidural injections, with or without steroids, in managing thoracic disc herniation, thoracic spinal stenosis, and thoracic discogenic or axial pain was assessed in 110 patients with a 2-year follow-up. Direct payment data from 2018 were used for procedural costs and indirect costs. Overall costs, including drug costs, were determined by multiplying direct procedural payments by a factor of 1.67 (equivalently, adding indirect payments of 40% of the overall cost). The cost-utility analysis showed a direct procedural cost of USD $1943.19, whereas the total estimated cost per quality-adjusted life year (QALY) was USD $3245.12.
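The costing step described above is a simple markup: if indirect payments are taken as 40% of the overall cost, then overall = direct / (1 - 0.40), i.e. roughly direct × 1.67. A sketch of that arithmetic; the 1.67 factor and the direct cost are from the abstract, while the variable names and the exact-division variant are our illustration:

```python
# Overall cost per QALY when indirect payments equal 40% of the overall cost:
# overall = direct / (1 - 0.40), which is approximately direct * 1.67.
DIRECT_COST = 1943.19   # USD, direct procedural cost reported in the abstract
INDIRECT_SHARE = 0.40

overall_exact = DIRECT_COST / (1 - INDIRECT_SHARE)  # ~3238.65 using the exact 1/0.6 markup
overall_factor = round(DIRECT_COST * 1.67, 2)       # 3245.13, within rounding of the reported 3245.12
```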
Affiliation(s)
- Laxmaiah Manchikanti: Anesthesiology and Perioperative Medicine, University of Louisville, Louisville, KY, USA; Department of Anesthesiology, School of Medicine, LSU Health Sciences Center, New Orleans, LA, USA; Pain Management Centers of America, 2831 Lone Oak Road, Paducah, KY, 42003, USA; Pain Management Centers of America, 67 Lakeview Dr., Paducah, KY, 42001, USA
- Vidyasagar Pampati: Pain Management Centers of America, 67 Lakeview Dr., Paducah, KY, 42001, USA
- Mahendra R Sanapati: Pain Management Centers of America, 1101 Professional Blvd Ste 100, Evansville, IN, 47714, USA
- Alan D Kaye: Department of Anesthesiology and Pharmacology, Toxicology, and Neurosciences, Louisiana State University School of Medicine, Shreveport, LA, USA
- Joshua A Hirsch: Neurointerventional Radiology, Neurointerventional Spine, Massachusetts General Hospital and Harvard Medical School, 55 Blossom Street, Gray 241B, Boston, MA, 02114, USA
23
Nascimento DP, Gonzalez GZ, Araujo AC, Costa LOP. Journal impact factor is associated with PRISMA endorsement, but not with the methodological quality of low back pain systematic reviews: a methodological review. Eur Spine J 2019; 29:462-479. [DOI: 10.1007/s00586-019-06206-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Revised: 09/30/2019] [Accepted: 11/02/2019] [Indexed: 12/28/2022]
24
A Bibliometric Analysis of Publications on Oxycodone from 1998 to 2017. BIOMED RESEARCH INTERNATIONAL 2019; 2019:9096201. [PMID: 31781650 PMCID: PMC6875415 DOI: 10.1155/2019/9096201] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2019] [Revised: 07/11/2019] [Accepted: 08/07/2019] [Indexed: 02/03/2023]
Abstract
Background Oxycodone is a widely used opioid analgesic, prescribed for both cancer pain and non-cancer pain. This study was intended to characterize publications in the oxycodone research field and to assess the quality of pertinent articles from 1998 to 2017. Methods Oxycodone-related publications from 1998 to 2017 were retrieved from the Web of Science (WOS) and PubMed databases. These papers were coded across several categories, such as total number, journals, countries, institutions, authors, and citation reports. Keyword co-occurrence was analyzed with the VOSviewer software. Results According to the search strategies, a total of 2659 articles on oxycodone were published worldwide from 1998 to 2017 in WOS. Among the top 10 most productive organizations, six were American institutes, two were pharmaceutical enterprises, and the other three were Finnish, Australian, and Canadian institutes, which is similar to the distribution by country/region. Drewes AM from Denmark published the most articles, and Pain Medicine was the most productive journal in the oxycodone area. Meanwhile, clinical studies occupied a dominant position during the past 20 years. The 10 most cited papers were listed; among these, eight are reviews and two are meta-analyses. In the last decade (2008–2017), the newest keywords focused on "double-blind", "randomized controlled trial", and "neuropathic pain". Conclusions The findings provide a comprehensive overview of oxycodone research. In view of the adverse effects of oxycodone, high-quality oxycodone studies, in both basic research and clinical trials, are needed.
25
Saric L, Dosenovic S, Saldanha IJ, Jelicic Kadic A, Puljak L. Conference abstracts describing systematic reviews on pain were selectively published, not reliable, and poorly reported. J Clin Epidemiol 2019; 117:1-8. [PMID: 31533073 DOI: 10.1016/j.jclinepi.2019.09.011] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2019] [Revised: 08/22/2019] [Accepted: 09/10/2019] [Indexed: 01/31/2023]
Abstract
OBJECTIVE The objective of the study was to determine the reporting quality of systematic review (SR) abstracts presented at World Congresses on Pain (WCPs) and to quantify the agreement between results presented in those abstracts and their corresponding full-length publications. STUDY DESIGN AND SETTING We screened abstracts of five WCPs held from 2008 to 2016 to identify abstracts describing SRs. Two authors searched for corresponding full publications using PubMed and Google Scholar in April 2018. Methods and outcomes extracted from abstracts were compared with their corresponding full publications. The reporting quality of abstracts was evaluated against the PRISMA for Abstracts (PRISMA-A) checklist. RESULTS We identified 143 conference abstracts describing SRs. Of these, 90 (63%) were published as full-length articles in peer-reviewed journals by April 2018, with a median time from conference presentation to publication of 5 months (interquartile range: -0.25 to 14 months). Among 79 abstract-publication pairs evaluable for discordance, some form of discordance was present in 40% of pairs. Qualitative discordance (a different direction of the effect) was found in 13 analyzed pairs (16%). The median adherence by abstracts to each PRISMA-A checklist item was 33% (interquartile range: 29% to 42%). CONCLUSION Conference abstracts of pain SRs are selectively published, not reliable, and poorly reported.
Affiliation(s)
- Lenko Saric: Department of Anesthesiology and Intensive Care Medicine, University Hospital Split, Split, Croatia
- Svjetlana Dosenovic: Department of Anesthesiology and Intensive Care Medicine, University Hospital Split, Split, Croatia
- Ian J Saldanha: Department of Health Services, Policy, and Practice, Center for Evidence Synthesis in Health, Brown University School of Public Health, Providence, Rhode Island, USA
- Livia Puljak: Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Zagreb, Croatia
26
Propadalo I, Tranfic M, Vuka I, Barcot O, Pericic TP, Puljak L. In Cochrane reviews, risk of bias assessments for allocation concealment were frequently not in line with Cochrane's Handbook guidance. J Clin Epidemiol 2019; 106:10-17. [DOI: 10.1016/j.jclinepi.2018.10.002] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Revised: 09/18/2018] [Accepted: 10/02/2018] [Indexed: 01/08/2023]
27
Belloti JC, Okamura A, Scheeren J, Faloppa F, Ynoe de Moraes V. A systematic review of the quality of distal radius systematic reviews: Methodology and reporting assessment. PLoS One 2019; 14:e0206895. [PMID: 30673700 PMCID: PMC6343870 DOI: 10.1371/journal.pone.0206895] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2017] [Accepted: 10/22/2018] [Indexed: 12/21/2022] Open
Abstract
Background Many systematic reviews (SRs) have been published about the various treatments for distal radius fractures (DRF). The heterogeneity of SR results may come from the misuse of SR methods, and literature overviews have demonstrated that SRs should be considered with caution, as they may not always be synonymous with high-quality standards. Our objective was to evaluate the quality of published SRs on the treatment of DRF using validated methodology and reporting assessment tools. Methods The methods for this review were previously published in the PROSPERO database. We considered SRs of surgical and nonsurgical interventions for acute DRF in adults. A comprehensive search strategy was run in the MEDLINE database (inception to May 2017), and we manually searched the grey literature for non-indexed research. Data were independently extracted by two authors. We assessed SR internal validity and reporting using AMSTAR (Assessment of Multiple Systematic Reviews) and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Scores were calculated as the sum of reported items. We also extracted article characteristics and provided Spearman’s correlation measurements. Results Forty-one articles fulfilled the eligibility criteria. The mean PRISMA score was 15.90 (95% CI 13.90–17.89) and the mean AMSTAR score was 6.48 (95% CI 5.72–7.23). SRs that considered only RCTs had better AMSTAR [7.56 (2.1) vs. 5.62 (2.3); p = 0.014] and PRISMA scores [18.61 (5.22) vs. 13.93 (6.47); p = 0.027]. The presence of a meta-analysis in the SRs was associated with higher PRISMA scores [19.17 (4.75) vs. 10.21 (4.51); p = 0.001] and AMSTAR scores [7.68 (1.9) vs. 4.39 (1.66); p = 0.001]. Journal impact factor and declaration of conflicts of interest did not change PRISMA and AMSTAR scores. We found substantial inter-observer agreement for PRISMA (0.82, 95% CI 0.62–0.94; p = 0.01) and AMSTAR (0.65, 95% CI 0.43–0.81; p = 0.01), and moderate correlation between PRISMA and AMSTAR scores (0.83, 95% CI 0.62–0.92; p = 0.01). Conclusions SRs of DRF that include only RCTs have better PRISMA and AMSTAR scores. These tools show substantial inter-observer agreement and moderate inter-tool correlation. We describe the current research panorama and point out factors that can contribute to improvements on the topic.
Affiliation(s)
- João Carlos Belloti: Department of Orthopedics and Traumatology, Division of Hand Surgery, Universidade Federal de São Paulo, Sao Paulo, Brazil; Grupo cirurgia da mão e microcirurgia, Hospital Alvorada Moema, São Paulo, São Paulo, Brazil
- Aldo Okamura: Department of Orthopedics and Traumatology, Division of Hand Surgery, Universidade Federal de São Paulo, Sao Paulo, Brazil; Grupo cirurgia da mão e microcirurgia, Hospital Alvorada Moema, São Paulo, São Paulo, Brazil
- Jordana Scheeren: Department of Orthopedics and Traumatology, Division of Hand Surgery, Universidade Federal de São Paulo, Sao Paulo, Brazil
- Flávio Faloppa: Department of Orthopedics and Traumatology, Division of Hand Surgery, Universidade Federal de São Paulo, Sao Paulo, Brazil; Grupo cirurgia da mão e microcirurgia, Hospital Alvorada Moema, São Paulo, São Paulo, Brazil
- Vinícius Ynoe de Moraes: Department of Orthopedics and Traumatology, Division of Hand Surgery, Universidade Federal de São Paulo, Sao Paulo, Brazil; Grupo cirurgia da mão e microcirurgia, Hospital Alvorada Moema, São Paulo, São Paulo, Brazil
28
Saletta JM, Garcia JJ, Caramês JMM, Schliephake H, da Silva Marques DN. Quality assessment of systematic reviews on vertical bone regeneration. Int J Oral Maxillofac Surg 2018; 48:364-372. [PMID: 30139710 DOI: 10.1016/j.ijom.2018.07.014] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2018] [Revised: 06/08/2018] [Accepted: 07/25/2018] [Indexed: 12/14/2022]
Abstract
The aim of this study was to evaluate and compare the quality of systematic reviews of vertical bone regeneration techniques using two quality-assessment tools (AMSTAR and ROBIS). An electronic literature search was conducted to identify systematic reviews or meta-analyses evaluating at least one of the following outcomes: implant survival, success rates, complications, or bone gain after vertical ridge augmentation. Methodological quality assessment was performed by two independent evaluators. Results were compared between reviewers, and reliability measures were calculated using Holsti's method and Cohen's kappa. Seventeen systematic reviews were included, of which seven presented a meta-analysis. The mean AMSTAR score (95% confidence interval) was 6.35 [4.74; 7.97], with higher scores correlating with a smaller risk of bias (Pearson's correlation coefficient = -0.84; P < 0.01). Cohen's inter-examiner kappa showed substantial agreement for both checklists. From the available evidence, we ascertained that, regardless of the technique used, it is possible to obtain vertical bone gains. Implant success in regenerated areas was similar to that of implants placed in pristine bone, with rates between 61.5% and 100%; guided bone regeneration was considered the most predictable technique regarding bone stability, while distraction osteogenesis achieved the largest bone gains but carried the highest risk of complications.
Affiliation(s)
- J M Saletta: Implant Department, Universidad Europea de Madrid, Madrid, Spain
- J J Garcia: Implant Department, Universidad Europea de Madrid, Madrid, Spain; CIRO, Madrid, Spain
- J M M Caramês: Oral Surgery and Implant Department, Faculdade de Medicina Dentária, Universidade de Lisboa, Lisbon, Portugal; Implantology Institute, Lisbon, Portugal; LIBPhys-FCT UID/FIS/04559/2013, Faculdade de Medicina Dentária, Universidade de Lisboa, Lisbon, Portugal
- H Schliephake: Department of Oral and Maxillofacial Surgery, University Medicine, George-Augusta-University, Göttingen, Germany
- D N da Silva Marques: Implantology Institute, Lisbon, Portugal; LIBPhys-FCT UID/FIS/04559/2013, Faculdade de Medicina Dentária, Universidade de Lisboa, Lisbon, Portugal; Centro de Estudos de Medicina Dentária Baseada na Evidência, Faculdade de Medicina Dentária, Universidade de Lisboa, Lisbon, Portugal
29
Dosenovic S, Jelicic Kadic A, Vucic K, Markovina N, Pieper D, Puljak L. Comparison of methodological quality rating of systematic reviews on neuropathic pain using AMSTAR and R-AMSTAR. BMC Med Res Methodol 2018; 18:37. [PMID: 29739339 PMCID: PMC5941595 DOI: 10.1186/s12874-018-0493-y] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2017] [Accepted: 04/16/2018] [Indexed: 12/11/2022] Open
Abstract
Background Systematic reviews (SRs) in the field of neuropathic pain (NeuP) are increasingly important for decision-making. However, methodological flaws in SRs can reduce the validity of their conclusions. Hence, it is important to assess the methodological quality of NeuP SRs critically. Additionally, it remains unclear which assessment tool should be used. We studied the methodological quality of SRs published in the field of NeuP and compared two assessment tools. Methods We systematically searched 5 electronic databases to identify SRs of randomized controlled trials of interventions for NeuP available up to March 2015. Two independent reviewers assessed the methodological quality of the studies using the Assessment of Multiple Systematic Reviews (AMSTAR) and the revised AMSTAR (R-AMSTAR) tools. The scores were converted to percentiles and ranked into 4 grades to allow comparison between the two checklists. Gwet's AC1 coefficient was used for interrater reliability assessment. Results The 97 included SRs had a wide range of methodological quality scores (AMSTAR median (IQR): 6 (5–8) vs. R-AMSTAR median (IQR): 30 (26–35)). The overall agreement score between the two raters was 0.62 (95% CI 0.39–0.86) for AMSTAR and 0.62 (95% CI 0.53–0.70) for R-AMSTAR. The 31 Cochrane systematic reviews (CSRs) were consistently ranked higher than the 66 non-Cochrane systematic reviews (NCSRs). The analysis of individual domains showed the best compliance for the comprehensive literature search (item 3) on both checklists. The least compliant domain differed between tools: conflict of interest (item 11) was the most poorly reported item on AMSTAR vs. publication bias assessment (item 10) on R-AMSTAR. A high positive correlation between total AMSTAR and R-AMSTAR scores was observed for all SRs, as well as for CSRs and NCSRs separately. Conclusions The methodological quality of the analyzed SRs in the field of NeuP was not optimal, and CSRs were of higher quality than NCSRs. Both AMSTAR and R-AMSTAR produced comparable quality ratings. Our results point to weaknesses in the methodology of existing SRs on interventions for the management of NeuP and call for future improvement through better adherence to the quality checklists, either AMSTAR or R-AMSTAR.
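For two raters, the Gwet's AC1 coefficient used for interrater reliability has a simple closed form: AC1 = (Pa - Pe) / (1 - Pe), where Pa is the observed agreement and the chance agreement is Pe = (1/(q-1)) Σk πk(1 - πk), with πk the mean of the two raters' marginal proportions for category k. A minimal sketch with made-up ratings, not the paper's data:

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement between two raters."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)
    # Observed agreement: fraction of items rated identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the mean marginal proportion of each category.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    pe = sum(
        ((counts_a[k] + counts_b[k]) / (2 * n))
        * (1 - (counts_a[k] + counts_b[k]) / (2 * n))
        for k in categories
    ) / (q - 1)
    return (pa - pe) / (1 - pe)

# Two hypothetical raters grading 8 SR items as adequate (1) / inadequate (0).
ac1 = gwet_ac1([1, 1, 0, 1, 0, 1, 1, 1], [1, 1, 0, 0, 0, 1, 1, 1])
```

Unlike Cohen's kappa, AC1 stays stable when one category dominates the marginals, which is why it is often preferred for checklist-compliance data like this.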
Affiliation(s)
- Svjetlana Dosenovic: Department of Anesthesiology and Intensive Care Medicine, University Hospital Split, Split, Croatia; Laboratory for Pain Research, University of Split School of Medicine, Soltanska 2, 21000 Split, Croatia
- Antonia Jelicic Kadic: Laboratory for Pain Research, University of Split School of Medicine, Soltanska 2, 21000 Split, Croatia; Department of Pediatrics, University Hospital Split, Split, Croatia
- Katarina Vucic: Agency for Medicinal Products and Medical Devices, Zagreb, Croatia
- Nikolina Markovina: Laboratory for Pain Research, University of Split School of Medicine, Soltanska 2, 21000 Split, Croatia
- Dawid Pieper: Institute for Research in Operative Medicine (IFOM), Witten/Herdecke University, Cologne, Germany
- Livia Puljak: Laboratory for Pain Research, University of Split School of Medicine, Soltanska 2, 21000 Split, Croatia; Agency for Quality and Accreditation in Health Care and Social Welfare, Zagreb, Croatia
30
Page MJ, Moher D. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement and extensions: a scoping review. Syst Rev 2017; 6:263. [PMID: 29258593 PMCID: PMC5738221 DOI: 10.1186/s13643-017-0663-8] [Citation(s) in RCA: 396] [Impact Index Per Article: 49.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/02/2017] [Accepted: 12/08/2017] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND The PRISMA Statement is a reporting guideline designed to improve the transparency of systematic reviews (SRs) and meta-analyses. Seven extensions to the PRISMA Statement have been published to address the reporting of different types or aspects of SRs, and another eight are in development. We performed a scoping review to map the research that has been conducted to evaluate the uptake and impact of the PRISMA Statement and its extensions. We also synthesised studies evaluating how well SRs published after the PRISMA Statement was disseminated adhere to its recommendations. METHODS We searched for meta-research studies indexed in MEDLINE® from inception to 31 July 2017 which investigated some component of the PRISMA Statement or extensions (e.g. SR adherence to PRISMA, journal endorsement of PRISMA). One author screened all records and classified the types of evidence available in the studies. We pooled data on SR adherence to individual PRISMA items across all SRs in the included studies and across SRs published after 2009 (the year PRISMA was disseminated). RESULTS We included 100 meta-research studies. The most common type of evidence available was data on SR adherence to the PRISMA Statement, which has been evaluated in 57 studies assessing 6487 SRs. The pooled results of these studies suggest that reporting of many items in the PRISMA Statement is suboptimal, even in the 2382 SRs published after 2009 (where nine items were adhered to by fewer than 67% of SRs). Few meta-research studies have evaluated the adherence of SRs to the PRISMA extensions or strategies to increase adherence to the PRISMA Statement and extensions. CONCLUSIONS Many studies have evaluated how well SRs adhere to the PRISMA Statement, and their pooled results suggest that reporting of many items is suboptimal. An update of the PRISMA Statement, along with a toolkit of strategies to help journals endorse and implement the updated guideline, may improve the transparency of SRs.
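Pooling adherence to an individual PRISMA item across evaluation studies reduces, in the simplest case, to dividing the total number of adherent SRs by the total number of SRs assessed, which weights each study by its sample size. A toy sketch; the study counts below are invented for illustration and are not the review's data:

```python
# Per-study (adherent SRs, assessed SRs) for one hypothetical PRISMA item.
studies = [(40, 120), (15, 60), (55, 200), (10, 20)]

adherent = sum(a for a, _ in studies)   # total adherent SRs across studies
assessed = sum(n for _, n in studies)   # total SRs assessed across studies
pooled_pct = 100 * adherent / assessed  # sample-size-weighted pooled adherence
```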
Affiliation(s)
- Matthew J. Page: School of Public Health and Preventive Medicine, Monash University, 553 St Kilda Road, Melbourne, VIC 3004, Australia
- David Moher: Centre for Journalology and Canadian EQUATOR Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, K1H 8L6, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, K1H 8M5, Canada
31
Tam WWS, Lo KKH, Khalechelvam P, Seah J, Goh SYS. Is the information of systematic reviews published in nursing journals up-to-date? a cross-sectional study. BMC Med Res Methodol 2017; 17:151. [PMID: 29178832 PMCID: PMC5702238 DOI: 10.1186/s12874-017-0432-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Accepted: 11/16/2017] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND An up-to-date systematic review is important for researchers deciding whether to embark on new research or continue supporting ongoing studies. The aim of this study was to examine the time taken between the last search, submission, acceptance, and publication dates of systematic reviews published in nursing journals. METHODS Nursing journals indexed in Journal Citation Reports were first identified. Thereafter, systematic reviews published in these journals in 2014 were extracted from three databases. The quality of the systematic reviews was evaluated with AMSTAR. The last search, submission, acceptance, online publication, and full publication dates, together with other characteristics of the systematic reviews, were recorded, and the time taken between the five dates was computed. Descriptive statistics were used to summarize the time differences; non-parametric statistics were used to examine the association between the time taken from the last search to full publication and other potential factors, including funding support, submission during holiday periods, number of records retrieved from the database search, inclusion of a meta-analysis, and quality of the review. RESULTS A total of 107 nursing journals were included in this study, from which 1070 articles were identified through the database search. After screening for eligibility, 202 systematic reviews were included in the analysis. The quality of these reviews was low, with a median AMSTAR score of 3 out of 11. A total of 172 (85.1%), 72 (35.6%), 153 (75.7%), and 149 (73.8%) systematic reviews provided their last search, submission, acceptance, and online publication dates, respectively. The median numbers of days from the last search to acceptance and to full publication were, respectively, 393 (IQR: 212-609) and 669 (427-915), whereas that from submission to full publication was 365 (243-486). The median numbers of days from the last search to submission and from submission to online publication were 167.5 (53.5-427) and 153 (92-212), respectively. No significant associations were found between the time lag and the potential factors examined. CONCLUSION The median time from the last search to acceptance for systematic reviews published in nursing journals was 393 days. Readers of systematic reviews are advised to check the last search date of a review to ensure that up-to-date evidence is consulted for effective clinical decision-making.
Affiliation(s)
- Wilson W. S. Tam: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore 117597, Singapore
- Kenneth K. H. Lo: 4/F, JC School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Parames Khalechelvam: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore 117597, Singapore
- Joey Seah: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore 117597, Singapore
- Shawn Y. S. Goh: Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore 117597, Singapore
32
Restrictive or responsive? Outcome classification and unplanned sub-group analyses in meta-analyses. Anaesthesia 2017; 73:279-283. [DOI: 10.1111/anae.14078] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/21/2017] [Indexed: 11/26/2022]