1. Kaiser I, Pfahlberg AB, Mathes S, Uter W, Diehl K, Steeb T, Heppt MV, Gefeller O. Inter-Rater Agreement in Assessing Risk of Bias in Melanoma Prediction Studies Using the Prediction Model Risk of Bias Assessment Tool (PROBAST): Results from a Controlled Experiment on the Effect of Specific Rater Training. J Clin Med 2023; 12:1976. [PMID: 36902763] [PMCID: PMC10003882] [DOI: 10.3390/jcm12051976]
Abstract
Assessing the risk of bias (ROB) of studies is an important part of conducting systematic reviews and meta-analyses in clinical medicine. Among the many existing ROB tools, the Prediction Model Risk of Bias Assessment Tool (PROBAST) is a rather new instrument specifically designed to assess the ROB of prediction studies. In our study, we analyzed the inter-rater reliability (IRR) of PROBAST and the effect of specialized training on the IRR. Six raters independently assessed the ROB of all melanoma risk prediction studies published up to 2021 (n = 42) using the PROBAST instrument. The raters evaluated the ROB of the first 20 studies without any guidance other than the published PROBAST literature. The remaining 22 studies were assessed after the raters received customized training and guidance. Gwet's AC1 was used as the primary measure to quantify pairwise and multi-rater IRR. Before training, results showed only slight to moderate IRR depending on the PROBAST domain (multi-rater AC1 ranging from 0.071 to 0.535). After training, the multi-rater AC1 ranged from 0.294 to 0.780, with a significant improvement for the overall ROB rating and two of the four domains. The largest net gain was achieved in the overall ROB rating (difference in multi-rater AC1: 0.405, 95%-CI 0.149-0.630). In conclusion, without targeted guidance the IRR of PROBAST is low, which calls into question its use as an appropriate ROB instrument for prediction studies. Intensive training and guidance manuals with context-specific decision rules are needed to apply and interpret the PROBAST instrument correctly and to ensure consistency of ROB ratings.
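Since the abstract leans on Gwet's AC1 without restating it, a minimal Python sketch of the two-rater AC1 statistic may help readers unfamiliar with the measure. The study itself reports multi-rater AC1 across six raters; the function name and toy ratings below are illustrative, not taken from the paper.

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 for two raters classifying n subjects into q >= 2 categories.

    AC1 = (pa - pe) / (1 - pe), where pa is the observed proportion of
    agreement and pe = (1 / (q - 1)) * sum_k pi_k * (1 - pi_k), with pi_k
    the mean proportion of assignments to category k across both raters.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)

    # Observed agreement: fraction of subjects both raters classified alike.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Mean category proportions across the two raters drive the chance term.
    counts = Counter(ratings_a) + Counter(ratings_b)
    pe = sum((counts[k] / (2 * n)) * (1 - counts[k] / (2 * n))
             for k in categories) / (q - 1)

    return (pa - pe) / (1 - pe)

# Toy example: two raters grade the ROB of six studies as low/high/unclear.
print(gwet_ac1(["low", "high", "low", "unclear", "high", "low"],
               ["low", "high", "high", "unclear", "high", "low"]))
```

Unlike Cohen's kappa, AC1 builds its chance-agreement term from how far category prevalences sit from uniformity, which keeps it stable when ratings pile up in one category; that is a plausible reason for choosing it when ROB ratings are skewed.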
Affiliation(s)
- Isabelle Kaiser (corresponding author)
  - Department of Medical Informatics, Biometry and Epidemiology, Friedrich Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Annette B. Pfahlberg
  - Department of Medical Informatics, Biometry and Epidemiology, Friedrich Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Sonja Mathes
  - Department of Dermatology and Allergy Biederstein, Faculty of Medicine, Technical University of Munich, 80802 Munich, Germany
- Wolfgang Uter
  - Department of Medical Informatics, Biometry and Epidemiology, Friedrich Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Katharina Diehl
  - Department of Medical Informatics, Biometry and Epidemiology, Friedrich Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Theresa Steeb
  - Department of Dermatology, University Hospital Erlangen, 91054 Erlangen, Germany
- Markus V. Heppt
  - Department of Dermatology, University Hospital Erlangen, 91054 Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-European Metropolitan Area of Nuremberg (CCC ER-EMN), 91054 Erlangen, Germany
- Olaf Gefeller
  - Department of Medical Informatics, Biometry and Epidemiology, Friedrich Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
2. Anderson R, Booth A, Eastwood A, Rodgers M, Shaw L, Thompson Coon J, Briscoe S, Cantrell A, Chambers D, Goyder E, Nunns M, Preston L, Raine G, Thomas S. Synthesis for health services and policy: case studies in the scoping of reviews. Health Services and Delivery Research 2021; 9(15). [DOI: 10.3310/hsdr09150]
Abstract
Background
For systematic reviews to be rigorous, deliverable and useful, they need a well-defined review question. Scoping for a review also requires the specification of clear inclusion criteria and planned synthesis methods. Guidance is lacking on how to develop these, especially in the context of undertaking rapid and responsive systematic reviews to inform health services and health policy.
Objective
This report describes and discusses the experiences of review scoping at three commissioned research centres that conducted evidence syntheses to inform health and social care organisation, delivery and policy in the UK between 2017 and 2020.
Data sources
Sources included researcher recollection, project meeting minutes, e-mail correspondence with stakeholders, and scoping searches, covering the period from allocation of a review topic through to agreement of the review protocol.
Methods
We produced eight descriptive case studies of selected reviews from the three teams. From these case studies, we identified key issues that shape the processes of scoping and question formulation for evidence synthesis. These issues were then discussed and lessons drawn.
Findings
Across the eight diverse case studies, we identified 14 recurrent issues that were important in shaping the scoping processes and formulating a review’s questions. There were ‘consultative issues’ that related to securing input from review commissioners, policy customers, experts, patients and other stakeholders. These included managing and deciding priorities, reconciling different priorities/perspectives, achieving buy-in and engagement, educating the end-user about synthesis processes and products, and managing stakeholder expectations. There were ‘interface issues’ that related to the interaction between the review team and potential review users. These included identifying the niche/gap and optimising value, assuring and balancing rigour/reliability/relevance, and assuring the transferability/applicability of study evidence to specific policy/service user contexts. There were also ‘technical issues’ that were associated with the methods and conduct of the review. These were choosing the method(s) of synthesis, balancing fixed and fluid review questions/components/definitions, taking stock of what research already exists, mapping versus scoping versus reviewing, scoping/relevance as a continuous process and not just an initial stage, and calibrating general compared with specific and broad compared with deep coverage of topics.
Limitations
As a retrospective joint reflection by review teams on their experiences of scoping processes, this report is not based on prospectively collected research data. In addition, our evaluations were not externally validated by, for example, policy and service evidence users or patients and the public.
Conclusions
We have summarised our reflections on scoping from this programme of reviews as 14 common issues and 28 practical ‘lessons learned’. Effective scoping of rapid, responsive reviews extends beyond information exchange and technical procedures for specifying a ‘gap’ in the evidence. These considerations work alongside social processes, in particular the building of relationships and shared understanding between reviewers, research commissioners and potential review users that may be reflective of consultancy, negotiation and co-production models of research and information use.
Funding
This report is based on work commissioned by the National Institute for Health Research (NIHR) Health Services and Delivery Research (HSDR) programme from three university-based evidence synthesis centres set up to inform the organisation, delivery and commissioning of health and social care: at the University of Exeter (NIHR 16/47/22), the University of Sheffield (NIHR 16/47/17) and the University of York (NIHR 16/47/11). The report itself was commissioned and funded as a review project (NIHR132708) within the NIHR HSDR programme and will be published in full in Health Services and Delivery Research, Vol. 9, No. 15. See the NIHR Journals Library website for further project information.
Affiliation(s)
- Rob Anderson
  - Exeter Health Services and Delivery Research Evidence Synthesis Centre, Institute of Health Research, University of Exeter Medical School, Exeter, UK
- Andrew Booth
  - Sheffield Health Services and Delivery Research Evidence Synthesis Centre, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Alison Eastwood
  - York Health Service and Delivery Research Evidence Synthesis Centre, Centre for Reviews and Dissemination, University of York, York, UK
- Mark Rodgers
  - York Health Service and Delivery Research Evidence Synthesis Centre, Centre for Reviews and Dissemination, University of York, York, UK
- Liz Shaw
  - Exeter Health Services and Delivery Research Evidence Synthesis Centre, Institute of Health Research, University of Exeter Medical School, Exeter, UK
- Jo Thompson Coon
  - Exeter Health Services and Delivery Research Evidence Synthesis Centre, Institute of Health Research, University of Exeter Medical School, Exeter, UK
  - National Institute for Health Research Applied Research Collaboration South West Peninsula, Devon, Cornwall and Somerset, UK
- Simon Briscoe
  - Exeter Health Services and Delivery Research Evidence Synthesis Centre, Institute of Health Research, University of Exeter Medical School, Exeter, UK
- Anna Cantrell
  - Sheffield Health Services and Delivery Research Evidence Synthesis Centre, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Duncan Chambers
  - Sheffield Health Services and Delivery Research Evidence Synthesis Centre, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Elizabeth Goyder
  - Sheffield Health Services and Delivery Research Evidence Synthesis Centre, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Michael Nunns
  - Exeter Health Services and Delivery Research Evidence Synthesis Centre, Institute of Health Research, University of Exeter Medical School, Exeter, UK
- Louise Preston
  - Sheffield Health Services and Delivery Research Evidence Synthesis Centre, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Gary Raine
  - York Health Service and Delivery Research Evidence Synthesis Centre, Centre for Reviews and Dissemination, University of York, York, UK
- Sian Thomas
  - York Health Service and Delivery Research Evidence Synthesis Centre, Centre for Reviews and Dissemination, University of York, York, UK
3. Desmarais SL. Commentary: Risk Assessment in the Age of Evidence-Based Practice and Policy. International Journal of Forensic Mental Health 2017; 16:18-22. [PMID: 30111986] [PMCID: PMC6089527] [DOI: 10.1080/14999013.2016.1266422]
Abstract
Risk assessment has come to be recognized as a key component of evidence-based practice and policy in psychiatric and correctional agencies. At the same time, however, there is significant debate in scientific, policy, and public arenas regarding the role of risk assessment instruments in mental health and criminal justice decision-making, and there are questions regarding the level of evidence supporting their usefulness. It is in light of these conflicting realities that the current commentary considers Williams, Wormith, Bonta and Sitarenios' (2017) re-examination of the Singh, Grann, and Fazel (2011) meta-analysis and the recommendations made in "The Use of Meta-Analysis to Compare and Select Offender Risk Instruments." Additional limitations of the extant risk assessment research are identified and their implications for evidence-based practice and policy are discussed.
5. Sundqvist G, Bohlin I, Hermansen EAT, Yearley S. Formalization and separation: A systematic basis for interpreting approaches to summarizing science for climate policy. Social Studies of Science 2015; 45:416-440. [PMID: 26477199] [DOI: 10.1177/0306312715583737]
Abstract
In studies of environmental issues, the question of how to establish a productive interplay between science and policy is widely debated, especially in relation to climate change. The aim of this article is to advance this discussion and contribute to a better understanding of how science is summarized for policy purposes by bringing together two academic discussions that usually take place in parallel: how to deal with formalization (structuring the procedures for assessing and summarizing research, e.g. by protocols) and with separation (maintaining a boundary between science and policy in processes of synthesizing science for policy). Combining the two dimensions, we draw a diagram onto which different initiatives can be mapped. High degrees of formalization and separation are key components of the canonical image of scientific practice. Influential Science and Technology Studies analysts, however, are well known for their critiques of attempts at separation and formalization. Three examples of summarizing research for policy purposes are presented and mapped onto the diagram: the Intergovernmental Panel on Climate Change, the European Union's Science for Environment Policy initiative, and the UK Committee on Climate Change. These examples bring out salient differences in how formalization and separation are dealt with. Discussing the space opened up by the diagram, as well as the limitations of the attraction to its endpoints, we argue that policy analyses, including much Science and Technology Studies work, need a more nuanced understanding of the two crucial dimensions of formalization and separation. Accordingly, two analytical claims are presented: one concerning trajectories (how organizations represented in the diagram move over time) and one concerning mismatches (how organizations fail to handle the two dimensions well in practice).
6. Doi SAR, Barendregt JJ, Khan S, Thalib L, Williams GM. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model. Contemp Clin Trials 2015; 45:130-138. [PMID: 26003435] [DOI: 10.1016/j.cct.2015.05.009]
Abstract
This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates under the RE model can be resolved by using an estimator under the fixed effect (FE) model assumption with a quasi-likelihood based variance structure: the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46), which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89), which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented in the free meta-analysis software MetaXL, which can be downloaded from www.epigear.com.
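As a reading aid, here is a minimal Python sketch of an IVhet-style estimator as the abstract describes it: a fixed-effect (inverse-variance) point estimate paired with a heterogeneity-inflated variance. The quasi-likelihood variance form, sum_i w_i^2 (v_i + tau^2) with tau^2 from the DerSimonian-Laird estimator, is our reading of the model rather than a transcription of it; consult the paper or MetaXL for the authoritative definition.

```python
import numpy as np

def ivhet(effects, variances):
    """Sketch of an IVhet-style pooled estimate and variance.

    The point estimate uses fixed-effect inverse-variance weights; the
    variance inflates each study variance by a DerSimonian-Laird tau^2
    (assumed form: Var = sum_i w_i^2 * (v_i + tau^2), w_i normalized).
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    w_norm = w / w.sum()

    # Fixed-effect (inverse-variance) pooled point estimate.
    theta = np.sum(w_norm * y)

    # DerSimonian-Laird estimate of between-study variance tau^2.
    q_stat = np.sum(w * (y - theta) ** 2)
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q_stat - (len(y) - 1)) / c)

    # Heterogeneity-inflated variance under fixed weights.
    var_theta = np.sum(w_norm ** 2 * (v + tau2))
    return theta, var_theta

# Toy example: five log-odds ratios with their within-study variances.
theta, var = ivhet([0.10, -0.30, 0.20, 0.05, -0.10],
                   [0.04, 0.09, 0.05, 0.02, 0.08])
print(f"pooled log-OR {theta:.3f}, 95% CI "
      f"({theta - 1.96 * var ** 0.5:.3f}, {theta + 1.96 * var ** 0.5:.3f})")
```

On this reading, the point estimate coincides with the fixed-effect one, so large studies keep their weight, while the interval widens with heterogeneity. That matches the abstract's magnesium example, where IVhet favors the larger trials yet reports more uncertainty than the RE model.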
Affiliation(s)
- Suhail A R Doi
  - Research School of Population Health, Australian National University, Canberra, Australia
- Jan J Barendregt
  - Epigear International, Sunrise Beach, Australia; School of Population Health, University of Queensland, Brisbane, Australia
- Shahjahan Khan
  - School of Agricultural, Computational and Environmental Sciences, University of Southern Queensland, Toowoomba, Australia
- Lukman Thalib
  - Department of Community Medicine, Kuwait University, Kuwait
- Gail M Williams
  - School of Population Health, University of Queensland, Brisbane, Australia
7. Shadish WR, Lecy JD. The meta-analytic big bang. Res Synth Methods 2015; 6:246-264.
Abstract
This article examines the impact of meta-analysis and then explores why it was developed when it was, by the scholars it was, in the social sciences of the 1970s. For the first question, it examines the impact of meta-analysis using citation network analysis. That impact is seen across the sciences and the arts and humanities, and in such contemporaneous developments as multilevel modeling, medical statistics, qualitative methods, program evaluation, and single-case design. Using a constrained snowball sample of citations, we highlight key articles that are either most highly cited or most central to the systematic review network. The article then examines why meta-analysis came to be in the social sciences in the 1970s through the work of Gene Glass, Robert Rosenthal, and Frank Schmidt, each of whom developed similar theories of meta-analysis at about the same time. The article ends by explaining how Simonton's chance configuration theory and Campbell's evolutionary epistemology can illuminate why meta-analysis emerged with these scholars when it did, rather than in the medical sciences.
Affiliation(s)
- William R Shadish
  - School of Social Sciences, Humanities and Arts, University of California, Merced, CA, USA
- Jesse D Lecy
  - 420 Maxwell School of Syracuse University, Syracuse, NY, USA