1. Thiessen M, Vogel JA, Byyny RL, Hopkins E, Haukoos JS, Kendall JL, Trent SA. Emergency Ultrasound Literature and Adherence to Standards for Reporting of Diagnostic Accuracy Criteria. J Emerg Med 2019; 58:636-646. [PMID: 31708317] [DOI: 10.1016/j.jemermed.2019.09.029]
Abstract
BACKGROUND Given the wide usage of emergency point-of-care ultrasound (EUS) among emergency physicians (EPs), rigorous study surrounding its accuracy is essential. The Standards for Reporting of Diagnostic Accuracy (STARD) criteria were established to ensure robust reporting methodology for diagnostic studies. Adherence to the STARD criteria among EUS diagnostic studies has yet to be reported. OBJECTIVES Our objective was to evaluate a body of EUS literature shortly after STARD publication for its baseline adherence to the STARD criteria. METHODS EUS studies in 5 emergency medicine journals from 2005-2010 were evaluated for their adherence to the STARD criteria. Manuscripts were selected for inclusion if they reported original research and described the use of 1 of 10 diagnostic ultrasound modalities designated as "core emergency ultrasound applications" in the 2008 American College of Emergency Physicians Ultrasound Guidelines. Literature search identified 307 studies; of these, 45 met inclusion criteria for review. RESULTS The median STARD score was 15 (interquartile range [IQR] 12-17), representing 60% of the 25 total STARD criteria. The median STARD score among articles that reported diagnostic accuracy was significantly higher than those that did not report accuracy (17 [IQR 15-19] vs. 11 [IQR 9-13], respectively; p < 0.0001). Seventy-one percent of articles met ≥50% of the STARD criteria (56-84%) and 4% met >80% of the STARD criteria. CONCLUSIONS Significant opportunities exist to improve methodological reporting of EUS research. Increased adherence to the STARD criteria among diagnostic EUS studies will improve reporting and improve our ability to compare outcomes.
Affiliation(s)
- Molly Thiessen: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Jody A Vogel: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Richard L Byyny: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Emily Hopkins: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado
- Jason S Haukoos: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado; Department of Epidemiology, Colorado School of Public Health, Aurora, Colorado
- John L Kendall: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Stacy A Trent: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
2. Adequate Reporting of Dental Diagnostic Accuracy Studies is Lacking: An Assessment of Reporting in Relation to the Standards for Reporting of Diagnostic Accuracy Studies Statement. J Evid Based Dent Pract 2019; 19:283-294. [DOI: 10.1016/j.jebdp.2019.02.002]
3. Blanco D, Altman D, Moher D, Boutron I, Kirkham JJ, Cobo E. Scoping review on interventions to improve adherence to reporting guidelines in health research. BMJ Open 2019; 9:e026589. [PMID: 31076472] [PMCID: PMC6527996] [DOI: 10.1136/bmjopen-2018-026589]
Abstract
OBJECTIVES The goal of this study was to identify, analyse and classify interventions to improve adherence to reporting guidelines, in order to obtain a wide picture of how the problem of enhancing the completeness of reporting of biomedical literature has been tackled so far. DESIGN Scoping review. SEARCH STRATEGY We searched the MEDLINE, EMBASE and Cochrane Library databases and conducted a grey literature search for (1) studies evaluating interventions to improve adherence to reporting guidelines in health research and (2) other types of references describing interventions that have been performed or suggested but never evaluated. The characteristics and effect of the evaluated interventions were analysed. Moreover, we explored the rationale of the interventions identified and determined the existing gaps in research on the evaluation of interventions to improve adherence to reporting guidelines. RESULTS 109 references containing 31 interventions (11 evaluated) were included. These were grouped into five categories: (1) training on the use of reporting guidelines, (2) improving understanding, (3) encouraging adherence, (4) checking adherence and providing feedback, and (5) involvement of experts. Additionally, we identified a lack of evaluated interventions (1) on training on the use of reporting guidelines and improving their understanding, (2) at early stages of research and (3) after the final acceptance of the manuscript. CONCLUSIONS This scoping review identified a wide range of strategies to improve adherence to reporting guidelines that can be taken by different stakeholders. Additional research is needed to assess the effectiveness of many of these interventions.
Affiliation(s)
- David Blanco: Statistics and Operations Research, Universitat Politècnica de Catalunya, Barcelona, Spain
- Doug Altman: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- David Moher: Centre for Journalology, Ottawa Hospital Research Institute, Ottawa, Canada
- Isabelle Boutron: Centre d'Épidémiologie Clinique, Université Paris Descartes, Paris, France
- Jamie J Kirkham: Biostatistics, University of Liverpool, Liverpool, Merseyside, UK
- Erik Cobo: Statistics and Operations Research, Universitat Politècnica de Catalunya, Barcelona, Spain
4. Zarei F, Zeinali-Rafsanjani B. Assessment of Adherence of Diagnostic Accuracy Studies Published in Radiology Journals to STARD Statement Indexed in Web of Science, PubMed & Scopus in 2015. J Biomed Phys Eng 2018; 8:311-324. [PMID: 30320035] [PMCID: PMC6169121]
Abstract
RATIONALE AND OBJECTIVE The objective of this study was to evaluate the adherence of diagnostic accuracy studies published in radiology journals, indexed in different databases, to the 2015 STARD guideline. MATERIALS AND METHODS Several databases were searched to identify suitable journals. Of 84 English-language radiology journals, 31 were selected randomly. The same search fields and search terms were used to identify the articles. All items of the 2015 STARD checklist were taken into account when assessing the articles' adherence to the standard. A total STARD score for each article was calculated by summing the number of reported items. RESULTS 151 articles from 31 journals were evaluated for adherence to the STARD standard. The articles adhered most closely to the STARD standard for the participants item in the materials and methods section, the discussion section, and the title or abstract. Conversely, most articles did not adhere to the "other information" items, which are new in STARD 2015. Among the radiology diagnostic accuracy articles, only one (0.66%) reported a registration number and 10 (6.62%) provided a link to the full study protocol. More than 60% of articles adhered to the ethics (69.54%) and source of support (63.58%) items. CONCLUSIONS The radiology diagnostic accuracy studies adhered to 69.45% of the STARD items, which shows an improvement in the reporting of diagnostic accuracy articles in comparison to previous studies.
Affiliation(s)
- F. Zarei: Department of Medical Journalism, School of Para-Medicine, Shiraz University of Medical Sciences, Shiraz, Iran; Medical Imaging Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- B. Zeinali-Rafsanjani: Medical Imaging Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
5. Lefebvre C, Glanville J, Beale S, Boachie C, Duffy S, Fraser C, Harbour J, McCool R, Smith L. Assessing the performance of methodological search filters to improve the efficiency of evidence information retrieval: five literature reviews and a qualitative study. Health Technol Assess 2017; 21:1-148. [PMID: 29188764] [DOI: 10.3310/hta21690]
Abstract
BACKGROUND Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. OBJECTIVES This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. METHODS Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. RESULTS The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. 
Interviewing and questioning searchers via a questionnaire found that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was most frequently mentioned by both groups as the resource consulted to select a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were most frequently mentioned. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important. LIMITATIONS The questionnaire could have been shorter and could have included more multiple choice questions, and the reviews of filter performance focused on only four study designs. CONCLUSIONS Search filter studies should use a representative reference standard and explicitly report methods and results. Performance measures should be presented systematically and clearly. Searchers find filters useful in certain circumstances but expressed a need for more user-friendly performance information to aid filter choice. We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. 
FUNDING The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496).
Affiliation(s)
- Carol Lefebvre: UK Cochrane Centre, Oxford, UK; Lefebvre Associates Ltd, Oxford, UK
- Charles Boachie: Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Cynthia Fraser: Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Lynne Smith: Healthcare Improvement Scotland, Glasgow, UK
6. Sekula P, Mallett S, Altman DG, Sauerbrei W. Did the reporting of prognostic studies of tumour markers improve since the introduction of REMARK guideline? A comparison of reporting in published articles. PLoS One 2017; 12:e0178531. [PMID: 28614415] [PMCID: PMC5470677] [DOI: 10.1371/journal.pone.0178531]
Abstract
Although biomarkers are perceived as highly relevant for future clinical practice, few biomarkers reach clinical utility, for several reasons. Among them, poor reporting of studies is one of the major problems. To aid improvement, reporting guidelines such as REMARK for tumour marker prognostic (TMP) studies were introduced several years ago. The aims of this project were to assess whether the reporting quality of TMP studies has improved in comparison to a previously conducted study assessing the reporting quality of TMP studies (PRE-study), and to assess whether articles citing REMARK (citing group) are better reported than articles not citing REMARK (not-citing group). For the POST-study, recent articles citing and not citing REMARK (53 each) were identified in selected journals through a systematic literature search and evaluated in the same way as in the PRE-study. Ten of the 20 items of the REMARK checklist were evaluated and used to define an overall score of reporting quality. The observed overall scores were 53.4% (range: 10%-90%) for the PRE-study, 57.7% (range: 20%-100%) for the not-citing group and 58.1% (range: 30%-100%) for the citing group of the POST-study. While there is no difference between the two groups of the POST-study, the POST-study shows a slight but not relevant improvement in reporting relative to the PRE-study. Not all articles in the citing group cited REMARK appropriately. Irrespective of whether REMARK was cited, the overall score was slightly higher for articles published in journals requesting adherence to REMARK than for those published in journals not requesting it: 59.9% versus 51.9%, respectively. Several years after the introduction of REMARK, many key items of TMP studies are still very poorly reported. A combined effort is needed from authors, editors, reviewers and methodologists to improve the current situation. Good reporting is not just nice to have but is essential for any research to be useful.
Affiliation(s)
- Peggy Sekula: Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center, University of Freiburg, Freiburg, Germany
- Susan Mallett: Institute of Applied Health Research, University of Birmingham, Edgbaston, Birmingham, United Kingdom
- Douglas G Altman: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, United Kingdom
- Willi Sauerbrei: Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center, University of Freiburg, Freiburg, Germany
7. Fidalgo BMR, Crabb DP, Lawrenson JG. Methodology and reporting of diagnostic accuracy studies of automated perimetry in glaucoma: evaluation using a standardised approach. Ophthalmic Physiol Opt 2015; 35:315-323. [PMID: 25913874] [DOI: 10.1111/opo.12208]
Abstract
PURPOSE To evaluate methodological and reporting quality of diagnostic accuracy studies of perimetry in glaucoma and to determine whether there had been any improvement since the publication of the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. METHODS A systematic review of English language articles published between 1993 and 2013 reporting the diagnostic accuracy of perimetry in glaucoma. Articles were appraised for methodological quality using the 14-item Quality assessment tool for diagnostic accuracy studies (QUADAS) and evaluated for quality of reporting by applying the STARD checklist. RESULTS Fifty-eight articles were appraised. Overall methodological quality of these studies was moderate with a median number of QUADAS items rated as 'yes' equal to nine (out of a maximum of 14) (IQR 7-10). The studies were often poorly reported; median score of STARD items fully reported was 11 out of 25 (IQR 10-14). A comparison of the studies published in 10-year periods before and after the publication of the STARD checklist in 2003 found quality of reporting had not substantially improved. CONCLUSIONS Methodological and reporting quality of diagnostic accuracy studies of perimetry is sub-optimal and appears not to have improved substantially following the development of the STARD reporting guidance. This observation is consistent with previous studies in ophthalmology and in other medical specialities.
Affiliation(s)
- Bruno M R Fidalgo: Division of Optometry and Visual Science, City University London, London, UK
8. Hu ZD. STARD guideline in diagnostic accuracy tests: perspective from a systematic reviewer. Ann Transl Med 2016; 4:46. [PMID: 26904568] [DOI: 10.3978/j.issn.2305-5839.2016.01.03]
Affiliation(s)
- Zhi-De Hu: Department of Laboratory Medicine, General Hospital of Ji'nan Military Region of PLA, Shandong 250031, China
9. Stevens A, Shamseer L, Weinstein E, Yazdi F, Turner L, Thielman J, Altman DG, Hirst A, Hoey J, Palepu A, Schulz KF, Moher D. Relation of completeness of reporting of health research to journals' endorsement of reporting guidelines: systematic review. BMJ 2014; 348:g3804. [PMID: 24965222] [PMCID: PMC4070413] [DOI: 10.1136/bmj.g3804]
Abstract
OBJECTIVE To assess whether the completeness of reporting of health research is related to journals' endorsement of reporting guidelines. DESIGN Systematic review. DATA SOURCES Reporting guidelines from a published systematic review and the EQUATOR Network (October 2011). Studies assessing the completeness of reporting by using an included reporting guideline (termed "evaluations") (1990 to October 2011; addendum searches in January 2012) from searches of either Medline, Embase, and the Cochrane Methodology Register or Scopus, depending on reporting guideline name. STUDY SELECTION English language reporting guidelines that provided explicit guidance for reporting, described the guidance development process, and indicated use of a consensus development process were included. The CONSORT statement was excluded, as evaluations of adherence to CONSORT had previously been reviewed. English or French language evaluations of included reporting guidelines were eligible if they assessed the completeness of reporting of studies as a primary intent and those included studies enabled the comparisons of interest (that is, after versus before journal endorsement and/or endorsing versus non-endorsing journals). DATA EXTRACTION Potentially eligible evaluations of included guidelines were screened initially by title and abstract and then as full text reports. If eligibility was unclear, authors of evaluations were contacted; journals' websites were consulted for endorsement information where needed. The completeness of reporting of reporting guidelines was analyzed in relation to endorsement by item and, where consistent with the authors' analysis, a mean summed score. RESULTS 101 reporting guidelines were included. Of 15,249 records retrieved from the search for evaluations, 26 evaluations that assessed completeness of reporting in relation to endorsement for nine reporting guidelines were identified. 
Of those, 13 evaluations assessing seven reporting guidelines (BMJ economic checklist, CONSORT for harms, PRISMA, QUOROM, STARD, STRICTA, and STROBE) could be analyzed. Reporting guideline items were assessed by few evaluations. CONCLUSIONS The completeness of reporting of only nine of 101 health research reporting guidelines (excluding CONSORT) has been evaluated in relation to journals' endorsement. Items from seven reporting guidelines were quantitatively analyzed, by few evaluations each. Insufficient evidence exists to determine the relation between journals' endorsement of reporting guidelines and the completeness of reporting of published health research reports. Journal editors and researchers should consider collaborative prospectively designed, controlled studies to provide more robust evidence. SYSTEMATIC REVIEW REGISTRATION Not registered; no known register currently accepts protocols for methodology systematic reviews.
Affiliation(s)
- Adrienne Stevens: Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Larissa Shamseer: Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, ON, Canada
- Erica Weinstein: Albert Einstein College of Medicine, Yeshiva University, Bronx, NY, USA
- Fatemeh Yazdi: Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Lucy Turner: Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Justin Thielman: Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Douglas G Altman: Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- Allison Hirst: Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- John Hoey: Population and Public Health Initiative, Queen's University, Kingston, ON, Canada
- Anita Palepu: Centre for Health Evaluation and Outcome Sciences, St Paul's Hospital, Vancouver, BC, Canada; Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Kenneth F Schulz: International Clinical Sciences Support Center, FHI 360, Durham, NC, USA
- David Moher: Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, ON, Canada
10. Misra S, Barth JH. How good is the evidence base for test selection in clinical guidelines? Clin Chim Acta 2014; 432:27-32. [DOI: 10.1016/j.cca.2014.01.040]
11. Scientific reporting is suboptimal for aspects that characterize genetic risk prediction studies: a review of published articles based on the Genetic RIsk Prediction Studies statement. J Clin Epidemiol 2014; 67:487-499. [DOI: 10.1016/j.jclinepi.2013.10.006]
12. Walther S, Schueler S, Tackmann R, Schuetz GM, Schlattmann P, Dewey M. Compliance with STARD Checklist among Studies of Coronary CT Angiography: Systematic Review. Radiology 2014; 271:74-86. [DOI: 10.1148/radiol.13121720]
13. Meta-analyses including data from observational studies. Prev Vet Med 2014; 113:313-322. [DOI: 10.1016/j.prevetmed.2013.10.017]
14. Grindlay DJC, Dean RS, Christopher MM, Brennan ML. A survey of the awareness, knowledge, policies and views of veterinary journal Editors-in-Chief on reporting guidelines for publication of research. BMC Vet Res 2014; 10:10. [PMID: 24410882] [PMCID: PMC3922819] [DOI: 10.1186/1746-6148-10-10]
Abstract
BACKGROUND Wider adoption of reporting guidelines by veterinary journals could improve the quality of published veterinary research. The aims of this study were to assess the knowledge and views of veterinary Editors-in-Chief on reporting guidelines, identify the policies of their journals, and determine their information needs. Editors-in-Chief of 185 journals on the contact list for the International Association of Veterinary Editors (IAVE) were surveyed in April 2012 using an online questionnaire which contained both closed and open questions. RESULTS The response rate was 36.8% (68/185). Thirty-six of 68 editors (52.9%) stated they knew what a reporting guideline was before receiving the questionnaire. Editors said they had found out about reporting guidelines primarily through articles in other journals, via the Internet and through their own journal. Twenty of 57 respondents (35.1%) said their journal referred to reporting guidelines in its instructions to authors. CONSORT, REFLECT, and ARRIVE were the most frequently cited. Forty-four of 68 respondents (68.2%) believed that reporting guidelines should be adopted by all refereed veterinary journals. Qualitative analysis of the open questions revealed that lack of knowledge, fear, resistance to change, and difficulty in implementation were perceived as barriers to the adoption of reporting guidelines by journals. Editors suggested that reporting guidelines be promoted through communication and education of the veterinary community, with roles for the IAVE and universities. Many respondents believed a consensus policy on guideline implementation was needed for veterinary journals. CONCLUSIONS Further communication and education about reporting guidelines for editors, authors and reviewers has the potential to increase their adoption by veterinary journals in the future.
Affiliation(s)
- Douglas JC Grindlay: Centre for Evidence-based Veterinary Medicine, School of Veterinary Medicine and Science, The University of Nottingham, Sutton Bonington Campus, Loughborough LE12 5RD, UK
- Rachel S Dean: Centre for Evidence-based Veterinary Medicine, School of Veterinary Medicine and Science, The University of Nottingham, Sutton Bonington Campus, Loughborough LE12 5RD, UK
- Mary M Christopher: School of Veterinary Medicine, 4206 VM3A, University of California–Davis, One Shields Ave, Davis, CA 95616, USA
- Marnie L Brennan: Centre for Evidence-based Veterinary Medicine, School of Veterinary Medicine and Science, The University of Nottingham, Sutton Bonington Campus, Loughborough LE12 5RD, UK
15. Korevaar DA, van Enst WA, Spijker R, Bossuyt PMM, Hooft L. Reporting quality of diagnostic accuracy studies: a systematic review and meta-analysis of investigations on adherence to STARD. Evid Based Med 2014; 19:47-54. [PMID: 24368333] [DOI: 10.1136/eb-2013-101637]
Abstract
BACKGROUND Poor reporting of diagnostic accuracy studies impedes an objective appraisal of the clinical performance of diagnostic tests. The Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement, first published in 2003, aims to improve the reporting quality of such studies. OBJECTIVE To investigate to what extent published diagnostic accuracy studies adhere to the 25-item STARD checklist, whether reporting quality has improved since STARD's launch and whether any factors are associated with adherence. STUDY SELECTION We performed a systematic review and searched MEDLINE, EMBASE and the Methodology Register of the Cochrane Library for studies that primarily aimed to examine the reporting quality of articles on diagnostic accuracy studies in humans by evaluating adherence to STARD. Study selection was performed in duplicate; data were extracted by one author and verified by the second author. FINDINGS We included 16 studies, analysing 1496 articles in total. Three studies investigated adherence in a general sample of diagnostic accuracy studies; the others did so in a specific field of research. The overall mean number of items reported varied from 9.1 to 14.3 across the 13 evaluations that assessed all 25 STARD items. Six studies quantitatively compared post-STARD with pre-STARD articles. Combining these results in a random-effects meta-analysis revealed a modest but significant increase in adherence after STARD's introduction (mean difference 1.41 items (95% CI 0.65 to 2.18)). CONCLUSIONS The reporting quality of diagnostic accuracy studies was consistently moderate, at least through the mid-2000s. Our results suggest a small improvement in the years after the introduction of STARD. Adherence to STARD should be further promoted among researchers, editors and peer reviewers.
Affiliation(s)
- Daniël A Korevaar
- Department of Clinical Epidemiology, Biostatistics and Bioinformatics (KEBB), Academic Medical Centre (AMC), University of Amsterdam (UvA), , Amsterdam, The Netherlands
|
16
|
Publication guidelines need widespread adoption. J Clin Epidemiol 2011; 65:239-46. [PMID: 22000815] [DOI: 10.1016/j.jclinepi.2011.07.008]
Abstract
OBJECTIVE During the past two decades, teams of researchers and editors have developed a variety of publishing guidelines to improve the quality of published research reports, and journals and editorial groups have adopted many of them. Whereas some guidelines are widely used, others have yet to be generally applied, preventing consistent reporting across published research reports. The aim of this study is to describe the development and adoption of general publication guidelines for various study designs, provide examples of guidelines adapted for specific topics, and recommend next steps. STUDY DESIGN AND SETTING We reviewed generic guidelines for reporting research results and surveyed their use in PubMed and Science Citation Index. RESULTS Existing guidelines cover a broad spectrum of research designs, but gaps remain in both topics and use. Appropriate next steps include increasing use of available guidelines and their adoption among journals, educating peer reviewers on their use, and incorporating guideline use into the curricula of medical, nursing, and public health schools. CONCLUSION Wider adoption of existing guidelines should result in research that is reported in an increasingly standardized, consistent manner.
|
17
|
Selman TJ, Morris RK, Zamora J, Khan KS. The quality of reporting of primary test accuracy studies in obstetrics and gynaecology: application of the STARD criteria. BMC Womens Health 2011; 11:8. [PMID: 21429185] [PMCID: PMC3072919] [DOI: 10.1186/1472-6874-11-8]
Abstract
Background In obstetrics and gynaecology there has been rapid growth in the development of new tests and in primary studies of their accuracy. It is imperative that such studies are reported with transparency, allowing the detection of any potential bias that may invalidate the results. The objective of this study was to determine the quality of reporting of diagnostic test accuracy studies in obstetrics and gynaecology using the Standards for Reporting of Diagnostic Accuracy (STARD) checklist. Methods The included studies of ten systematic reviews were assessed for compliance with each of the reporting criteria. Using appropriate statistical tests, we investigated whether reporting quality had improved since the introduction of the STARD checklist, and whether reporting quality correlated with study sample size or country of origin. Results A total of 300 studies were included (195 obstetric, 105 gynaecologic). The overall reporting quality of the included studies against the STARD criteria was poor. Obstetric studies reported adequately (>50% of the time) for 62.1% (18/29) of the items, while gynaecologic studies did so for 51.7% (15/29). Mean compliance with the STARD criteria was greater in the included obstetric studies than in the gynaecological studies (p < 0.0001). There was a positive correlation, in both obstetrics (p < 0.0001) and gynaecology (p = 0.0123), between study sample size and reporting quality. No correlation between geographical area of publication and compliance with the reporting criteria could be demonstrated. Conclusions The reporting quality of papers in obstetrics and gynaecology is improving. This may be due to initiatives such as the STARD checklist, as well as growing awareness among authors of the need to report studies accurately. There is, however, considerable scope for further improvement.
Affiliation(s)
- Tara J Selman
- School of Clinical and Experimental Medicine (Reproduction, Genes and Development), University of Birmingham, Birmingham Women's Hospital, Birmingham, B15 2TG, UK
|
18
|
Sperm chromatin structure assay and classical semen parameters: systematic review. Reprod Biomed Online 2010; 20:114-24. [DOI: 10.1016/j.rbmo.2009.10.024]
|
19
|
Improved reporting of statistical design and analysis: guidelines, education, and editorial policies. Methods Mol Biol 2010; 620:563-98. [PMID: 20652522] [DOI: 10.1007/978-1-60761-580-4_22]
Abstract
A majority of original articles published in biomedical journals include some form of statistical analysis. Unfortunately, many of these articles contain errors in statistical design and/or analysis. These errors are worrisome, as the misuse of statistics jeopardizes the process of scientific discovery and the accumulation of scientific knowledge. To help avoid these errors and improve statistical reporting, four approaches are suggested: (1) development of guidelines for statistical reporting that could be adopted by all journals, (2) improvement of statistics curricula in biomedical research programs, with an emphasis on hands-on teaching by biostatisticians, (3) expansion and enhancement of biomedical science curricula in statistics programs, and (4) increased participation of biostatisticians in the peer review process, along with the adoption of more rigorous journal editorial policies regarding statistics. In this chapter, we provide an overview of these issues with emphasis on the field of molecular biology and highlight the need for continuing efforts on all fronts.
|
20
|
Gómez Sáez N, Hernández-Aguado I, Lumbreras B. Estudio observacional: evaluación de la calidad metodológica de la investigación diagnóstica en España tras la publicación de la guía STARD [Observational study: evaluation of the methodological quality of diagnostic research in Spain after publication of the STARD guidelines]. Med Clin (Barc) 2009; 133:302-10. [DOI: 10.1016/j.medcli.2008.10.043]
|
21
|
Broeze KA, Opmeer BC, Bachmann LM, Broekmans FJ, Bossuyt PMM, Coppus SFPJ, Johnson NP, Khan KS, ter Riet G, van der Veen F, van Wely M, Mol BWJ. Individual patient data meta-analysis of diagnostic and prognostic studies in obstetrics, gynaecology and reproductive medicine. BMC Med Res Methodol 2009; 9:22. [PMID: 19327146] [PMCID: PMC2667527] [DOI: 10.1186/1471-2288-9-22]
Abstract
Background In clinical practice, a diagnosis is based on a combination of clinical history, physical examination and additional diagnostic tests. At present, diagnostic research often reports the accuracy of tests without taking into account the information already known from history and examination. Because of this missing information, together with variations in study design and quality, conventional meta-analyses based on these studies will not reflect the accuracy of the tests in real practice. Using individual patient data (IPD) for meta-analysis allows the accuracy of tests to be assessed in relation to other patient characteristics, and supports the development or evaluation of diagnostic algorithms for individual patients. In this study we will examine these potential benefits in four clinical diagnostic problems in the fields of gynaecology, obstetrics and reproductive medicine. Methods/design Based on earlier systematic reviews for each of the four clinical problems, studies are considered for inclusion. The first authors of the included studies will be invited to participate and share their original data. After assessment of validity and completeness, the acquired datasets are merged. Based on these data, a series of analyses will be performed, including a systematic comparison of the results of the IPD meta-analysis with those of a conventional meta-analysis, development of multivariable models for clinical history alone and for the combination of history, physical examination and relevant diagnostic tests, and development of clinical prediction rules for individual patients. These will be made accessible to clinicians. Discussion IPD meta-analysis will allow the accuracy of diagnostic tests to be evaluated in relation to other relevant information. Ultimately, this could increase the efficiency of the diagnostic work-up, e.g. by reducing the need for invasive tests and/or improving the accuracy of the diagnostic work-up. This study will assess whether these benefits of IPD meta-analysis over conventional meta-analysis can be realised, and will provide a framework for future IPD meta-analyses in diagnostic and prognostic research.
Affiliation(s)
- Kimiko A Broeze
- Centre for Reproductive Medicine, Department of Obstetrics and Gynaecology, Academic Medical Centre (AMC), Amsterdam, The Netherlands.
|
22
|
Wilczynski NL. Quality of reporting of diagnostic accuracy studies: no change since STARD statement publication--before-and-after study. Radiology 2008; 248:817-23. [PMID: 18710977] [DOI: 10.1148/radiol.2483072067]
Abstract
PURPOSE To determine the quality of reporting of diagnostic accuracy studies before and after publication of the Standards for Reporting of Diagnostic Accuracy (STARD) statement, and to determine whether reporting quality differs between STARD (endorsing) and non-STARD (nonendorsing) journals. MATERIALS AND METHODS Diagnostic accuracy studies were identified by hand searching six STARD and six non-STARD journals for 2001, 2002, 2004, and 2005. The studies (n = 240) were assessed by using a checklist of 13 of the 25 STARD items. The change in mean total score on the modified STARD checklist was evaluated with analysis of covariance, and the change in the proportion of times each individual STARD item was reported before and after STARD statement publication was evaluated with chi(2) tests for linear trend. RESULTS With mean total score as the dependent factor, analysis of covariance showed that the interaction between the two independent factors (STARD or non-STARD journal and year of publication) was not significant (F = 0.664, df = 3, partial eta(2) = 0.009, P = .58). Additionally, the frequency with which individual items on the STARD checklist were reported remained relatively constant before and after STARD statement publication, with little difference between STARD and non-STARD journals. CONCLUSION After publication of the STARD statement in 2003, the quality of reporting of diagnostic accuracy studies remained at pre-STARD levels, and there was no meaningful difference (defined as one additional item reported on the modified checklist of 13 of the 25 STARD items) in reporting quality between journals that published the STARD statement and those that did not.
Affiliation(s)
- Nancy L Wilczynski
- Health Information Research Unit, McMaster University, 1200 Main St West, HSC-3H7, Hamilton, ON, Canada L8N 3Z5.
|
23
|
Zafar A, Khan GI, Siddiqui MAR. The quality of reporting of diagnostic accuracy studies in diabetic retinopathy screening: a systematic review. Clin Exp Ophthalmol 2008; 36:537-42. [DOI: 10.1111/j.1442-9071.2008.01826.x]
|
24
|
Rifai N, Altman DG, Bossuyt PM. Reporting bias in diagnostic and prognostic studies: time for action. Clin Chem 2008; 54:1101-3. [PMID: 18593957] [DOI: 10.1373/clinchem.2008.108993]
Affiliation(s)
- Nader Rifai
- Departments of Laboratory Medicine and Pathology , Children’s Hospital and Harvard Medical School, Boston, MA
- Patrick M Bossuyt
- Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
|