1.
Reporting of Artificial Intelligence Diagnostic Accuracy Studies in Pathology Abstracts: Compliance with STARD for Abstracts Guidelines. J Pathol Inform 2022;13:100091. PMID: 36268103; PMCID: PMC9576989; DOI: 10.1016/j.jpi.2022.100091.
Abstract
Artificial intelligence (AI) research is transforming the range of tools and technologies available to pathologists, leading to potentially faster, more personalized and more accurate diagnoses for patients. However, for these tools to benefit patients safely, the implementation of any algorithm must be underpinned by high-quality evidence from research that is understandable, replicable, usable and inclusive of the details needed for critical appraisal of potential bias. Evidence suggests that reporting guidelines can improve the completeness of reporting of research, especially when awareness of the guidelines is good. The quality of evidence provided by abstracts alone is profoundly important, as abstracts influence a researcher's decision to read a paper, attend a conference presentation or include a study in a systematic review. AI abstracts at two international pathology conferences were assessed to establish completeness of reporting against the STARD for Abstracts criteria. This reporting guideline, for abstracts of diagnostic accuracy studies, includes a checklist of 11 essential items required for satisfactory reporting of such an investigation. A total of 3488 abstracts were screened from the United States & Canadian Academy of Pathology annual meeting 2019 and the 31st European Congress of Pathology (ESP Congress). Of these, 51 AI diagnostic accuracy abstracts were identified and assessed against the STARD for Abstracts criteria for completeness of reporting. Completeness of reporting was suboptimal across the 11 essential criteria: a mean of 5.8 (SD 1.5) items was detailed per abstract. Inclusion was variable across the different checklist items, with all abstracts including study objectives and none including a registration number or registry. Greater use and awareness of the STARD for Abstracts criteria could improve completeness of reporting, and further consideration is needed for areas where AI studies are vulnerable to bias.
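As a concrete illustration of the scoring approach described above, here is a minimal Python sketch of counting which of the 11 essential items an abstract reports. The shorthand item labels and the example abstracts are hypothetical, not taken from the study, and the official checklist wording differs:

```python
from statistics import mean

# Illustrative shorthand labels for the 11 essential STARD for Abstracts
# items; the official checklist wording differs.
STARD_ABSTRACT_ITEMS = [
    "objectives", "eligibility_criteria", "setting", "reference_standard",
    "index_test", "participants_count", "accuracy_estimates",
    "precision_estimates", "interpretation", "implications", "registration",
]

def completeness(reported: dict) -> int:
    """Count how many of the 11 items an abstract reports."""
    return sum(1 for item in STARD_ABSTRACT_ITEMS if reported.get(item, False))

# Hypothetical screening results for three abstracts:
abstracts = [
    {"objectives": True, "reference_standard": True, "participants_count": True},
    {"objectives": True, "accuracy_estimates": True},
    {item: True for item in STARD_ABSTRACT_ITEMS[:6]},
]
scores = [completeness(a) for a in abstracts]
print(scores, round(mean(scores), 1))  # → [3, 2, 6] 3.7
```

A per-abstract count like this is what yields summary statistics of the "mean 5.8 (SD 1.5) of 11 items" form reported in the abstract.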
2.
Iafolla MAJ, Picardo S, Aung K, Hansen AR. Systematic Review and STARD Scoring of Renal Cell Carcinoma Circulating Diagnostic Biomarker Manuscripts. JNCI Cancer Spectr 2020;4:pkaa050. PMID: 33134830; PMCID: PMC7583155; DOI: 10.1093/jncics/pkaa050.
Abstract
Background No validated molecular biomarkers exist to help guide diagnosis of renal cell carcinoma (RCC) patients. We sought to evaluate the quality of published RCC circulating diagnostic biomarker manuscripts using the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines. Methods The phrase “(renal cell carcinoma OR renal cancer OR kidney cancer OR kidney carcinoma) AND circulating AND (biomarkers OR cell free DNA OR tumor DNA OR methylated cell free DNA OR methylated tumor DNA)” was searched in Embase, MEDLINE, and PubMed in March 2018. Relevant manuscripts were scored using 41 STARD subcriteria for a maximal score of 26 points. All tests of statistical significance were 2 sided. Results The search identified 535 publications: 27 manuscripts of primary research were analyzed. The median STARD score was 11.5 (range = 7-16.75). All manuscripts had appropriate abstracts, introductions, and distribution of alternative diagnoses. None of the manuscripts stated how indeterminate data were handled or whether adverse events occurred from performing the index test or reference standard. Statistically significantly higher STARD scores were present in manuscripts reporting receiver operator characteristic curves (P < .001), larger sample sizes (P = .007), and after release of the original STARD statement (P = .005). Conclusions Most RCC circulating diagnostic biomarker manuscripts poorly adhere to the STARD guidelines. Future studies adhering to STARD guidelines may address this unmet need.
Affiliation(s)
- Marco A J Iafolla: Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; University of Toronto, Toronto, Ontario, Canada; Division of Oncology, William Osler Health System, Brampton, Ontario, Canada
- Sarah Picardo: Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; University of Toronto, Toronto, Ontario, Canada
- Kyaw Aung: Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; University of Toronto, Toronto, Ontario, Canada; Livestrong Cancer Institute and Dell Medical School, University of Texas at Austin, Austin, TX, USA
- Aaron R Hansen: Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; University of Toronto, Toronto, Ontario, Canada
3.
Friedman AB, Berning AW, Marill KA. Confidence at 100%: Characteristics of Likelihood Ratio Confidence Intervals in the Emergency Medicine Diagnostics Literature. Acad Emerg Med 2020;27:897-904. PMID: 32011039; DOI: 10.1111/acem.13930.
Abstract
OBJECTIVE We hypothesized that "perfect" 100% sample sensitivity or specificity (PSSS) is common in the emergency medicine (EM) literature. When results yield PSSS, calculating the likelihood ratio (LR) 95% confidence interval (CI) has been challenging. Consequently, we also hypothesized that studies with PSSS would be less likely to report the LR and associated CI, and that those that did would use imperfect methods. METHODS We searched PubMed or Scopus for all articles reporting diagnostic test results in the 20 top EM journals from 2011 to 2016 and randomly sampled 124 articles. Trained researchers coded the articles as having PSSS or not ("controls"). We separately sampled 100 articles with PSSS and compared them to 100 controls in terms of their reporting of diagnostic tests and associated CIs. RESULTS Of the 124 articles, 19.4% (95% CI = 13% to 27.6%) feature a diagnostic test with PSSS. The LR is reported significantly less often in PSSS studies versus control studies: 18 of 100 articles (18% [95% CI = 11.3% to 27.2%]) versus 34 of 100 articles (34% [95% CI = 25% to 44.2%]), with an odds ratio (OR) of 0.43 (95% CI = 0.21 to 0.86). The LR 95% CI is also reported less often in PSSS versus control studies: five of 100 articles (5% [95% CI = 1.9% to 11.8%]) versus 27 of 100 articles (27% [95% CI = 18.8% to 37%]), with an OR of 0.11 (95% CI = 0.02 to 0.44). Five articles with perfect sample sensitivity reported their negative LR CI. The bootstrap method resulted in CIs that were 42.7% smaller on average (range = 16.6% to 63.6%). CONCLUSION This analysis provides systematic evidence of diagnostic test reporting in the EM literature. Sample sensitivity or specificity of 100% is common. LRs and their associated 95% CIs are infrequently reported, particularly for PSSS samples. When the LR CI is reported in this scenario, it is overly wide. Improved reporting and methods can enhance the utility of, and confidence in, diagnostic tests in EM.
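The difficulty this abstract describes is that a "perfect" cell count of zero false negatives makes the usual log-scale Wald interval for the likelihood ratio undefined. One common workaround, sketched below, is a 0.5 continuity correction before computing the interval; this is an illustration only, not necessarily the method used by the surveyed articles, and the 2x2 counts are hypothetical:

```python
import math

def lr_neg_ci(tp, fn, tn, fp, z=1.96):
    """Negative likelihood ratio (1 - sensitivity) / specificity with a
    log-scale Wald confidence interval. A 0.5 continuity correction is
    applied when any cell is zero -- one common workaround for 'perfect'
    sample sensitivity, not the only one."""
    if 0 in (tp, fn, tn, fp):
        tp, fn, tn, fp = (x + 0.5 for x in (tp, fn, tn, fp))
    n_dis, n_hea = tp + fn, tn + fp
    p1 = fn / n_dis                      # 1 - sensitivity
    p2 = tn / n_hea                      # specificity
    lr = p1 / p2
    # Delta-method variance of ln(LR) for a ratio of two binomial proportions
    se = math.sqrt((1 - p1) / (n_dis * p1) + (1 - p2) / (n_hea * p2))
    return lr, lr * math.exp(-z * se), lr * math.exp(z * se)

# Hypothetical 2x2 table: 30 diseased patients all test positive
# (100% sample sensitivity), 40 of 50 healthy patients test negative.
lr, lo, hi = lr_neg_ci(30, 0, 40, 10)
```

Without the correction, the point estimate is exactly 0 and the log-scale interval is undefined; with it, the interval is finite but wide, reflecting the genuine uncertainty behind a "perfect" sample estimate.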
Affiliation(s)
- Ari B Friedman: Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA
- Aric W Berning: University of Pittsburgh School of Medicine, Pittsburgh, PA
- Keith A Marill: Department of Emergency Medicine, Harvard Medical School, Boston, MA
4.
Thiessen M, Vogel JA, Byyny RL, Hopkins E, Haukoos JS, Kendall JL, Trent SA. Emergency Ultrasound Literature and Adherence to Standards for Reporting of Diagnostic Accuracy Criteria. J Emerg Med 2019;58:636-646. PMID: 31708317; DOI: 10.1016/j.jemermed.2019.09.029.
Abstract
BACKGROUND Given the wide usage of emergency point-of-care ultrasound (EUS) among emergency physicians (EPs), rigorous study of its accuracy is essential. The Standards for Reporting of Diagnostic Accuracy (STARD) criteria were established to ensure robust reporting methodology for diagnostic studies. Adherence to the STARD criteria among EUS diagnostic studies has yet to be reported. OBJECTIVES Our objective was to evaluate a body of EUS literature shortly after STARD publication for its baseline adherence to the STARD criteria. METHODS EUS studies in 5 emergency medicine journals from 2005-2010 were evaluated for their adherence to the STARD criteria. Manuscripts were selected for inclusion if they reported original research and described the use of 1 of 10 diagnostic ultrasound modalities designated as "core emergency ultrasound applications" in the 2008 American College of Emergency Physicians Ultrasound Guidelines. The literature search identified 307 studies; of these, 45 met inclusion criteria for review. RESULTS The median STARD score was 15 (interquartile range [IQR] 12-17), representing 60% of the 25 total STARD criteria. The median STARD score among articles that reported diagnostic accuracy was significantly higher than among those that did not (17 [IQR 15-19] vs. 11 [IQR 9-13], respectively; p < 0.0001). Seventy-one percent of articles met ≥50% of the STARD criteria (56-84%) and 4% met >80% of the STARD criteria. CONCLUSIONS Significant opportunities exist to improve methodological reporting of EUS research. Increased adherence to the STARD criteria among diagnostic EUS studies will improve reporting and our ability to compare outcomes.
Affiliation(s)
- Molly Thiessen: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Jody A Vogel: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Richard L Byyny: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Emily Hopkins: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado
- Jason S Haukoos: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado; Department of Epidemiology, Colorado School of Public Health, Aurora, Colorado
- John L Kendall: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
- Stacy A Trent: Department of Emergency Medicine, Denver Health Medical Center, Denver, Colorado; Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, Colorado
5.
Blanco D, Altman D, Moher D, Boutron I, Kirkham JJ, Cobo E. Scoping review on interventions to improve adherence to reporting guidelines in health research. BMJ Open 2019;9:e026589. PMID: 31076472; PMCID: PMC6527996; DOI: 10.1136/bmjopen-2018-026589.
Abstract
OBJECTIVES The goal of this study is to identify, analyse and classify interventions to improve adherence to reporting guidelines, in order to obtain a wide picture of how the problem of enhancing the completeness of reporting of biomedical literature has been tackled so far. DESIGN Scoping review. SEARCH STRATEGY We searched the MEDLINE, EMBASE and Cochrane Library databases and conducted a grey literature search for (1) studies evaluating interventions to improve adherence to reporting guidelines in health research and (2) other types of references describing interventions that have been performed or suggested but never evaluated. The characteristics and effect of the evaluated interventions were analysed. Moreover, we explored the rationale of the interventions identified and determined the existing gaps in research on the evaluation of interventions to improve adherence to reporting guidelines. RESULTS 109 references containing 31 interventions (11 evaluated) were included. These were grouped into five categories: (1) training on the use of reporting guidelines, (2) improving understanding, (3) encouraging adherence, (4) checking adherence and providing feedback, and (5) involvement of experts. Additionally, we identified a lack of evaluated interventions (1) on training on the use of reporting guidelines and improving their understanding, (2) at early stages of research and (3) after the final acceptance of the manuscript. CONCLUSIONS This scoping review identified a wide range of strategies to improve adherence to reporting guidelines that can be taken by different stakeholders. Additional research is needed to assess the effectiveness of many of these interventions.
Affiliation(s)
- David Blanco: Statistics and Operations Research, Universitat Politècnica de Catalunya, Barcelona, Spain
- Doug Altman: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- David Moher: Centre for Journalology, Ottawa Hospital Research Institute, Ottawa, Canada
- Isabelle Boutron: Centre d'épidémiologie Clinique, Université Paris Descartes, Paris, France
- Jamie J Kirkham: Biostatistics, University of Liverpool, Liverpool, Merseyside, UK
- Erik Cobo: Statistics and Operations Research, Universitat Politècnica de Catalunya, Barcelona, Spain
6.
Zarei F, Zeinali-Rafsanjani B. Assessment of Adherence of Diagnostic Accuracy Studies Published in Radiology Journals to STARD Statement Indexed in Web of Science, PubMed & Scopus in 2015. J Biomed Phys Eng 2018;8:311-324. PMID: 30320035; PMCID: PMC6169121.
Abstract
RATIONALE AND OBJECTIVE The objective of this study is to evaluate the methodological adherence of diagnostic accuracy studies published in radiology journals, indexed in different databases, to the 2015 STARD standard guide. MATERIALS AND METHODS Different databases were searched to identify suitable journals. Of 84 English-language radiology journals, 31 were selected randomly. The same search fields and search terms were used to find the articles. All items of the 2015 STARD checklist were considered when assessing the adherence of the articles to the standard. A total STARD score for each article was calculated by summing the number of reported items. RESULTS 151 articles from 31 journals were evaluated to check the adherence of their structure to the STARD standard. The articles adhered most closely to the STARD standard for the participants item of the materials and methods section, the discussion section, and the title or abstract. Conversely, most of the articles did not adhere to the "other information" items, which are new in STARD 2015. Among radiology diagnostic accuracy articles, only one article (0.66%) had a registration number and 10 (6.62%) had a link to the full study protocol. More than 60% of articles adhered to the ethics (69.54%) and source of support (63.58%) items. CONCLUSIONS The radiology diagnostic accuracy studies adhered to 69.45% of STARD items, which shows an improvement in the reporting of diagnostic accuracy articles compared with previous studies.
Affiliation(s)
- F. Zarei: Department of Medical Journalism, School of Para-Medicine, Shiraz University of Medical Sciences, Shiraz, Iran; Medical Imaging Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- B. Zeinali-Rafsanjani: Medical Imaging Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
7.
Baron BJ, Benabbas R, Kohler C, Biggs C, Roudnitsky V, Paladino L, Sinert R. Accuracy of Computed Tomography in Diagnosis of Intra-abdominal Injuries in Stable Patients With Anterior Abdominal Stab Wounds: A Systematic Review and Meta-analysis. Acad Emerg Med 2018;25:744-757. PMID: 29369452; DOI: 10.1111/acem.13380.
Abstract
BACKGROUND Workup for patients presenting to the emergency department (ED) following an anterior abdominal stab wound (AASW) has been debated since the 1960s. Experts agree that patients with peritonitis, evisceration, or hemodynamic instability should undergo immediate laparotomy (LAP); however, workup of stable, asymptomatic or nonperitoneal patients is not clearly defined. OBJECTIVES The objective was to evaluate the accuracy of computed tomography of abdomen and pelvis (CTAP) for diagnosis of intraabdominal injuries requiring therapeutic laparotomy (THER-LAP) in ED patients with AASW. Is a negative CT scan without a period of observation sufficient to safely discharge a hemodynamically stable, asymptomatic AASW patient? METHODS We searched PubMed, Embase, and Scopus from their inception until May 2017 for studies on ED patients with AASW. We defined the reference standard test as LAP for patients who were managed surgically and inpatient observation in those who were managed nonoperatively. In those who underwent LAP, THER-LAP was considered as disease positive. We used the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS-2) to evaluate the risk of bias and assess the applicability of the included studies. We attempted to compute the pooled sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) using a random-effects model with MetaDiSc software and calculate testing and treatment thresholds for CT scan applying the Pauker and Kassirer model. RESULTS Seven studies were included encompassing 575 patients. The weighted prevalence of THER-LAP was 34.3% (95% confidence interval [CI] = 30.5%-38.2%). Studies had variable quality and the inclusion criteria were not uniform. The operating characteristics of CT scan were as follows: sensitivity = 50% to 100%, specificity = 39% to 97%, LR+ = 1.0 to 15.7, and LR- = 0.07 to 1.0. 
The high heterogeneity (I2 > 75%) of the operating characteristics of CT scan prevented pooling of the data and therefore the testing and treatment thresholds could not be estimated. DISCUSSION The articles revealed a high prevalence (8.7%, 95% CI = 6.1%-12.2%) of injuries requiring THER-LAP in patients with a negative CT scan and almost half (47%, 95% CI = 30%-64%) of those injuries involved the small bowel. CONCLUSIONS In stable AASW patients, a negative CT scan alone without an observation period is inadequate to exclude significant intraabdominal injuries.
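The clinical weight of these operating characteristics can be seen by converting the review's weighted prevalence (34.3%) into a post-test probability via the odds form of Bayes' theorem. The function below is an illustrative sketch, using the extremes of the sensitivity and specificity ranges reported in the abstract:

```python
def post_test_prob(pretest, sens, spec, positive=True):
    """Post-test probability from pre-test probability via the odds form of
    Bayes' theorem, using LR+ = sens/(1-spec) or LR- = (1-sens)/spec."""
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Weighted prevalence of therapeutic laparotomy from the review (34.3%)
# combined with the extremes of the reported CT operating characteristics:
best = post_test_prob(0.343, 1.00, 0.97, positive=False)   # LR- = 0
worst = post_test_prob(0.343, 0.50, 0.39, positive=False)  # LR- ~ 1.28
```

At the weakest reported operating characteristics, LR- exceeds 1, so a negative CT leaves the post-test probability at roughly 40%, slightly above the pre-test 34.3%; this is consistent with the review's conclusion that a negative scan alone cannot exclude significant injury.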
Affiliation(s)
- Bonny J. Baron: Department of Emergency Medicine, State University of New York Downstate Medical Center, Brooklyn, NY; Department of Emergency Medicine, Kings County Hospital Center, Brooklyn, NY
- Roshanak Benabbas: Department of Emergency Medicine, State University of New York Downstate Medical Center, Brooklyn, NY; Department of Emergency Medicine, Kings County Hospital Center, Brooklyn, NY
- Casey Kohler: Division of Surgical Critical Care, Department of Surgery, State University of New York Downstate Medical Center, Brooklyn, NY; Department of Surgery, Kings County Hospital Center, Brooklyn, NY
- Carina Biggs: Division of Surgical Critical Care, Department of Surgery, State University of New York Downstate Medical Center, Brooklyn, NY; Department of Surgery, Kings County Hospital Center, Brooklyn, NY
- Valery Roudnitsky: Division of Surgical Critical Care, Department of Surgery, State University of New York Downstate Medical Center, Brooklyn, NY; Department of Surgery, Kings County Hospital Center, Brooklyn, NY
- Lorenzo Paladino: Department of Emergency Medicine, State University of New York Downstate Medical Center, Brooklyn, NY; Department of Emergency Medicine, Kings County Hospital Center, Brooklyn, NY
- Richard Sinert: Department of Emergency Medicine, State University of New York Downstate Medical Center, Brooklyn, NY; Department of Emergency Medicine, Kings County Hospital Center, Brooklyn, NY
8.
Grob ATM, van der Vaart LR, Withagen MIJ, van der Vaart CH. Quality of reporting of diagnostic accuracy studies on pelvic floor three-dimensional transperineal ultrasound: a systematic review. Ultrasound Obstet Gynecol 2017;50:451-457. PMID: 28000958; DOI: 10.1002/uog.17390.
Abstract
OBJECTIVE In recent years, a large number of studies have been published on the clinical relevance of pelvic floor three-dimensional (3D) transperineal ultrasound. Several studies compare sonography with other imaging modalities or clinical examination. The quality of reporting in these studies is not known. The objective of this systematic review was to determine the compliance of diagnostic accuracy studies investigating pelvic floor 3D ultrasound with the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines. METHODS Published articles on pelvic floor 3D ultrasound were identified by a systematic literature search of MEDLINE, Web of Science and Scopus databases. Prospective and retrospective studies that compared pelvic floor 3D ultrasound with other clinical and imaging diagnostics were included in the analysis. STARD compliance was assessed and quantified by two independent investigators, using 22 of the original 25 STARD checklist items. Items with the qualifier 'if done' (Items 13, 23 and 24) were excluded because they were not applicable to all papers. Each item was scored as reported (score = 1) or not reported (score = 0). Observer variability, the total number of reported STARD items per article and summary scores for each item were calculated. The difference in total score between STARD-adopting and non-adopting journals was tested statistically, as was the effect of year of publication. RESULTS Forty studies published in 13 scientific journals were included in the analysis. Mean ± SD STARD checklist score of the included articles was 16.0 ± 2.5 out of a maximum of 22 points. The lowest scores (< 50%) were found for reporting of handling of indeterminate results or missing responses, adverse events and the time interval between tests. Interobserver agreement for rating the STARD items was excellent (intraclass correlation coefficient, 0.77). 
An independent t-test showed no significant difference in mean ± SD total STARD checklist score between STARD-adopting and non-adopting journals (16.4 ± 2.2 vs 15.9 ± 2.6, respectively). The mean ± SD STARD checklist score for articles published in 2003-2009 was lower, but not statistically different, compared with those published in 2010-2015 (15.2 ± 2.5 vs 16.6 ± 2.4, respectively). CONCLUSION The overall compliance with reporting guidelines of diagnostic accuracy studies on pelvic floor 3D transperineal ultrasound is relatively good compared with other fields of medicine. However, specific checklist items require more attention when reported.
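The group comparison above can be sketched as a two-sample t-test on per-article checklist totals. The scores below are hypothetical, and the Welch (unequal-variance) variant is assumed since the abstract says only "independent t-test":

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (unequal-variance variant of the independent t-test)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical per-article STARD totals (max 22) for two journal groups:
adopting = [18, 17, 15, 19, 16, 14]
non_adopting = [16, 15, 14, 17, 15, 13]
t, df = welch_t(adopting, non_adopting)
```

With samples this small, a t statistic of about 1.6 on roughly 9 degrees of freedom does not reach conventional significance, mirroring the null result reported in the abstract.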
Affiliation(s)
- A T M Grob: Department of Reproductive Medicine and Gynecology, University Medical Center, Utrecht, The Netherlands; MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, The Netherlands
- M I J Withagen: Department of Reproductive Medicine and Gynecology, University Medical Center, Utrecht, The Netherlands
- C H van der Vaart: Department of Reproductive Medicine and Gynecology, University Medical Center, Utrecht, The Netherlands
9.
Gallo L, Hua N, Mercuri M, Silveira A, Worster A. Adherence to Standards for Reporting Diagnostic Accuracy in Emergency Medicine Research. Acad Emerg Med 2017. PMID: 28621810; DOI: 10.1111/acem.13233.
Abstract
BACKGROUND Diagnostic tests are used frequently in the emergency department (ED) to guide clinical decision making and, hence, influence clinical outcomes. The Standards for Reporting of Diagnostic Accuracy (STARD) criteria were developed to ensure that diagnostic test studies are performed and reported to best inform clinical decision making in the ED. OBJECTIVE The objective was to determine the extent to which diagnostic studies published in emergency medicine journals adhered to the STARD 2003 criteria. METHODS Diagnostic studies published in eight MEDLINE-listed, peer-reviewed emergency medicine journals over a 5-year period were reviewed for compliance with the STARD criteria. RESULTS A total of 12,649 articles were screened and 114 studies were included in our study. Twenty percent of these were randomly selected for assessment using the STARD 2003 criteria. Adherence to the STARD 2003 reporting standards for each criterion ranged from 8.7% (the criterion on reporting adverse events from performing the index test or reference standard) to 100% (multiple criteria). CONCLUSION Just over half of the STARD criteria are reported in more than 80% of studies. As poorly reported studies may negatively impact their clinical usefulness, it is essential that studies of diagnostic test accuracy be performed and reported adequately. Future studies should assess whether compliance has improved with the STARD 2015 criteria amendment.
Affiliation(s)
- Lucas Gallo: Faculty of Medicine, McMaster University, Hamilton, Ontario
- Nadia Hua: Faculty of Medicine, University of Ottawa, Ottawa, Ontario
- Mathew Mercuri: Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Ontario
- Angela Silveira: Department of Public Health, Johns Hopkins University, Baltimore, MD
- Andrew Worster: Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Ontario
10.
Sekula P, Mallett S, Altman DG, Sauerbrei W. Did the reporting of prognostic studies of tumour markers improve since the introduction of REMARK guideline? A comparison of reporting in published articles. PLoS One 2017;12:e0178531. PMID: 28614415; PMCID: PMC5470677; DOI: 10.1371/journal.pone.0178531.
Abstract
Although biomarkers are perceived as highly relevant for future clinical practice, few biomarkers reach clinical utility, for several reasons. Among them, poor reporting of studies is one of the major problems. To aid improvement, reporting guidelines like REMARK for tumour marker prognostic (TMP) studies were introduced several years ago. The aims of this project were to assess whether the reporting quality of TMP studies improved in comparison with a previously conducted study assessing the reporting quality of TMP studies (PRE-study), and to assess whether articles citing REMARK (citing group) are better reported than articles not citing REMARK (not-citing group). For the POST-study, recent articles citing and not citing REMARK (53 each) were identified in selected journals through a systematic literature search and evaluated in the same way as in the PRE-study. Ten of the 20 items of the REMARK checklist were evaluated and used to define an overall score of reporting quality. The observed overall scores were 53.4% (range: 10%-90%) for the PRE-study, 57.7% (range: 20%-100%) for the not-citing group and 58.1% (range: 30%-100%) for the citing group of the POST-study. While there is no difference between the two groups of the POST-study, the POST-study shows a slight but not relevant improvement in reporting relative to the PRE-study. Not all articles in the citing group cited REMARK appropriately. Irrespective of whether REMARK was cited, the overall score was slightly higher for articles published in journals requesting adherence to REMARK than for those published in journals not requesting it: 59.9% versus 51.9%, respectively. Several years after the introduction of REMARK, many key items of TMP studies are still very poorly reported. A combined effort is needed from authors, editors, reviewers and methodologists to improve the current situation. Good reporting is not just nice to have but is essential for any research to be useful.
Affiliation(s)
- Peggy Sekula: Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center-University of Freiburg, Freiburg, Germany
- Susan Mallett: Institute of Applied Health Research, University of Birmingham, Edgbaston, Birmingham, United Kingdom
- Douglas G Altman: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, United Kingdom
- Willi Sauerbrei: Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center-University of Freiburg, Freiburg, Germany
11.
Dilauro M, McInnes MDF, Korevaar DA, van der Pol CB, Petrcich W, Walther S, Quon J, Kurowecki D, Bossuyt PMM. Is There an Association between STARD Statement Adherence and Citation Rate? Radiology 2016;280:62-7. DOI: 10.1148/radiol.2016151384.
12.
Stevens A, Shamseer L, Weinstein E, Yazdi F, Turner L, Thielman J, Altman DG, Hirst A, Hoey J, Palepu A, Schulz KF, Moher D. Relation of completeness of reporting of health research to journals' endorsement of reporting guidelines: systematic review. BMJ 2014;348:g3804. PMID: 24965222; PMCID: PMC4070413; DOI: 10.1136/bmj.g3804.
Abstract
OBJECTIVE To assess whether the completeness of reporting of health research is related to journals' endorsement of reporting guidelines. DESIGN Systematic review. DATA SOURCES Reporting guidelines from a published systematic review and the EQUATOR Network (October 2011). Studies assessing the completeness of reporting by using an included reporting guideline (termed "evaluations") (1990 to October 2011; addendum searches in January 2012) from searches of either Medline, Embase, and the Cochrane Methodology Register or Scopus, depending on the reporting guideline name. STUDY SELECTION English language reporting guidelines that provided explicit guidance for reporting, described the guidance development process, and indicated use of a consensus development process were included. The CONSORT statement was excluded, as evaluations of adherence to CONSORT had previously been reviewed. English or French language evaluations of included reporting guidelines were eligible if they assessed the completeness of reporting of studies as a primary intent and their included studies enabled the comparisons of interest (that is, after versus before journal endorsement and/or endorsing versus non-endorsing journals). DATA EXTRACTION Potentially eligible evaluations of included guidelines were screened initially by title and abstract and then as full text reports. If eligibility was unclear, authors of evaluations were contacted; journals' websites were consulted for endorsement information where needed. Completeness of reporting was analyzed in relation to endorsement by item and, where consistent with the authors' analysis, as a mean summed score. RESULTS 101 reporting guidelines were included. Of 15,249 records retrieved from the search for evaluations, 26 evaluations that assessed completeness of reporting in relation to endorsement were identified for nine reporting guidelines.
Of those, 13 evaluations assessing seven reporting guidelines (BMJ economic checklist, CONSORT for harms, PRISMA, QUOROM, STARD, STRICTA, and STROBE) could be analyzed. Reporting guideline items were assessed by few evaluations. CONCLUSIONS The completeness of reporting of only nine of 101 health research reporting guidelines (excluding CONSORT) has been evaluated in relation to journals' endorsement. Items from seven reporting guidelines were quantitatively analyzed, by few evaluations each. Insufficient evidence exists to determine the relation between journals' endorsement of reporting guidelines and the completeness of reporting of published health research reports. Journal editors and researchers should consider collaborative prospectively designed, controlled studies to provide more robust evidence. SYSTEMATIC REVIEW REGISTRATION Not registered; no known register currently accepts protocols for methodology systematic reviews.
Affiliation(s)
- Adrienne Stevens
- Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada, K1H 8L6
- Larissa Shamseer
- Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada, K1H 8L6; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, ON, Canada, K1H 8M5
- Erica Weinstein
- Albert Einstein College of Medicine, Yeshiva University, Bronx, NY 10461, USA
- Fatemeh Yazdi
- Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada, K1H 8L6
- Lucy Turner
- Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada, K1H 8L6
- Justin Thielman
- Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada, K1H 8L6
- Douglas G Altman
- Centre for Statistics in Medicine, University of Oxford, Oxford OX3 7LD, UK
- Allison Hirst
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford OX3 9DU, UK
- John Hoey
- Population and Public Health Initiative, Queen's University, Kingston, ON, Canada, K7L 3N6
- Anita Palepu
- Centre for Health Evaluation and Outcome Sciences, St Paul's Hospital, Vancouver, BC, Canada, V6Z 1Y9; Department of Medicine, University of British Columbia, Vancouver, BC, Canada, V5Z 1M9
- Kenneth F Schulz
- International Clinical Sciences Support Center, FHI 360, Durham, NC 27713, USA
- David Moher
- Centre for Practice-Changing Research, Ottawa Hospital Research Institute, Ottawa, ON, Canada, K1H 8L6; Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, ON, Canada, K1H 8M5
13
Scientific reporting is suboptimal for aspects that characterize genetic risk prediction studies: a review of published articles based on the Genetic RIsk Prediction Studies statement. J Clin Epidemiol 2014; 67:487-99. [DOI: 10.1016/j.jclinepi.2013.10.006] [Citation(s) in RCA: 7]
14
Roysri K, Chotipanich C, Laopaiboon V, Khiewyoo J. Quality Assessment of Research Articles in Nuclear Medicine Using STARD and QUADAS-2 Tools. Asia Oceania Journal of Nuclear Medicine & Biology 2014; 2:120-6. [PMID: 27408868 PMCID: PMC4937696]
Abstract
OBJECTIVES Diagnostic nuclear medicine is increasingly employed in clinical practice with the advent of new technologies and radiopharmaceuticals. Reporting the prevalence of the studied disease is also important when assessing the quality of an article. This study was therefore performed to evaluate the quality of published nuclear medicine articles and to determine how often the prevalence of the studied diseases was reported. METHODS We used the Standards for Reporting of Diagnostic Accuracy (STARD) and Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) checklists to evaluate the quality of articles published in the five nuclear medicine journals with the highest impact factors in 2012. The articles were retrieved from the Scopus database and were selected and assessed independently by two nuclear medicine physicians; equivocal data were resolved by consensus between the reviewers. RESULTS The average STARD score was approximately 17 points, and the highest score, 17.19±2.38, was obtained by the European Journal of Nuclear Medicine. The QUADAS-2 tool showed that all journals had low bias regarding study population. The Journal of Nuclear Medicine had the highest score in terms of index test, reference standard, and time interval. Lack of clarity regarding the index test, reference standard, and time interval was frequently observed in all journals, including Clinical Nuclear Medicine, in which 64% of the studies were unclear regarding the index test. The Journal of Nuclear Cardiology had the highest proportion of articles with an appropriate reference standard (83.3%), though it had the lowest frequency of reporting disease prevalence (zero reports). All five journals had comparable STARD scores, while the index test, reference standard, and time interval were frequently unclear according to the QUADAS-2 tool. Unfortunately, data were too limited to determine which journal had the lowest risk of bias.
In fact, it is the author's responsibility to provide details of research methodology so that the reader can assess the quality of research articles. CONCLUSION Five nuclear medicine journals with the highest impact factor were comparable in terms of STARD score, although they all showed lack of clarity regarding index test, reference standard, and time interval, according to QUADAS-2. The current data were too limited to determine the journal with the lowest bias. Thus, a comprehensive overview of the research methodology of each article is of paramount importance to enable the reader to assess the quality of articles.
Affiliation(s)
- Krisana Roysri
- Department of Biostatistics, Faculty of Public Health, Khon Kaen University, Khon Kaen, Thailand. Corresponding author: Krisana Roysri, Surin Hospital, Surin, Thailand. Tel: +66911306582
- Vallop Laopaiboon
- Department of Radiology, Faculty of Medicine, Khon Kaen University, Khon Kaen, Thailand
- Jiraporn Khiewyoo
- Department of Biostatistics, Faculty of Public Health, Khon Kaen University, Khon Kaen, Thailand
15
Korevaar DA, van Enst WA, Spijker R, Bossuyt PMM, Hooft L. Reporting quality of diagnostic accuracy studies: a systematic review and meta-analysis of investigations on adherence to STARD. Evid Based Med 2013; 19:47-54. [PMID: 24368333 DOI: 10.1136/eb-2013-101637] [Citation(s) in RCA: 81]
Abstract
BACKGROUND Poor reporting of diagnostic accuracy studies impedes an objective appraisal of the clinical performance of diagnostic tests. The Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement, first published in 2003, aims to improve the reporting quality of such studies. OBJECTIVE To investigate to what extent published diagnostic accuracy studies adhere to the 25-item STARD checklist, whether reporting quality has improved since STARD's launch, and whether any factors are associated with adherence. STUDY SELECTION We performed a systematic review and searched MEDLINE, EMBASE and the Methodology Register of the Cochrane Library for studies that primarily aimed to examine the reporting quality of articles on diagnostic accuracy studies in humans by evaluating adherence to STARD. Study selection was performed in duplicate; data were extracted by one author and verified by the second author. FINDINGS We included 16 studies, analysing 1496 articles in total. Three studies investigated adherence in a general sample of diagnostic accuracy studies; the others did so in a specific field of research. The overall mean number of items reported varied from 9.1 to 14.3 across the 13 evaluations that assessed all 25 STARD items. Six studies quantitatively compared post-STARD with pre-STARD articles. Combining these results in a random-effects meta-analysis revealed a modest but significant increase in adherence after STARD's introduction (mean difference 1.41 items (95% CI 0.65 to 2.18)). CONCLUSIONS The reporting quality of diagnostic accuracy studies was consistently moderate, at least until the mid-2000s. Our results suggest a small improvement in the years after the introduction of STARD. Adherence to STARD should be further promoted among researchers, editors and peer reviewers.
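The pooled result above comes from a random-effects meta-analysis of six pre/post-STARD comparisons. As an illustration only, a minimal DerSimonian-Laird pooling of mean differences can be sketched as follows; the study-level effects and variances below are hypothetical, not the data of the review:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects with DerSimonian-Laird random-effects weights."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance
    w_star = [1 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical mean differences (STARD items gained post-STARD) and variances:
pooled, ci = dersimonian_laird([1.2, 0.8, 2.0, 1.5], [0.09, 0.16, 0.25, 0.12])
```

The pooled estimate always lies within the range of the study effects, and the 95% CI widens as the between-study variance tau² grows.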
Affiliation(s)
- Daniël A Korevaar
- Department of Clinical Epidemiology, Biostatistics and Bioinformatics (KEBB), Academic Medical Centre (AMC), University of Amsterdam (UvA), Amsterdam, The Netherlands
16
Ochodo EA, Bossuyt PM. Reporting the Accuracy of Diagnostic Tests: The STARD Initiative 10 Years On. Clin Chem 2013; 59:917-9. [DOI: 10.1373/clinchem.2013.206516] [Citation(s) in RCA: 23]
Affiliation(s)
- Eleanor A Ochodo
- Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, University of Amsterdam, Amsterdam, the Netherlands
- Patrick M Bossuyt
- Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, University of Amsterdam, Amsterdam, the Netherlands
17
Ochodo EA, de Haan MC, Reitsma JB, Hooft L, Bossuyt PM, Leeflang MMG. Overinterpretation and Misreporting of Diagnostic Accuracy Studies: Evidence of "Spin". Radiology 2013; 267:581-8. [DOI: 10.1148/radiol.12120527] [Citation(s) in RCA: 116]
18
Kerr A, Pomeroy VP, Rowe PJ, Dall P, Rafferty D. Measuring movement fluency during the sit-to-walk task. Gait Posture 2013; 37:598-602. [PMID: 23122898 DOI: 10.1016/j.gaitpost.2012.09.026] [Citation(s) in RCA: 17]
Abstract
BACKGROUND Restoring movement fluency is a key focus of physical rehabilitation; its measurement, however, lacks objectivity. The purpose of this study was to determine whether measurable movement fluency variables differed between groups of adults with different movement abilities while performing the sit-to-walk (STW) movement. The movement fluency variables were: (1) hesitation during movement (reduction in forward velocity of the centre of mass; CoM), (2) coordination (percentage of temporal overlap of joint rotations) and (3) smoothness (number of inflections in the CoM jerk signal). METHODS Kinematic data previously collected for another study were extracted for three groups: older adults (n=18), older adults at risk of falling (OARF, n=18), and younger adults (n=20). Each subject performed the STW movement freely while a motion analysis system tracked 11 body segments. The fluency variables were derived from the processed kinematic data and tested for group variation using analysis of variance. FINDINGS All three variables showed statistically significant differences among the groups. Hesitation (F=15.11, p<0.001) was greatest in the OARF group, 47.5% (SD 18.0), compared to older adults, 30.3% (SD 15.9), and younger adults, 20.8% (SD 11.4). Coordination (F=44.88, p<0.001) was lowest for the OARF group (6.93%, SD 10.99) compared to both the young (31.21%, SD 5.48) and old (26.24%, SD 5.84). Smoothness (F=35.96, p<0.001) was best in the younger adults, 18.3 (SD 5.2) inflections, compared to the old, 42.5 (SD 11.5), and OARF, 44.25 (SD 7.29). INTERPRETATION Hesitation, coordination and smoothness may be valid indicators of movement fluency in adults, with important consequences for research and clinical practice.
Affiliation(s)
- A Kerr
- Department of Biomedical Engineering, University of Strathclyde, 106 Rottenrow, Glasgow G4 0NW, United Kingdom.
19
Rodger M, Ramsay T, Fergusson D. Diagnostic randomized controlled trials: the final frontier. Trials 2012; 13:137. [PMID: 22897974 PMCID: PMC3495679 DOI: 10.1186/1745-6215-13-137] [Citation(s) in RCA: 53]
Abstract
Clinicians, patients, governments, third-party payers, and the public take for granted that diagnostic tests are accurate, safe and effective. However, we may be seriously misled if we are relying on robust study design to ensure accurate, safe, and effective diagnostic tests. Properly conducted randomized controlled trials are the gold standard for assessing the effectiveness and safety of interventions, yet they are rarely conducted in the assessment of diagnostic tests. Instead, diagnostic cohort studies are commonly performed to assess the characteristics of a diagnostic test, including sensitivity and specificity. While diagnostic cohort studies can inform us about the relative accuracy of an experimental diagnostic intervention compared to a reference standard, they do not inform us about whether the differences in accuracy are clinically important, or about the degree of clinical importance (in other words, the impact on patient outcomes). In this commentary we outline the advantages of the diagnostic randomized controlled trial and suggest greater awareness and uptake of this design. Doing so will better ensure that patients are offered diagnostic procedures that make a clinical difference.
Affiliation(s)
- Marc Rodger
- Thrombosis Program, Division of Hematology, Department of Medicine, University of Ottawa, ON, Canada
20
Shaghaghi A, Matlabi H. Reporting of health promotion research: addressing the quality gaps in Iran. Health Promot Perspect 2012; 2:48-52. [PMID: 24688917 PMCID: PMC3963651 DOI: 10.5681/hpp.2012.006] [Citation(s) in RCA: 1]
Abstract
Quality of health behavior research determines the usefulness of its findings for application. The authors individually scrutinized the quality of a representative sample of abstracts (n=315) submitted to the 1st International and 4th National Congress on Health Education and Promotion, held in Tabriz, Iran on 16-19 May 2011. Among the assessed abstracts, the introduction section had a standard format in 18.1% (CI: 14.2-22.7%), the sampling method and sample size were both explained in 56.3% (CI: 50.3-62.1%), and in 40.6% (CI: 35.4-46.1%) the data were insufficient to support the conclusion section. The observed heterogeneity in the quality of Iranian research may reflect gaps in research methodology education. Revision of current research practice is recommended to ensure more rigorous national research output.
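The asymmetric confidence intervals quoted above are consistent with Wilson score intervals on n=315 (the abstract does not state the interval method, so this is an assumption); a minimal sketch reproduces the 18.1% figure's bounds:

```python
from math import sqrt

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 18.1% of 315 abstracts had a standard-format introduction:
lo, hi = wilson_ci(0.181, 315)
print(round(lo * 100, 1), round(hi * 100, 1))  # → 14.2 22.7
```

The match with the reported 14.2-22.7% supports the Wilson assumption; a plain Wald interval on the same data would give roughly 13.8-22.4%.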
Affiliation(s)
- Abdolreza Shaghaghi
- The Medical Education Research Centre, R & D Campus; Department of Health Education and Promotion, Faculty of Health and Nutrition, Tabriz University of Medical Sciences, Tabriz, Iran
- Hossein Matlabi
- The Medical Education Research Centre, R & D Campus; Department of Health Education and Promotion, Faculty of Health and Nutrition, Tabriz University of Medical Sciences, Tabriz, Iran
21
Colditz GA, Crowley J. DNA cytometry testing for cervical cancer screening: approaches and reporting standards for new technologies. Clin Cancer Res 2011; 17:6971-2. [PMID: 21940755 DOI: 10.1158/1078-0432.ccr-11-1862] [Citation(s) in RCA: 3]
Abstract
Evaluation of new technologies requires rigorous methods to provide unbiased estimates of performance and so inform future clinical practice. We review evidence on DNA cytometry reported earlier in this journal and point to the standards for reporting of diagnostic accuracy as a metric against which that article can be evaluated. The cross-sectional nature of the data and incomplete reporting limit the clinical utility of the study. With the application of improved reporting standards for diagnostic tests and improved design and evaluation of new technologies for screening, we may better inform practices that improve clinical outcomes and population health.
Affiliation(s)
- Graham A Colditz
- Division of Public Health Sciences, Department of Surgery, Washington University School of Medicine, St. Louis, MO 63110, USA.