1. Venkatesan S, Kalvapudi S, Muppidi V, Ajith K, Dutt A, Madhugiri VS. A survey of surveys: an evaluation of the quality of published surveys in neurosurgery. Acta Neurochir (Wien) 2024; 166:150. [PMID: 38528271] [DOI: 10.1007/s00701-024-06042-w]
Abstract
PURPOSE Surveys generate valuable data in epidemiologic and qualitative clinical research. The quality of a survey depends on its design, the number of responses it receives, and the reporting of the results. In this study, we aimed to assess the quality of surveys in neurosurgery. METHODS Neurosurgical surveys published between 2000 and 2020 (inclusive) were identified from PubMed. Various datapoints regarding the surveys were collated. The number of citations received by the papers was determined from Google Scholar. A 6-dimensional quality assessment tool was applied to the surveys. Parameters from this tool were combined with the number of responses received to create the survey quality score (SQS). RESULTS A total of 618 surveys were included for analysis. The target sample size correlated with the number of responses received. The response rate correlated positively with the target sample size and the number of reminders sent and negatively with the number of questions in the survey. The median number of authors on neurosurgery survey papers was 6. The number of authors correlated with the SQS and the number of citations received by published survey papers. The median normalized SQS (nSQS) for neurosurgical surveys was 65%. The nSQS independently predicted the citations received per year by surveys. CONCLUSIONS The modifiable factors that correlated with improvements in survey design were optimizing the number of questions, maximizing the target sample size, and incorporating reminders in the survey design. Increasing the number of contributing authors led to improvements in survey quality. The SQS was validated and correlated well with the citations received by surveys.
Affiliation(s)
- Sukumar Kalvapudi
- Division of Thoracic Surgery, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Varun Muppidi
- Department of Neurosurgery, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Karthik Ajith
- Department of Neurosurgery, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Akshat Dutt
- Department of General Surgery, All India Institute of Medical Sciences, Jodhpur, Rajasthan, India
- Venkatesh Shankar Madhugiri
- Gamma Knife Center, Department of Radiation Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
2. Pilotto S, Gencarelli J, Bova S, Gerosa L, Baroncini D, Olivotto S, Alfei E, Zaffaroni M, Suppiej A, Cocco E, Trojano M, Amato MP, D'Alfonso S, Martinelli-Boneschi F, Waubant E, Ghezzi A, Bergamaschi R, Pugliatti M. Etiological research in pediatric multiple sclerosis: A tool to assess environmental exposures (PEDiatric Italian Genetic and enviRonment ExposurE Questionnaire). Mult Scler J Exp Transl Clin 2021; 7:20552173211059048. [PMID: 34868629] [PMCID: PMC8640303] [DOI: 10.1177/20552173211059048]
Abstract
Background The etiology of pediatric-onset multiple sclerosis is unknown, although putative genetic and environmental factors appear to be involved. Among children, multiple sclerosis onset occurs closer to the susceptibility window than in adults, and the exposure to etiological environmental factors is more informative. An Italian multicentre case-control study (the PEDiatric Italian Genetic and enviRonment ExposurE, PEDIGREE study) was designed to investigate environmental exposures in pediatric-onset multiple sclerosis and their interaction with genetics. Objectives To collect evidence on exposures to environmental risk factors in pediatric-onset multiple sclerosis, a questionnaire was developed for the Italian population (PEDIGREE Questionnaire) and is presented here. Methods The PEDIGREE Questionnaire was developed from an existing tool used in case-control studies on pediatric-onset multiple sclerosis in the United States, and was translated, adapted and tested for the perceived relevance of its contents, acceptability, feasibility and reliability in a population of Italian pediatric subjects and their parents recruited from clinics and the general population. Results The PEDIGREE Questionnaire contents were overall deemed relevant by the study population, acceptable to 100% of participants and feasible for at least 98%. The reliability of the PEDIGREE Questionnaire ranged from 56% to 72%. Conclusion The PEDIGREE Questionnaire proves to be an efficient tool to assess environmental exposures in the Italian pediatric population. We encourage the dissemination of population-specific questionnaires and shared methodology to optimize efforts in MS etiological research.
Affiliation(s)
- Silvy Pilotto
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara, Italy
- Jessica Gencarelli
- Department of Medical Sciences - Pediatric Section, University of Ferrara, Ferrara, Italy
- Stefania Bova
- Pediatric Neurology Unit, V. Buzzi Children's Hospital, Milan, Italy
- Leonardo Gerosa
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara, Italy
- Enrico Alfei
- Pediatric Neurology Unit, V. Buzzi Children's Hospital, Milan, Italy
- Mauro Zaffaroni
- Multiple Sclerosis Centre, ASST Valle Olona, Gallarate, Italy
- Agnese Suppiej
- Department of Medical Sciences - Pediatric Section, University of Ferrara, Ferrara, Italy
- Eleonora Cocco
- Department of Medical Science and Public Health, University of Cagliari, Italy
- Maria Trojano
- Department of Basic Medical Sciences, Neuroscience and Sense Organs, University of Bari, Italy
- Emmanuelle Waubant
- Department of Neurology, UC San Francisco, San Francisco, California, USA
- Angelo Ghezzi
- Multiple Sclerosis Centre, ASST Valle Olona, Gallarate, Italy
- Maura Pugliatti
- Department of Neuroscience and Rehabilitation, Interdepartmental Research Center for the Study of Multiple Sclerosis and Inflammatory and Degenerative Diseases of the Nervous System, University of Ferrara, Ferrara, Italy
3. Guidolin K, Wexner SD, Jung F, Khan S, Deng SX, Kirubarajan A, Quereshy F, Chadi S. Strengths and weaknesses in the methodology of survey-based research in surgery: A call for standardization. Surgery 2021; 170:493-498. [PMID: 33608150] [DOI: 10.1016/j.surg.2021.01.006]
Abstract
BACKGROUND Survey-based studies are often the basis of policy changes; however, the methodologic quality of such research can be questionable. Methodologic reviews of survey-based studies have been conducted in other medical fields, but the surgical literature has not been assessed. METHODS All citations published in 9 major surgical journals from 2002 to 2019 were screened for studies administering surveys to health care professionals. Descriptive and methodologic data were collected by 2 reviewers who also assessed the transparency and quality of the methodology. Agreement between reviewers was assessed using a weighted κ statistic. Survey quality metrics were measured, descriptive statistics were calculated, and regression analysis was used to assess the association between subjective overall study quality and objective quality metrics. RESULTS We included 271 articles in our analysis; the weighted κ for reviewer quality assessment was 0.69 and for transparency assessment was 0.71. Deficiencies were identified in questionnaire development methodology and reporting, where the median number of developmental steps reported was 1 (of 8), and in the handling of incomplete/missing data: 63% of studies failed to report how incomplete questionnaires were managed, and 70% of studies failed to report missing data. Overall subjective quality was positively associated with objective quality metrics. CONCLUSION The deficiencies identified in the surgical literature highlight the need for improvement in the conduct and reporting of survey-based research, both in the surgical literature and more broadly. Adoption of a standardized reporting guideline for survey-based research may ameliorate the deficiencies identified by this study and other investigations.
Affiliation(s)
- Keegan Guidolin
- Faculty of Medicine, University of Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Canada; Princess Margaret Cancer Centre, Toronto, Canada. https://twitter.com/keeganguidolin
- Steven D Wexner
- Digestive Disease Center, Cleveland Clinic Florida, Weston, FL. https://twitter.com/SWexner
- Flora Jung
- Faculty of Medicine, University of Toronto, Canada. https://twitter.com/FloraJung95
- Shawn Khan
- Faculty of Medicine, University of Toronto, Canada. https://twitter.com/_ShawnKhan
- Fayez Quereshy
- Faculty of Medicine, University of Toronto, Canada; Department of Surgery, University Health Network, Toronto, Canada; Princess Margaret Cancer Centre, Toronto, Canada. https://twitter.com/QuereshyMD
- Sami Chadi
- Faculty of Medicine, University of Toronto, Canada; Department of Surgery, University Health Network, Toronto, Canada; Princess Margaret Cancer Centre, Toronto, Canada.
4. Pagano MB, Dunbar NM, Stanworth SJ. How do we design and report a high-quality survey? Transfusion 2020; 60:2178-2184. [PMID: 32643205] [DOI: 10.1111/trf.15861]
Abstract
Every day, new surveys are planned, distributed, or reported by health care professionals. Surveys are an inexpensive and convenient research tool used with increasing frequency as an approach to gather and collate information on attitudes and behaviors for a specific topic. However, surveys can squander the valuable time of respondents who may derive little, if any, benefit from participation. Similar to any other research methodology, a careful design is needed to avoid introducing bias and to obtain meaningful information. A recent study evaluating the quality of surveys addressing clinical topics in transfusion medicine (TM) identified common deficiencies in the quality and design, including the failure to report validity and reliability, to address nonresponse error, to report funding and ethics/consent considerations, and to discuss the generalizability of results. Instructions to authors for reporting survey results are lacking in most journals. Inadequate survey design, analysis, and reporting can prevent accurate data collection and compromise the interpretation of the results, which is of critical relevance considering the high citation rates for some of these surveys. Further, survey results might be used to inform policies when no higher level of evidence is available. In this article, the authors seek to provide practical recommendations for designing high-quality surveys based on personal experience and published literature and to address frequently missing key elements in survey-based studies related to clinical TM.
Affiliation(s)
- Monica B Pagano
- Division of Transfusion Medicine, Department of Laboratory Medicine, University of Washington, Seattle, Washington, USA
- Nancy M Dunbar
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Simon J Stanworth
- Transfusion Medicine, NHS Blood and Transplant, Oxford, UK; Department of Haematology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK; Radcliffe Department of Medicine, University of Oxford and NIHR Oxford Biomedical Research Centre (Haematology), Oxford, UK
5. Tran EM, Tran MM, Clark MA, Scott IU, Margo CE, Cosenza C, Johnson TP, Greenberg PB. Assessing the Quality of Published Surveys in Ophthalmology. Ophthalmic Epidemiol 2020; 27:339-343. [PMID: 32248737] [DOI: 10.1080/09286586.2020.1746359]
Abstract
PURPOSE Surveys are an important research modality in ophthalmology, but their quality has not been rigorously assessed. This study evaluated the quality of published ophthalmic surveys. METHODS Three survey methodologists, three senior ophthalmologists, and two research assistants developed a survey evaluation instrument focused on survey development and testing; sampling frame; response bias; results reporting; and ethics. Two investigators used the instrument to assess the quality of all ophthalmic surveys that were published between January 1, 2018 and December 31, 2018; indexed in MEDLINE/PubMed, Embase, and/or Web of Science; contained the search terms "ophthalmology" and "survey" or "questionnaire" in the title and/or abstract; and were available in English. RESULTS The search identified 626 articles; 60 met the eligibility criteria and were assessed with the survey evaluation instrument. Most surveys (93%; 56/60) defined the study population; 48% (29/60) described how question items were chosen; 30% (18/60) provided the survey for review or described the questions in sufficient detail; 30% (18/60) were pre-tested or piloted; 25% (15/60) reported validity/clinical sensibility testing; 15% (9/60) described techniques used to assess non-response bias; and 63% (38/60) documented review by an institutional review board (IRB). CONCLUSION The quality of published ophthalmic surveys can be improved by focusing on survey development, pilot testing, non-response bias and institutional review board review. The survey evaluation instrument can help guide researchers in conducting quality ophthalmic surveys and assist journal editors in evaluating surveys submitted for publication.
Affiliation(s)
- Elaine M Tran
- Division of Ophthalmology, Warren Alpert Medical School, Brown University, Providence, Rhode Island, USA; Section of Ophthalmology, Providence Veterans Affairs Medical Center, Providence, Rhode Island, USA
- Megan M Tran
- Division of Ophthalmology, Warren Alpert Medical School, Brown University, Providence, Rhode Island, USA; Section of Ophthalmology, Providence Veterans Affairs Medical Center, Providence, Rhode Island, USA
- Melissa A Clark
- Department of Health Services, School of Public Health, Brown University, Providence, Rhode Island, USA
- Ingrid U Scott
- Departments of Ophthalmology and Public Health Sciences, Penn State College of Medicine, Hershey, Pennsylvania, USA
- Curtis E Margo
- Departments of Ophthalmology and Pathology and Cell Biology, Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
- Carol Cosenza
- Center for Survey Research, University of Massachusetts Boston, Boston, Massachusetts, USA
- Timothy P Johnson
- Survey Research Laboratory, University of Illinois at Chicago, Chicago, Illinois, USA
- Paul B Greenberg
- Division of Ophthalmology, Warren Alpert Medical School, Brown University, Providence, Rhode Island, USA; Section of Ophthalmology, Providence Veterans Affairs Medical Center, Providence, Rhode Island, USA
6. Pagano MB, Dunbar NM, Tinmouth A, Apelseth TO, Lozano M, Cohn CS, Stanworth SJ. A methodological review of the quality of reporting of surveys in transfusion medicine. Transfusion 2018; 58:2720-2727. [DOI: 10.1111/trf.14937]
Affiliation(s)
- Monica B. Pagano
- Department of Laboratory Medicine, Division of Transfusion Medicine, University of Washington, Seattle, Washington
- Nancy M. Dunbar
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Alan Tinmouth
- Departments of Medicine and Laboratory Medicine & Pathology, University of Ottawa; University of Ottawa Centre for Transfusion Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Torunn Oveland Apelseth
- Laboratory of Clinical Biochemistry and Department of Immunology and Transfusion Medicine, Haukeland University Hospital, Bergen, Norway
- Miguel Lozano
- Department of Hemotherapy and Hemostasis, University Clinic Hospital, IDIBAPS, University of Barcelona, Barcelona, Spain
- Claudia S. Cohn
- Department of Laboratory Medicine and Pathology, University of Minnesota, Minneapolis, Minnesota
- Simon J. Stanworth
- Department of Haematology, Oxford University Hospitals; NHS Blood and Transplant, John Radcliffe Hospital; Radcliffe Department of Medicine, University of Oxford, Oxford, UK
7. Langbecker D, Caffery LJ, Gillespie N, Smith AC. Using survey methods in telehealth research: A practical guide. J Telemed Telecare 2017; 23:770-779. [DOI: 10.1177/1357633x17721814]
Abstract
Surveys are a common method for assessing patient and clinician perceptions, attitudes and outcomes of telehealth. However, inadequacies in both the conduct and reporting of survey studies are common in telehealth research. This article provides clinicians and researchers with practical guidance on the appropriate selection, use and reporting of survey tools for telehealth research. We identify common survey outcomes and instruments used in telehealth research, and methods to assess the validity and psychometric properties of survey tools. Enhancing the quality and reporting of telehealth research is important to improve our understanding of which telehealth-supported models of care improve outcomes and for which patient groups.
Affiliation(s)
- Danette Langbecker
- Centre for Online Health, The University of Queensland, Brisbane, Australia
- Liam J Caffery
- Centre for Online Health, The University of Queensland, Brisbane, Australia
- Nicole Gillespie
- UQ Business School, The University of Queensland, Brisbane, Australia
- Anthony C Smith
- Centre for Online Health, The University of Queensland, Brisbane, Australia
8. Altin SV, Finke I, Kautz-Freimuth S, Stock S. The evolution of health literacy assessment tools: a systematic review. BMC Public Health 2014; 14:1207. [PMID: 25418011] [PMCID: PMC4289240] [DOI: 10.1186/1471-2458-14-1207]
Abstract
BACKGROUND Health literacy (HL) is seen as an increasingly relevant issue for global public health and requires a reliable and comprehensive operationalization. To date, there is limited evidence on how the development of tools measuring HL has proceeded in recent years and whether scholars considered existing methodological guidance when developing an instrument. METHODS We performed a systematic review of generic measurement tools developed to assess HL by searching PubMed, ERIC, CINAHL and Web of Knowledge (2009 forward). Two reviewers independently reviewed abstracts/full-text articles for inclusion according to predefined criteria. Additionally, we conducted a reporting quality appraisal according to the survey reporting guideline SURGE. RESULTS We identified 17 articles reporting on the development and validation of 17 instruments measuring health literacy. More than two thirds of all instruments are based on a multidimensional construct of health literacy. Moreover, there is a trend towards mixed measurement (self-report and direct test) of health literacy, with 41% of instruments applying it, though results strongly indicate weak coherence between the underlying constructs measured. Overall, almost every third instrument is based on assessment formats modeled on already existing functional literacy screeners such as the REALM or the TOFHLA, and 30% of the included articles do not report significant reporting features specified in the SURGE guideline. CONCLUSIONS Scholars recently developing instruments that measure health literacy mainly comply with recommendations from the academic community by applying multidimensional constructs and combining measurement approaches to capture health literacy comprehensively. Nonetheless, there is still a dependence on assessment formats rooted in functional literacy measurement, contradicting the widespread call for new instruments. All things considered, there is no clear "consensus" on HL measurement but a convergence towards more comprehensive tools. Attention to this finding can help offer direction towards the development of comparable and reliable health literacy assessment tools that effectively respond to the informational needs of populations.
Affiliation(s)
- Sibel Vildan Altin
- Institute for Health Economics and Clinical Epidemiology, University Hospital of Cologne, Gleuelerstr 176-178, 50935 Cologne, Germany
9. Li AHT, Thomas SM, Farag A, Duffett M, Garg AX, Naylor KL. Quality of survey reporting in nephrology journals: a methodologic review. Clin J Am Soc Nephrol 2014; 9:2089-94. [PMID: 25267553] [DOI: 10.2215/cjn.02130214]
Abstract
BACKGROUND AND OBJECTIVES Survey research is an important research method used to determine individuals' attitudes, knowledge, and behaviors; however, as with other research methods, inadequate reporting threatens the validity of results. This study aimed to describe the quality of reporting of surveys published between 2001 and 2011 in the field of nephrology. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS The top nephrology journals were systematically reviewed (2001-2011: American Journal of Kidney Diseases, Nephrology Dialysis Transplantation, and Kidney International; 2006-2011: Clinical Journal of the American Society of Nephrology) for studies whose primary objective was to collect and report survey results. Included were nephrology journals with a heavy focus on clinical research and high impact factors. All titles and abstracts were screened in duplicate. Surveys were excluded if they were part of a multimethod study, evaluated only psychometric characteristics, or used semi-structured interviews. Information was collected on survey and respondent characteristics, questionnaire development (e.g., pilot testing), psychometric characteristics (e.g., validity and reliability), survey methods used to optimize response rate (e.g., system of multiple contacts), and response rate. RESULTS After screening 19,970 citations, 216 full-text articles were reviewed and 102 surveys were included. Approximately 85% of studies reported a response rate. Almost half of studies (46%) discussed how they developed their questionnaire and only about a quarter of studies (28%) mentioned the validity or reliability of the questionnaire. The only characteristic that improved over the years was the proportion of articles reporting missing data (2001-2004: 46.4%; 2005-2008: 61.9%; 2009-2011: 84.8%; P<0.01). CONCLUSIONS The quality of survey reporting in nephrology journals remains suboptimal. In particular, reporting of the validity and reliability of the questionnaire must be improved. Guidelines to improve survey reporting and increase transparency are clearly needed.
Affiliation(s)
- Alvin Ho-Ting Li
- Division of Nephrology, Department of Medicine, and Department of Epidemiology & Biostatistics, Western University, London, Ontario, Canada
| | - Sonia M Thomas
- Division of Nephrology, Department of Medicine, and Department of Epidemiology & Biostatistics, Western University, London, Ontario, Canada
- Mark Duffett
- Departments of Pediatrics and Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
- Amit X Garg
- Division of Nephrology, Department of Medicine, and Department of Epidemiology & Biostatistics, Western University, London, Ontario, Canada; Institute for Clinical Evaluative Sciences, Toronto, Ontario, Canada
- Kyla L Naylor
- Division of Nephrology, Department of Medicine, and Department of Epidemiology & Biostatistics, Western University, London, Ontario, Canada
10. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, Michie S, Moher D, Wager E. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014; 383:267-76. [PMID: 24411647] [DOI: 10.1016/s0140-6736(13)62228-x]
Abstract
Research publication can both communicate and miscommunicate. Unless research is adequately reported, the time and resources invested in the conduct of research are wasted. Reporting guidelines such as CONSORT, STARD, PRISMA, and ARRIVE aim to improve the quality of research reports, but all are much less adopted and adhered to than they should be. Adequate reports of research should clearly describe which questions were addressed and why, what was done, what was shown, and what the findings mean. However, substantial failures occur in each of these elements. For example, studies of published trial reports showed that the poor description of interventions meant that 40-89% were non-replicable; comparisons of protocols with publications showed that most studies had at least one primary outcome changed, introduced, or omitted; and investigators of new trials rarely set their findings in the context of a systematic review, and cited a very small and biased selection of previous relevant trials. Although best documented in reports of controlled trials, inadequate reporting occurs in all types of studies: animal and other preclinical studies, diagnostic studies, epidemiological studies, clinical prediction research, surveys, and qualitative studies. In this report, and in the Series more generally, we point to waste at all stages in medical research. Although a more nuanced understanding of the complex systems involved in the conduct, writing, and publication of research is desirable, some immediate action can be taken to improve the reporting of research. Evidence for some recommendations is clear: change the current system of research rewards and regulations to encourage better and more complete reporting, and fund the development and maintenance of infrastructure to support better reporting, linkage, and archiving of all elements of research. However, the high amount of waste also warrants future investment in the monitoring of and research into reporting of research, and active implementation of the findings to ensure that research reports better address the needs of the range of research users.
Affiliation(s)
- Paul Glasziou
- Centre for Research in Evidence Based Practice, Bond University, Robina, QLD, Australia
- Douglas G Altman
- Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- Patrick Bossuyt
- Department of Clinical Epidemiology and Biostatistics, Academic Medical Center, University of Amsterdam, Amsterdam, Netherlands
- Mike Clarke
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Steven Julious
- Medical Statistics Group, University of Sheffield, Sheffield, UK
- Susan Michie
- Centre for Outcomes Research and Effectiveness, Department of Psychology, University College London, London, UK
- David Moher
- Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, ON, Canada
11. Xing W, Hejblum G, Valleron AJ. EpiBasket: how e-commerce tools can improve epidemiological preparedness. Emerg Health Threats J 2013; 6:19748. [PMID: 24183326] [PMCID: PMC3816197] [DOI: 10.3402/ehtj.v6i0.19748]
Abstract
Background Should an emerging infectious disease outbreak or an environmental disaster occur, the collection of epidemiological data must start as soon as possible after the event's onset. Questionnaires are usually built de novo for each event, resulting in substantially delayed epidemiological responses that are detrimental to the understanding and control of the event considered. Moreover, the public health and/or academic institution databases constructed with responses to different questionnaires are usually difficult to merge, impairing necessary collaborations. We aimed to show that e-commerce concepts and software tools can be readily adapted to enable rapid collection of data after an infectious disease outbreak or environmental disaster. Here, the 'customers' are the epidemiologists, who fill their shopping 'baskets' with standardised questions. Methods For each epidemiological field, a catalogue of questions is constituted by identifying the relevant variables based on a review of the published literature on similar circumstances. Each question is tagged with information on its source papers. Epidemiologists can then tailor their own questionnaires by choosing appropriate questions from this catalogue. The software immediately provides them with ready-to-use forms and online questionnaires. All databases constituted by the different EpiBasket users are interoperable, because the corresponding questionnaires are derived from the same corpus of questions. Results A proof-of-concept prototype was developed for Knowledge, Attitudes and Practice (KAP) surveys, one of the fields of epidemiological investigation frequently explored during, or after, an outbreak or environmental disaster. The catalogue of questions was initiated from a review of the KAP studies conducted during or after the 2003 severe acute respiratory syndrome epidemic. Conclusion Rapid collection of standardised data after an outbreak or environmental disaster can be facilitated by transposing the e-commerce paradigm to epidemiology, taking advantage of the powerful software tools already available.
Affiliation(s)
- Weijia Xing
- Institut National de la Santé et de la Recherche Médicale, Paris, France; Division of Infectious Disease, Key Laboratory of Surveillance and Early-warning on Infectious Disease, Chinese Center for Disease Control and Prevention, Beijing, China
12
Pugliatti M, Casetta I, Drulovic J, Granieri E, Holmøy T, Kampman MT, Landtblom AM, Lauer K, Myhr KM, Parpinel M, Pekmezovic T, Riise T, Zhu B, Wolfson C. A questionnaire for multinational case-control studies of environmental risk factors in multiple sclerosis (EnvIMS-Q). Acta Neurol Scand 2012:43-50. [PMID: 23278656 DOI: 10.1111/ane.12032] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/24/2012] [Indexed: 01/13/2023]
Abstract
OBJECTIVES The increasing incidence of multiple sclerosis (MS) worldwide, especially in women, points to the crucial role of environmental and lifestyle risk factors in determining the disease occurrence. An international multicentre case-control study of Environmental Risk Factors In Multiple Sclerosis (EnvIMS) has been launched in Norway, Sweden, Italy, Serbia and Canada, aimed to examine MS environmental risk factors in a large study population and disclose reciprocal interactions. To ensure equivalent methodology in detecting age-related past exposures in individuals with and without MS across the study sites, a new questionnaire (EnvIMS-Q) is presented. MATERIALS AND METHODS EnvIMS-Q builds on previously developed guidelines for epidemiological studies in MS and is a 6-page self-administered postal questionnaire. Participants are de-identified through the use of a numerical code. Its content is identical for cases and controls including 'core' and population-specific questions as proxies for vitamin D exposure (sun exposure, dietary habits and supplementation), childhood infections (including infectious mononucleosis) and cigarette smoking. Information on possible confounders or effect modifiers is also obtained. EnvIMS-Q was initially drafted in English and subsequently translated into Italian, Serbian, Norwegian, Swedish and French-Canadian. EnvIMS-Q has been tested for acceptability, feasibility and reliability. RESULTS AND CONCLUSIONS EnvIMS-Q has shown cross-cultural feasibility, acceptability and reliability in both patients with MS and healthy subjects from all sites. EnvIMS-Q is an efficient tool to ensure proper assessment of age-specific exposure to environmental factors in large multinational population-based case-control studies of MS risk factors.
Affiliation(s)
- I. Casetta
- Department of Biomedical and Surgical Sciences; Section of Clinical Neurology; University of Ferrara; Ferrara; Italy
- J. Drulovic
- Clinic of Neurology; Faculty of Medicine; University of Belgrade; Belgrade; Serbia
- E. Granieri
- Department of Biomedical and Surgical Sciences; Section of Clinical Neurology; University of Ferrara; Ferrara; Italy
- A.-M. Landtblom
- Division of Neurology; Department of Clinical and Experimental Medicine; UHL, County Council; Linköping University; Linköping; Sweden
- M. Parpinel
- Unit of Hygiene and Epidemiology; Department of Medical and Biological Sciences; University of Udine; Udine; Italy
- T. Pekmezovic
- Institute of Epidemiology; Faculty of Medicine; University of Belgrade; Belgrade; Serbia
- T. Riise
- Department of Public Health and Primary Health Care; University of Bergen; Bergen; Norway
- B. Zhu
- Research Institute of the McGill University Health Centre; Montreal; QC; Canada
13
Schroter S, Glasziou P, Heneghan C. Quality of descriptions of treatments: a review of published randomised controlled trials. BMJ Open 2012; 2:e001978. [PMID: 23180392 PMCID: PMC3533061 DOI: 10.1136/bmjopen-2012-001978] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
OBJECTIVES To be usable in clinical practice, reports of treatments studied in trials must provide sufficient information to enable clinicians and researchers to replicate them. We sought to assess the completeness of treatment descriptions in published randomised controlled trials (RCTs) using a checklist and to determine the extent to which peer reviewers and editors comment on the quality of reporting of treatments. DESIGN A cross-sectional study. SETTING Trials published in the BMJ, a general medical journal. PARTICIPANTS Fifty-one trials published in the BMJ were independently evaluated by two raters using a checklist. Reviewers' and editors' comments were also assessed for statements on treatment descriptions. PRIMARY AND SECONDARY OUTCOME MEASURES Proportion of trials rated as replicable (primary outcome). RESULTS For 57% (29/51) of the papers, the published treatment descriptions were not considered sufficient to allow replication. The most poorly described aspects were the actual procedures involved, including the sequencing of the technique (what happened and when) and the physical or informational materials used (eg, training materials): 53% and 43% not clear, respectively. For a third of treatments, the dose/duration of individual sessions was not clear, and for a quarter the schedule (interval, frequency, duration or timing) was not clear. Although the majority of problems were not picked up by reviewers and editors, when they were detected only about two-thirds were fixed before publication. CONCLUSIONS Journals wanting to publish research of use to practising healthcare professionals need to pay more attention to descriptions of treatments. Our checklist may be useful for reviewers and editors and could help ensure that important details of treatments are provided before papers are in the public domain.
Affiliation(s)
- Sara Schroter
- BMJ Editorial, London, UK
- Department of Primary Care, Centre for Evidence Based Medicine, Oxford University, Oxford, UK
- Paul Glasziou
- Department of Primary Care, Centre for Evidence Based Medicine, Oxford University, Oxford, UK
- Carl Heneghan
- Department of Primary Care, Centre for Evidence Based Medicine, Oxford University, Oxford, UK
14
Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, Grimshaw JM. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Med 2010; 8:e1001069. [PMID: 21829330 PMCID: PMC3149080 DOI: 10.1371/journal.pmed.1001069] [Citation(s) in RCA: 234] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/23/2010] [Accepted: 06/17/2011] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research. METHODS AND FINDINGS We conducted a three-part project: (1) a systematic review of the literature (including "Instructions to Authors" from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). CONCLUSIONS There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.
Affiliation(s)
- Carol Bennett
- Ottawa Hospital Research Institute, Clinical Epidemiology Program, Ottawa, Canada.
15
Nascimento MID, Monteiro GTR. [Characteristics of access to Pap smear: three methodological stages in the adaptation of a data collection instrument]. CAD SAUDE PUBLICA 2010; 26:1096-108. [PMID: 20657975 DOI: 10.1590/s0102-311x2010000600004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2009] [Accepted: 04/16/2010] [Indexed: 11/21/2022] Open
Abstract
The article describes the initial steps in the Portuguese-language adaptation of an instrument to measure characteristics of access to cervical cancer prevention. A universalist approach was adopted to assess conceptual, item, and semantic equivalence. The methodology included a literature review and participation by both experts and women representing the general population. Conceptual and item equivalence was established with participation by experts. Semantic equivalence was analyzed in five stages. Two translations into Portuguese and two back-translations into English were performed independently and evaluated by a third researcher from the public health field. The back-translations showed good referential and connotative similarity to the original, and a consensus version was formulated. Twenty-eight women participated in the pretest, eight of them in a focus group format. The version proved comprehensible, underwent some modifications, and is ready for the complementary stages of the cross-cultural adaptation process.
16
17
Burns KEA, Duffett M, Kho ME, Meade MO, Adhikari NKJ, Sinuff T, Cook DJ. A guide for the design and conduct of self-administered surveys of clinicians. CMAJ 2008; 179:245-52. [PMID: 18663204 DOI: 10.1503/cmaj.080372] [Citation(s) in RCA: 868] [Impact Index Per Article: 54.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Affiliation(s)
- Karen E A Burns
- Interdepartmental Division of Critical Care, University of Toronto, and Keenan Research Centre and the Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ont
18
Neilson HK, Robson PJ, Friedenreich CM, Csizmadi I. Estimating activity energy expenditure: how valid are physical activity questionnaires? Am J Clin Nutr 2008; 87:279-91. [PMID: 18258615 DOI: 10.1093/ajcn/87.2.279] [Citation(s) in RCA: 146] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
Activity energy expenditure (AEE) is the modifiable component of total energy expenditure (TEE) derived from all activities, both volitional and nonvolitional. Because AEE may affect health, there is interest in its estimation in free-living people. Physical activity questionnaires (PAQs) could be a feasible approach to AEE estimation in large populations, but it is unclear whether or not any PAQ is valid for this purpose. Our aim was to explore the validity of existing PAQs for estimating usual AEE in adults, using doubly labeled water (DLW) as a criterion measure. We reviewed 20 publications that described PAQ-to-DLW comparisons, summarized study design factors, and appraised criterion validity using mean differences (AEE(PAQ) - AEE(DLW), or TEE(PAQ) - TEE(DLW)), 95% limits of agreement, and correlation coefficients (AEE(PAQ) versus AEE(DLW) or TEE(PAQ) versus TEE(DLW)). Only 2 of 23 PAQs assessed most types of activity over the past year and indicated acceptable criterion validity, with mean differences (TEE(PAQ) - TEE(DLW)) of 10% and 2% and correlation coefficients of 0.62 and 0.63, respectively. At the group level, neither overreporting nor underreporting was more prevalent across studies. We speculate that, aside from reporting error, discrepancies between PAQ and DLW estimates may be partly attributable to 1) PAQs not including key activities related to AEE, 2) PAQs and DLW ascertaining different time periods, or 3) inaccurate assignment of metabolic equivalents to self-reported activities. Small sample sizes, use of correlation coefficients, and limited information on individual validity were problematic. Future research should address these issues to clarify the true validity of PAQs for estimating AEE.
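The criterion-validity summary this review relies on, a mean difference with 95% limits of agreement, is a Bland-Altman analysis of the paired PAQ and DLW estimates. A minimal sketch follows; the AEE values are made-up illustrative numbers, not data from the review.

```python
import statistics

def limits_of_agreement(paq, dlw):
    """Bland-Altman summary: mean of the paired differences (bias) and the
    95% limits of agreement, bias +/- 1.96 * SD of the differences."""
    diffs = [p - d for p, d in zip(paq, dlw)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical activity energy expenditure estimates (kcal/day):
# questionnaire-derived (PAQ) vs doubly labeled water (DLW) in 5 adults.
paq = [820, 640, 910, 500, 760]
dlw = [780, 700, 860, 540, 800]
bias, lo, hi = limits_of_agreement(paq, dlw)
```

A bias near zero with wide limits of agreement is the pattern the review flags: no systematic over- or underreporting at the group level can coexist with large errors for individuals, which is why the authors caution against judging PAQs by correlation coefficients alone.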
Affiliation(s)
- Heather K Neilson
- Division of Population Health and Information, Alberta Cancer Board, Calgary, Canada.