51
Raittio E, Sofi-Mahmudi A, Uribe SE. Research transparency in dental research: A programmatic analysis. Eur J Oral Sci 2023; 131:e12908. PMID: 36482006; PMCID: PMC10108147; DOI: 10.1111/eos.12908.
Abstract
We assessed adherence to five transparency practices (data sharing, code sharing, conflict of interest disclosure, funding disclosure, and protocol registration) in articles in dental journals. We searched and exported the full text of all research articles from PubMed-indexed dental journals available in the Europe PubMed Central database until the end of 2021. We programmatically assessed their adherence to the five transparency practices using a validated and automated tool. Journal- and article-related information was retrieved from ScimagoJR and Journal Citation Reports. Of all 329,784 articles published in PubMed-indexed dental journals, 10,659 (3.2%) were available to download. Of those, 77% included a conflict of interest disclosure and 62% included a funding disclosure. Seven percent of the articles had a registered protocol. Data sharing (2.0%) and code sharing (0.1%) were rarer. Sixteen percent of articles did not adhere to any of the five transparency practices, 29% adhered to one, 48% adhered to two, 7.0% adhered to three, 0.3% adhered to four, and no article adhered to all five practices. Adherence to transparency practices increased over time; however, data and code sharing in particular remained rare. Coordinated efforts involving all stakeholders are needed to change current transparency practices in dental research.
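The programmatic screening described above lends itself to simple text matching. A minimal sketch in Python, assuming plain-text article files; the patterns below are illustrative stand-ins, not the validated tool the authors used:

```python
import re

# Illustrative patterns only; the validated tool used in the study applies
# far more comprehensive, context-aware rules.
INDICATORS = {
    "data_sharing": re.compile(r"data (are|is) (openly |publicly )?available", re.I),
    "code_sharing": re.compile(r"(code|scripts?) (are|is) available", re.I),
    "coi_disclosure": re.compile(r"conflicts? of interest", re.I),
    "funding_disclosure": re.compile(r"funding|was (supported|funded) by", re.I),
    "registration": re.compile(r"prospero|clinicaltrials\.gov|preregist", re.I),
}

def screen_article(full_text: str) -> dict:
    """Flag each transparency practice as present or absent in one article."""
    return {name: bool(rx.search(full_text)) for name, rx in INDICATORS.items()}

article = "Conflict of interest: none declared. This study was supported by grant X."
print(screen_article(article))
# -> data_sharing/code_sharing/registration False; coi and funding True
```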
Affiliation(s)
- Eero Raittio: Institute of Dentistry, University of Eastern Finland, Kuopio, Finland; Department of Dentistry and Oral Health, Aarhus University, Aarhus, Denmark
- Ahmad Sofi-Mahmudi: Seqiz Health Network, Kurdistan University of Medical Sciences, Seqiz, Kurdistan, Iran; Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
- Sergio E Uribe: Department of Conservative Dentistry and Oral Health, Riga Stradins University, Riga, Latvia; School of Dentistry, Universidad Austral de Chile, Valdivia, Chile; Baltic Biomaterials Centre of Excellence, Headquarters at Riga Technical University, Riga, Latvia
52
Serghiou S, Axfors C, Ioannidis JPA. Lessons learnt from registration of biomedical research. Nat Hum Behav 2023; 7:9-12. PMID: 36604496; DOI: 10.1038/s41562-022-01499-0.
Affiliation(s)
- Stylianos Serghiou: Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA; Meta-Research Innovation Center at Stanford (METRICS), Stanford School of Medicine, Stanford, CA, USA; Prolaio, Inc., Scottsdale, AZ, USA
- Cathrine Axfors: Meta-Research Innovation Center at Stanford (METRICS), Stanford School of Medicine, Stanford, CA, USA
- John P A Ioannidis: Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA; Meta-Research Innovation Center at Stanford (METRICS), Stanford School of Medicine, Stanford, CA, USA; Stanford Prevention Research Center, Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA; Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA; Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA, USA
53
Hamilton DG, Page MJ, Finch S, Everitt S, Fidler F. How often do cancer researchers make their data and code available and what factors are associated with sharing? BMC Med 2022; 20:438. PMID: 36352426; PMCID: PMC9646258; DOI: 10.1186/s12916-022-02644-2.
Abstract
BACKGROUND Various stakeholders are calling for increased availability of data and code from cancer research. However, it is unclear how commonly these products are shared, and what factors are associated with sharing. Our objective was to evaluate how frequently oncology researchers make data and code available and to explore factors associated with sharing. METHODS We performed a cross-sectional analysis of a random sample of 306 cancer-related articles indexed in PubMed in 2019 that studied research subjects with a cancer diagnosis. All articles were independently screened for eligibility by two authors. Outcomes of interest included the prevalence of affirmative sharing declarations and the rate at which declared data complied with key FAIR principles (e.g. posted to a recognised repository, assigned an identifier, data license outlined, non-proprietary formatting). We also investigated associations between sharing rates and several journal characteristics (e.g. sharing policies, publication models), study characteristics (e.g. cancer rarity, study design), open science practices (e.g. pre-registration, pre-printing) and subsequent citation rates between 2020 and 2021. RESULTS One in five studies declared data were publicly available (59/306, 19%, 95% CI: 15-24%). However, when data availability was investigated this percentage dropped to 16% (49/306, 95% CI: 12-20%), and then to less than 1% (1/306, 95% CI: 0-2%) when data were checked for compliance with key FAIR principles. While only 4% of articles that used inferential statistics reported code to be available (10/274, 95% CI: 2-6%), the odds of reporting code to be available were 5.6 times higher for researchers who shared data. Compliance with mandatory data and code sharing policies was observed in 48% (14/29) and 0% (0/6) of articles, respectively. However, 88% of articles (45/51) included data availability statements when required. Policies that encouraged data sharing did not appear to be any more effective than having no policy at all. The only factors associated with higher rates of data sharing were studying rare cancers and using publicly available data to complement original research. CONCLUSIONS Data and code sharing in oncology occurs infrequently, and at a lower rate than would be expected given the prevalence of mandatory sharing policies. There is also a large gap between those declaring data to be available and those archiving data in a way that facilitates its reuse. We encourage journals to actively check compliance with sharing policies, and researchers to consult community-accepted guidelines when archiving the products of their research.
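The reported association between data sharing and code availability is an odds ratio from a 2x2 table. A worked sketch with hypothetical counts chosen to echo the abstract's headline numbers (49 data sharers, 10 articles reporting code):

```python
# Hypothetical 2x2 table (invented, but consistent with the abstract's
# 49 data sharers and 10 articles reporting code available):
#                      code available   code not available
# data shared                 5                 44
# data not shared             5                246

odds_shared = 5 / 44        # odds of code availability among data sharers
odds_not_shared = 5 / 246   # odds among the rest
print(f"OR = {odds_shared / odds_not_shared:.1f}")  # OR = 5.6
```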
Affiliation(s)
- Daniel G Hamilton: MetaMelb Research Group, School of BioSciences, University of Melbourne, Melbourne, Australia; Melbourne Medical School, Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne, Melbourne, Australia
- Matthew J Page: School of Public Health & Preventive Medicine, Monash University, Melbourne, Australia
- Sue Finch: Melbourne Statistical Consulting Platform, School of Mathematics and Statistics, University of Melbourne, Melbourne, Australia
- Sarah Everitt: Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Australia
- Fiona Fidler: MetaMelb Research Group, School of BioSciences, University of Melbourne, Melbourne, Australia; School of Historical and Philosophical Studies, University of Melbourne, Melbourne, Australia
54
Zavalis EA, Ioannidis JPA. A meta-epidemiological assessment of transparency indicators of infectious disease models. PLoS One 2022; 17:e0275380. PMID: 36206207; PMCID: PMC9543956; DOI: 10.1371/journal.pone.0275380.
Abstract
Mathematical models have become very influential, especially during the COVID-19 pandemic. Data and code sharing are indispensable for reproducing them, protocol registration may sometimes be useful, and declarations of conflicts of interest (COIs) and of funding are quintessential for transparency. Here, we evaluated these features in publications of infectious disease-related models and assessed whether there were differences before and during the COVID-19 pandemic, and between COVID-19 models and models for other diseases. We analysed all PubMed Central open access publications of infectious disease models published in 2019 and 2021 using previously validated text mining algorithms of transparency indicators. We evaluated 1338 articles: 216 from 2019 and 1122 from 2021 (of which 818 were on COVID-19), almost a six-fold increase in publications within the field. 511 (39.2%) were compartmental models, 337 (25.2%) were time series, 279 (20.9%) were spatiotemporal, 186 (13.9%) were agent-based and 25 (1.9%) contained multiple model types. 288 (21.5%) articles shared code, 332 (24.8%) shared data, 6 (0.4%) were registered, and 1197 (89.5%) and 1109 (82.9%) contained COI and funding statements, respectively. There were no major changes in transparency indicators between 2019 and 2021. COVID-19 articles were less likely to have funding statements and more likely to share code. Further validation was performed by manual assessment of 10% of the articles identified by text mining as fulfilling transparency indicators and of 10% of the articles lacking them. Correcting estimates for validation performance, 26.0% of papers shared code and 41.1% shared data. On manual assessment, 5/6 articles identified as registered had indeed been registered. Of articles containing COI and funding statements, 95.8% disclosed no conflict and 11.7% reported no funding. Transparency in infectious disease modelling is relatively low, especially for data and code sharing. This is concerning, considering the nature of this research and the heightened influence it has acquired.
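The correction of text-mined estimates for validation performance can be read as a standard misclassification adjustment; the exact procedure the authors used is not spelled out here, so the formula and validation figures below are assumptions for illustration:

```python
def corrected_prevalence(p_flagged: float, ppv: float, npv: float) -> float:
    """Adjust a text-mined prevalence for classifier error.

    p_flagged: share of articles the algorithm flagged (e.g. as sharing code)
    ppv: share of manually checked flagged articles that truly qualified
    npv: share of manually checked unflagged articles that truly did not
    """
    return p_flagged * ppv + (1 - p_flagged) * (1 - npv)

# Hypothetical validation figures; the paper reports only the corrected totals.
print(f"{corrected_prevalence(0.215, 0.95, 0.93):.1%}")  # ~26% code sharing
```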
Affiliation(s)
- Emmanuel A. Zavalis: Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America; Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Solna, Stockholm, Sweden
- John P. A. Ioannidis: Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America; Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, Stanford University, Stanford, California, United States of America
55
Vallet N, Michonneau D, Tournier S. Toward practical transparent verifiable and long-term reproducible research using Guix. Sci Data 2022; 9:597. PMID: 36195618; PMCID: PMC9532446; DOI: 10.1038/s41597-022-01720-9.
Abstract
The reproducibility crisis urges scientists to promote transparency, which allows peers to draw the same conclusions after performing identical steps from hypothesis to results. A growing number of resources are being developed to open access to methods, data and source code. Still, the computational environment, the interface between data and the source code running analyses, is not addressed. Environments are usually described with software and library names associated with version labels, or provided as an opaque container image. This is not enough to describe the complexity of the dependencies on which they rely to operate. We describe this issue and illustrate how open tools like Guix can be used by any scientist to share their environment and allow peers to reproduce it. Some steps of research might not be fully reproducible, but at least transparency for computation is technically addressable. These tools should be considered by scientists willing to promote transparency and open science.
Affiliation(s)
- Nicolas Vallet: Université de Paris, INSERM U976, F-75010, Paris, France
- David Michonneau: Université de Paris, INSERM U976, F-75010, Paris, France; Hematology Transplantation, Saint Louis Hospital, 1 avenue Claude Vellefaux, 75010, Paris, France
- Simon Tournier: Université de Paris, INSERM US53, CNRS UAR 2030, Saint Louis Research Institute, 1 avenue Claude Vellefaux, 75010, Paris, France
56
DeVito NJ, Morton C, Cashin AG, Richards GC, Lee H. Sharing study materials in health and medical research. BMJ Evid Based Med 2022:bmjebm-2022-111987. PMID: 36162960; DOI: 10.1136/bmjebm-2022-111987.
Abstract
Making study materials available allows for a more comprehensive understanding of the scientific literature. Sharing can take many forms and include a wide variety of outputs including code and data. Biomedical research can benefit from increased transparency but faces unique challenges for sharing, for instance, confidentiality concerns around participants' medical data. Both general and specialised repositories exist to aid in sharing most study materials. Sharing may also require skills and resources to ensure that it is done safely and effectively. Educating researchers on how to best share their materials, and properly rewarding these practices, requires action from a variety of stakeholders including journals, funders and research institutions.
Affiliation(s)
- Nicholas J DeVito: Bennett Institute for Applied Data Science, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Caroline Morton: Bennett Institute for Applied Data Science, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Aidan Gregory Cashin: School of Health Sciences, University of New South Wales, Sydney, New South Wales, Australia; Centre for Pain IMPACT, Neuroscience Research Australia, Randwick, New South Wales, Australia
- Georgia C Richards: Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Hopin Lee: Centre for Statistics in Medicine & Rehabilitation Research in Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, Oxford, Oxfordshire, UK; School of Medicine and Public Health, The University of Newcastle, Callaghan, New South Wales, Australia
57
Cadwallader L, Hrynaszkiewicz I. A survey of researchers' code sharing and code reuse practices, and assessment of interactive notebook prototypes. PeerJ 2022; 10:e13933. PMID: 36032954; PMCID: PMC9406794; DOI: 10.7717/peerj.13933.
Abstract
This research aimed to understand the needs and habits of researchers in relation to code sharing and reuse, gather feedback on prototype code notebooks created by NeuroLibre, and help determine strategies that publishers could use to increase code sharing. We surveyed 188 researchers in computational biology. Respondents were asked about how often and why they look at code, which methods of accessing code they find useful and why, what aspects of code sharing are important to them, and how satisfied they are with their ability to complete these tasks. Respondents were asked to look at a prototype code notebook and give feedback on its features. Respondents were also asked how much time they spent preparing code and if they would be willing to increase this to use a code sharing tool, such as a notebook. As readers of research articles, the most common reason (70%) for looking at code was to gain a better understanding of the article. The most commonly encountered method for code sharing, linking articles to a code repository, was also the most useful method of accessing code from the reader's perspective. As authors, the respondents were largely satisfied with their ability to carry out tasks related to code sharing. The most important of these tasks were ensuring that the code would run in the correct environment and sharing code with good documentation. The average researcher, according to our results, is unwilling to incur the additional costs (in time, effort or expenditure) that are currently needed to use code sharing tools alongside a publication. We infer that this means we need different models for funding and producing interactive or executable research outputs if they are to reach a large number of researchers. To increase the amount of code shared by authors, PLOS Computational Biology is, as a result, focusing on policy rather than tools.
58
Raittio E, Sofi-Mahmudi A, Shamsoddin E. The use of the phrase "data not shown" in dental research. PLoS One 2022; 17:e0272695. PMID: 35944050; PMCID: PMC9362922; DOI: 10.1371/journal.pone.0272695.
Abstract
OBJECTIVE The use of phrases such as "data/results not shown" is deemed an obscure way to represent scientific findings. Our aim was to investigate how frequently papers published in dental journals used these phrases and what kinds of results the authors referred to with them in 2021. METHODS We searched the Europe PubMed Central (PMC) database for open-access articles available from studies published in PubMed-indexed dental journals until December 31st, 2021. We searched the full texts for "data/results not shown" phrases and then calculated the proportion of articles with the phrases among all the available articles. For studies published in 2021, we evaluated whether the phrases referred to confirmatory results, negative results, peripheral results, sensitivity analysis results, future results, or other/unclear results. Journal- and publisher-related differences in publishing studies with the phrases in 2021 were tested with Fisher's exact test using the R v4.1.1 software. RESULTS The percentage of studies with the relevant phrases among the total number of studies in the database decreased from 13% to 3% between 2010 and 2020. In 2021, out of 2,434 studies published in 73 different journals by eight publishers, 67 (2.8%) used the phrases. Journal- and publisher-related differences in publishing studies with the phrases were detected in 2021 (p = 0.001 and p = 0.005, respectively). Most commonly, the phrases referred to negative (n = 16, 24%), peripheral (n = 22, 33%) or confirmatory (n = 11, 16%) results. The significance of the unpublished results to which the phrases referred varied considerably across studies. CONCLUSION Over the last decade, there has been a marked decrease in the use of the phrases "data/results not shown" in dental journals. However, the phrases were still notably in use in dental studies in 2021, despite the good availability of accessible free online supplements and repositories.
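The journal- and publisher-level comparisons were run with Fisher's exact test in R; a comparable check in Python on a hypothetical 2x2 table (invented counts, and scipy rather than the R fisher.test the authors used):

```python
from scipy.stats import fisher_exact

# Invented counts for two publishers; the study compared eight in R.
#                with phrase   without phrase
# Publisher A        12             488
# Publisher B         4             896
table = [[12, 488], [4, 896]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```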
Affiliation(s)
- Eero Raittio: Institute of Dentistry, University of Eastern Finland, Kuopio, Finland
- Ahmad Sofi-Mahmudi: Cochrane Iran Associate Centre, National Institute for Medical Research Development (NIMAD), Tehran, Iran; Seqiz Health Network, Kurdistan University of Medical Sciences, Seqiz, Kurdistan, Iran
- Erfan Shamsoddin: Cochrane Iran Associate Centre, National Institute for Medical Research Development (NIMAD), Tehran, Iran
59
Hansford HJ, Cashin AG, Bagg MK, Wewege MA, Ferraro MC, Kianersi S, Mayo-Wilson E, Grant SP, Toomey E, Skinner IW, McAuley JH, Lee H, Jones MD. Feasibility of an audit and feedback intervention to facilitate journal policy change towards greater promotion of transparency and openness in sports science research. Sports Med Open 2022; 8:101. PMID: 35932429; PMCID: PMC9357245; DOI: 10.1186/s40798-022-00496-x.
Abstract
OBJECTIVES To evaluate (1) the feasibility of an audit-feedback intervention to facilitate sports science journal policy change, (2) the reliability of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) policy evaluation form, and (3) the extent to which policies of sports science journals support transparent and open research practices. METHODS We conducted a cross-sectional, audit-feedback, feasibility study of the transparency and openness standards of the top 38 sports science journals by impact factor. The TRUST form was used to evaluate journal policies' support for transparent and open research practices. Feedback was provided to journal editors in the form of a tailored letter. Inter-rater reliability and agreement of the TRUST form were assessed using intraclass correlation coefficients and the standard error of measurement, respectively. Time-based criteria, fidelity of intervention delivery and qualitative feedback were used to determine feasibility. RESULTS The audit-feedback intervention was feasible based on the time taken to rate journals and provide tailored feedback. The mean (SD) score on the TRUST form (range 0-27) was 2.05 (1.99), reflecting low engagement with transparent and open practices. Inter-rater reliability of the overall score of the TRUST form was moderate [ICC(2,1) = 0.68 (95% CI 0.55-0.79)], with a standard error of measurement of 1.17. However, some individual items had poor reliability. CONCLUSION Policies of the top 38 sports science journals have potential for improved support for transparent and open research practices. The feasible audit-feedback intervention developed here warrants large-scale evaluation as a means to facilitate change in journal policies. REGISTRATION OSF (https://osf.io/d2t4s/).
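The reported standard error of measurement follows, approximately, from the reported SD and ICC under one common definition (SEM = SD * sqrt(1 - ICC)); a quick check, noting that small differences can arise from the exact variance components used:

```python
import math

def sem_from_icc(sd: float, icc: float) -> float:
    """Standard error of measurement under one common definition."""
    return sd * math.sqrt(1 - icc)

# Values reported in the abstract: SD = 1.99, ICC(2,1) = 0.68
print(round(sem_from_icc(1.99, 0.68), 2))  # 1.13, close to the reported 1.17
```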
Affiliation(s)
- Harrison J Hansford: School of Health Sciences, Faculty of Medicine and Health, UNSW Sydney, Sydney, Australia; Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
- Aidan G Cashin: School of Health Sciences, Faculty of Medicine and Health, UNSW Sydney, Sydney, Australia; Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
- Matthew K Bagg: Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia; Faculty of Health Sciences, Curtin Health Innovation Research Institute, Curtin University, Perth, Australia; Perron Institute for Neurological and Translational Science, Perth, Australia
- Michael A Wewege: School of Health Sciences, Faculty of Medicine and Health, UNSW Sydney, Sydney, Australia; Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
- Michael C Ferraro: Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
- Sina Kianersi: Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, IN, USA
- Evan Mayo-Wilson: Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, IN, USA
- Sean P Grant: Department of Social & Behavioural Sciences, Indiana University Richard M. Fairbanks School of Public Health at Indianapolis, Indianapolis, IN, USA
- Elaine Toomey: Health Research Institute, School of Allied Health, University of Limerick, Limerick, Ireland
- Ian W Skinner: School of Allied Health, Exercise and Sport Sciences, Charles Sturt University, Port Macquarie, Australia
- James H McAuley: School of Health Sciences, Faculty of Medicine and Health, UNSW Sydney, Sydney, Australia; Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
- Hopin Lee: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK; School of Medicine and Public Health, University of Newcastle, Newcastle, Australia
- Matthew D Jones: School of Health Sciences, Faculty of Medicine and Health, UNSW Sydney, Sydney, Australia; Centre for Pain IMPACT, Neuroscience Research Australia, Sydney, Australia
60
Hardwicke TE, Thibault RT, Kosie JE, Tzavella L, Bendixen T, Handcock SA, Köneke VE, Ioannidis JPA. Post-publication critique at top-ranked journals across scientific disciplines: a cross-sectional assessment of policies and practice. R Soc Open Sci 2022; 9:220139. PMID: 36039285; PMCID: PMC9399707; DOI: 10.1098/rsos.220139.
Abstract
Journals exert considerable control over letters, commentaries and online comments that criticize prior research (post-publication critique). We assessed policies (Study One) and practice (Study Two) related to post-publication critique at 15 top-ranked journals in each of 22 scientific disciplines (N = 330 journals). Two hundred and seven (63%) journals accepted post-publication critique and often imposed limits on length (median 1000, interquartile range (IQR) 500-1200 words) and time-to-submit (median 12, IQR 4-26 weeks). The most restrictive limits were 175 words and two weeks; some policies imposed no limits. Of 2066 randomly sampled research articles published in 2018 by journals accepting post-publication critique, 39 (1.9%, 95% confidence interval [1.4, 2.6]) were linked to at least one post-publication critique (there were 58 post-publication critiques in total). Of the 58 post-publication critiques, 44 received an author reply, of which 41 asserted that original conclusions were unchanged. Clinical Medicine had the most active culture of post-publication critique: all journals accepted post-publication critique and published the most post-publication critique overall, but also imposed the strictest limits on length (median 400, IQR 400-550 words) and time-to-submit (median 4, IQR 4-6 weeks). Our findings suggest that top-ranked academic journals often pose serious barriers to the cultivation, documentation and dissemination of post-publication critique.
Affiliation(s)
- Tom E. Hardwicke: Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, 1018 WT Amsterdam, The Netherlands
- Robert T. Thibault: Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA; School of Psychological Science, University of Bristol, Bristol, UK
- Jessica E. Kosie: Department of Psychology, Princeton University, Princeton, NJ, USA
- Theiss Bendixen: Department of the Study of Religion, Aarhus University, Aarhus, Denmark
- Sarah A. Handcock: Florey Department of Neuroscience and Mental Health, University of Melbourne, Melbourne, Australia
- John P. A. Ioannidis: Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA; Departments of Medicine, Epidemiology and Population Health, Biomedical Data Science, and Statistics, Stanford University, Stanford, CA, USA; Meta-Research Innovation Center Berlin (METRIC-B), QUEST Center for Transforming Biomedical Research, Berlin Institute of Health, Charité-Universitätsmedizin Berlin, Berlin, Germany
61
Replacing bar graphs of continuous data with more informative graphics: Are we making progress? Clin Sci (Lond) 2022; 136:1139-1156. PMID: 35822444; PMCID: PMC9366861; DOI: 10.1042/cs20220287.
Abstract
Recent work has raised awareness about the need to replace bar graphs of continuous data with informative graphs showing the data distribution. The impact of these efforts is not known. The present observational meta-research study examined how often scientists in different fields use various graph types, and assessed whether visualization practices changed between 2010 and 2020. We developed and validated an automated screening tool, designed to identify bar graphs of counts or proportions, bar graphs of continuous data, bar graphs with dot plots, dot plots, box plots, violin plots, histograms, pie charts, and flow charts. Papers from 23 fields (approximately 1000 papers per field per year) were randomly selected from PubMed Central and screened (n = 227,998). F1 scores for different graphs ranged between 0.83 and 0.95 in the internal validation set. While the tool also performed well in external validation sets, F1 scores were lower for uncommon graphs. Bar graphs are more often used incorrectly to display continuous data than they are used correctly to display counts or proportions. The proportion of papers that use bar graphs of continuous data varies markedly across fields (range in 2020: 4-58%), with high rates in biochemistry and cell biology, complementary and alternative medicine, physiology, genetics, oncology and carcinogenesis, pharmacology, microbiology and immunology. Visualization practices have improved in some fields in recent years. Fewer than 25% of papers use flow charts, which provide information about attrition and the risk of bias. The present study highlights the need for continued interventions to improve visualization and identifies the fields that would benefit most.
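The F1 scores quoted for the screening tool are the harmonic mean of precision and recall per graph type; a small worked example with assumed validation counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for one graph type."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Assumed validation counts for a single graph type:
print(round(f1_score(tp=90, fp=8, fn=12), 2))  # 0.9, within the reported range
```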
62
Menke J, Eckmann P, Ozyurt IB, Roelandse M, Anderson N, Grethe J, Gamst A, Bandrowski A. Establishing institutional scores with the Rigor and Transparency Index: Large-scale analysis of scientific reporting quality. J Med Internet Res 2022; 24:e37324. PMID: 35759334; PMCID: PMC9274430; DOI: 10.2196/37324.
Abstract
BACKGROUND Improving rigor and transparency measures should lead to improvements in reproducibility across the scientific literature; however, the assessment of measures of transparency tends to be very difficult if performed manually. OBJECTIVE This study addresses the enhancement of the Rigor and Transparency Index (RTI, version 2.0), which attempts to automatically assess the rigor and transparency of journals, institutions, and countries using manuscripts scored on criteria found in reproducibility guidelines (eg, Materials Design, Analysis, and Reporting checklist criteria). METHODS The RTI tracks 27 entity types using natural language processing techniques such as Bidirectional Long Short-term Memory Conditional Random Field-based models and regular expressions; this allowed us to assess over 2 million papers accessed through PubMed Central. RESULTS Between 1997 and 2020 (where data were readily available in our data set), rigor and transparency measures showed general improvement (RTI 2.29 to 4.13), suggesting that authors are taking the need for improved reporting seriously. The top-scoring journals in 2020 were the Journal of Neurochemistry (6.23), British Journal of Pharmacology (6.07), and Nature Neuroscience (5.93). We extracted the institution and country of origin from the author affiliations to expand our analysis beyond journals. Among institutions publishing >1000 papers in 2020 (in the PubMed Central open access set), Capital Medical University (4.75), Yonsei University (4.58), and University of Copenhagen (4.53) were the top performers in terms of RTI. In country-level performance, we found that Ethiopia and Norway consistently topped the RTI charts of countries with 100 or more papers per year. In addition, we tested our assumption that the RTI may serve as a reliable proxy for scientific replicability (ie, a high RTI represents papers containing sufficient information for replication efforts). Using work by the Reproducibility Project: Cancer Biology, we determined that replication papers (RTI 7.61, SD 0.78) scored significantly higher (P<.001) than the original papers (RTI 3.39, SD 1.12), which according to the project required additional information from authors to begin replication efforts. CONCLUSIONS These results align with our view that RTI may serve as a reliable proxy for scientific replicability. Unfortunately, RTI measures for journals, institutions, and countries fall short of the replicated paper average. If we consider the RTI of these replication studies as a target for future manuscripts, more work will be needed to ensure that the average manuscript contains sufficient information for replication attempts.
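Of the two detection approaches mentioned, the regular-expression side is easy to sketch; the pattern below for RRIDs is an illustrative approximation, not the RTI's actual rules:

```python
import re

# RRIDs look like "RRID:AB_2313773"; this pattern is an illustrative
# approximation of one of the 27 entity types the RTI tracks.
RRID_RX = re.compile(r"RRID:\s?[A-Za-z]+_[A-Za-z0-9_-]+")

text = "Sections were stained with anti-GFAP (Dako, RRID:AB_10013382)."
print(RRID_RX.findall(text))  # ['RRID:AB_10013382']
```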
Affiliation(s)
- Joe Menke: Center for Research in Biological Systems, University of California, San Diego, La Jolla, CA, United States; SciCrunch Inc., San Diego, CA, United States
- Peter Eckmann: SciCrunch Inc., San Diego, CA, United States; Department of Neuroscience, University of California, San Diego, La Jolla, CA, United States
- Ibrahim Burak Ozyurt: SciCrunch Inc., San Diego, CA, United States; Department of Neuroscience, University of California, San Diego, La Jolla, CA, United States
- Jeffrey Grethe: SciCrunch Inc., San Diego, CA, United States; Department of Neuroscience, University of California, San Diego, La Jolla, CA, United States
- Anthony Gamst: Department of Mathematics, University of California, San Diego, CA, United States
- Anita Bandrowski: SciCrunch Inc., San Diego, CA, United States; Department of Neuroscience, University of California, San Diego, La Jolla, CA, United States
63
Schulz R, Barnett A, Bernard R, Brown NJL, Byrne JA, Eckmann P, Gazda MA, Kilicoglu H, Prager EM, Salholz-Hillel M, Ter Riet G, Vines T, Vorland CJ, Zhuang H, Bandrowski A, Weissgerber TL. Is the future of peer review automated? BMC Res Notes 2022; 15:203. PMID: 35690782; PMCID: PMC9188010; DOI: 10.1186/s13104-022-06080-6.
Abstract
The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, have strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret the paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors' conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study's conclusions, potential impact and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.
Affiliation(s)
- Robert Schulz: BIH QUEST Center for Responsible Research, Berlin Institute of Health at Charité Universitätsmedizin Berlin, Berlin, Germany
- Adrian Barnett: Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health & Social Work, Queensland University of Technology, Brisbane, QLD, Australia
- René Bernard: NeuroCure Cluster of Excellence, Charité Universitätsmedizin Berlin, Berlin, Germany
- Jennifer A Byrne: Faculty of Medicine and Health, New South Wales Health Pathology, The University of Sydney, New South Wales, Australia
- Peter Eckmann: Department of Neuroscience, University of California, San Diego, La Jolla, CA, USA
- Małgorzata A Gazda: Comparative Functional Genomics group, UMR 3525, Institut Pasteur, Université de Paris, CNRS, INSERM UA12, Paris, France
- Halil Kilicoglu: School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL, USA
- Eric M Prager: Translational Research and Development, Cohen Veterans Bioscience, New York, NY, USA
- Maia Salholz-Hillel: BIH QUEST Center for Responsible Research, Berlin Institute of Health at Charité Universitätsmedizin Berlin, Berlin, Germany
- Gerben Ter Riet: Faculty of Health, Center of Expertise Urban Vitality, Amsterdam University of Applied Sciences, Amsterdam, The Netherlands
- Timothy Vines: DataSeer Research Data Services Ltd, Vancouver, BC, Canada
- Colby J Vorland: Indiana University School of Public Health-Bloomington, Bloomington, IN, USA
- Han Zhuang: School of Information Studies, Syracuse University, Syracuse, NY, USA
- Anita Bandrowski: Department of Neuroscience, University of California, San Diego, La Jolla, CA, USA
- Tracey L Weissgerber: BIH QUEST Center for Responsible Research, Berlin Institute of Health at Charité Universitätsmedizin Berlin, Berlin, Germany
64
Uribe SE, Sofi-Mahmudi A, Raittio E, Maldupa I, Vilne B. Dental research data availability and quality according to the FAIR principles. J Dent Res 2022; 101:1307-1313. PMID: 35656591; PMCID: PMC9516597; DOI: 10.1177/00220345221101321.
Abstract
According to the FAIR principles, data produced by scientific research should be findable, accessible, interoperable, and reusable, for instance, to be used in machine learning algorithms. However, to date, there is no estimate of the quantity or quality of dental research data evaluated against the FAIR principles. We aimed to determine the availability of open data in dental research and to assess compliance with the FAIR principles (or FAIRness) of shared dental research data. We downloaded all available articles published in PubMed-indexed dental journals from 2016 to 2021 that were open access from Europe PubMed Central. In addition, we took a random sample of 500 dental articles that were not open access through Europe PubMed Central. We programmatically assessed data sharing in the articles and the compliance of shared data with the FAIR principles. Results showed that of 7,509 investigated articles, 112 (1.5%) shared data. The average (SD) level of compliance with the FAIR metrics was 32.6% (31.9%). The averages for each metric were as follows: findability, 3.4 (2.7) of 7; accessibility, 1.0 (1.0) of 3; interoperability, 1.1 (1.2) of 4; and reusability, 2.4 (2.6) of 10. No considerable changes in data sharing or in the quality of shared data occurred over the years. Our findings indicated that dental researchers rarely shared data, and when they did share, the FAIR quality was suboptimal. Machine learning algorithms could understand 1% of available dental research data. These findings undermine the reproducibility of dental research and hinder gaining the knowledge that can be gleaned from machine learning algorithms and applications.
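The overall compliance figure is consistent with pooling the four per-principle subscores over their combined 24 points; a quick arithmetic check (the exact F-UJI aggregation may weight items differently):

```python
# Mean subscores from the abstract as (points achieved, points possible):
subscores = {"findable": (3.4, 7), "accessible": (1.0, 3),
             "interoperable": (1.1, 4), "reusable": (2.4, 10)}

achieved = sum(got for got, _ in subscores.values())
possible = sum(total for _, total in subscores.values())
print(f"{achieved:.1f}/{possible} points = {achieved / possible:.1%}")
# 7.9/24 points = 32.9%, in line with the reported 32.6% mean
```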
Affiliation(s)
- S E Uribe: Bioinformatics Lab, Riga Stradins University, Riga, Latvia; Department of Conservative Dentistry and Oral Health, Riga Stradins University, Riga, Latvia; School of Dentistry, Universidad Austral de Chile, Valdivia, Chile; Baltic Biomaterials Centre of Excellence, Riga Technical University, Riga, Latvia
- A Sofi-Mahmudi: Seqiz Health Network, Kurdistan University of Medical Sciences, Seqiz, Kurdistan, Iran; Cochrane Iran Associate Centre, National Institute for Medical Research Development, Tehran, Iran
- E Raittio: Institute of Dentistry, University of Eastern Finland, Kuopio, Finland
- I Maldupa: Department of Conservative Dentistry and Oral Health, Riga Stradins University, Riga, Latvia
- B Vilne: Bioinformatics Lab, Riga Stradins University, Riga, Latvia
65
de Ridder J. How to trust a scientist. Stud Hist Philos Sci 2022; 93:11-20. PMID: 35247820; DOI: 10.1016/j.shpsa.2022.02.003.
Abstract
Epistemic trust among scientists is inevitable. There are two questions about this: (1) What is the content of this trust, what do scientists trust each other for? (2) Is such trust epistemically justified? I argue that if we assume a traditional answer to (1), namely that scientists trust each other to be reliable informants, then the answer to question (2) is negative, certainly for the biomedical and social sciences. This motivates a different construal of trust among scientists and therefore a different answer to (1): scientists trust each other to only testify to claims that are backed by evidence gathered in accordance with prevailing methodological standards. On this answer, trust among scientists is epistemically justified.
Affiliation(s)
- Jeroen de Ridder: Department of Philosophy, Vrije Universiteit Amsterdam, The Netherlands
66
Osborne C, Norris E. Pre-registration as behaviour: developing an evidence-based intervention specification to increase pre-registration uptake by researchers using the Behaviour Change Wheel. Cogent Psychology 2022. DOI: 10.1080/23311908.2022.2066304.
Affiliation(s)
- Christopher Osborne: Institute of Human Sciences, School of Anthropology and Museum Ethnography, University of Oxford, UK
- Emma Norris: Health Behaviour Change Research Group, Department of Health Sciences, Brunel University London, London, UK
67
Sofi-Mahmudi A, Raittio E. Transparency of COVID-19-related research in dental journals. Front Oral Health 2022; 3:871033. PMID: 35464778; PMCID: PMC9019132; DOI: 10.3389/froh.2022.871033.
Abstract
Objective We aimed to assess adherence to transparency practices (data availability, code availability, statements of protocol registration, and conflicts of interest and funding disclosures) and the FAIRness (findable, accessible, interoperable, and reusable) of shared data in open-access COVID-19-related articles published in dental journals available from the Europe PubMed Central (PMC) database. Methods We searched and exported all COVID-19-related open-access articles from PubMed-indexed dental journals available in the Europe PMC database in 2020 and 2021. We detected transparency indicators with a validated and automated tool developed to extract the indicators from the downloaded articles. Basic journal- and article-related information was retrieved from the PMC database. Then, for the articles that had shared data, we assessed their accordance with the FAIR data principles using the F-UJI online tool (f-uji.net). Results Of 650 available articles published in 59 dental journals, 74% provided a conflict of interest disclosure, 40% a funding disclosure, and 4% were preregistered. One study shared raw data (0.15%) and no study shared code. Transparent practices were more common in articles published in journals with higher impact factors, and in 2020 than in 2021. Adherence to the FAIR principles in the only paper that shared data was moderate. Conclusion While the majority of the papers had a COI disclosure, the prevalence of the other transparency practices was far from an acceptable level. A much stronger commitment to open science practices, particularly to preregistration, data sharing and code sharing, is needed from all stakeholders.
Affiliation(s)
- Ahmad Sofi-Mahmudi: Seqiz Health Network, Kurdistan University of Medical Sciences, Sanandaj, Iran; Cochrane Iran Associate Centre, National Institute for Medical Research Development, Tehran, Iran
- Eero Raittio: Institute of Dentistry, University of Eastern Finland, Kuopio, Finland
68
Abstract
The risk of accidental or deliberate misuse of biological research is increasing as biotechnology advances. As open science becomes widespread, we must consider its impact on those risks and develop solutions that ensure security while facilitating scientific progress. Here, we examine the interaction between open science practices and biosecurity and biosafety to identify risks and opportunities for risk mitigation. Increasing the availability of computational tools, datasets, and protocols could increase risks from research with misuse potential. For instance, in the context of viral engineering, open code, data, and materials may increase the risk of release of enhanced pathogens. For this dangerous subset of research, both open science and biosecurity goals may be achieved by using access-controlled repositories or application programming interfaces. While preprints accelerate dissemination of findings, their increased use could challenge strategies for risk mitigation at the publication stage. This highlights the importance of oversight earlier in the research lifecycle. Preregistration of research, a practice promoted by the open science community, provides an opportunity for achieving biosecurity risk assessment at the conception of research. Open science and biosecurity experts have an important role to play in enabling responsible research with maximal societal benefit.
Affiliation(s)
- James Andrew Smith: Botnar Research Centre and Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, United Kingdom; National Institute for Health Research Oxford Biomedical Research Centre, John Radcliffe Hospital, Oxford, United Kingdom
- Jonas B. Sandbrink: Nuffield Department of Medicine, University of Oxford, Oxford, United Kingdom; Future of Humanity Institute, University of Oxford, Oxford, United Kingdom
69
White NM, Balasubramaniam T, Nayak R, Barnett AG. An observational analysis of the trope "A p-value of < 0.05 was considered statistically significant" and other cut-and-paste statistical methods. PLoS One 2022; 17:e0264360. PMID: 35263374; PMCID: PMC8906599; DOI: 10.1371/journal.pone.0264360.
Abstract
Appropriate descriptions of statistical methods are essential for evaluating research quality and reproducibility. Despite continued efforts to improve reporting in publications, inadequate descriptions of statistical methods persist. At times, reading statistical methods sections can conjure feelings of déjà vu, with content resembling cut-and-pasted or "boilerplate" text from already published work. Instances of boilerplate text suggest a mechanistic approach to statistical analysis, where the same default methods are being used and described using standardized text. To investigate the extent of this practice, we analyzed text extracted from published statistical methods sections from PLOS ONE and the Australian and New Zealand Clinical Trials Registry (ANZCTR). Topic modeling was applied to analyze data from 111,731 papers published in PLOS ONE and 9,523 studies registered with the ANZCTR. PLOS ONE topics emphasized definitions of statistical significance, software and descriptive statistics. One in three PLOS ONE papers contained at least 1 sentence that was a direct copy from another paper. 12,675 papers (11%) closely matched the sentence "a p-value < 0.05 was considered statistically significant". Common topics across ANZCTR studies differentiated between study designs and analysis methods, with matching text found in approximately 3% of sections. Our findings quantify a serious problem affecting the reporting of statistical methods and shed light on perceptions about the communication of statistics as part of the scientific process. Results further emphasize the importance of rigorous statistical review to ensure that adequate descriptions of methods are prioritized over relatively minor details such as p-values and software when reporting research outcomes.
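The matching-text findings rest on detecting near-copies of stock sentences; a minimal sketch of one way to flag the significance trope with normalization and fuzzy matching (a simplification of the topic-modelling pipeline the authors used):

```python
import re
from difflib import SequenceMatcher

TROPE = "a p-value < 0.05 was considered statistically significant"

def normalize(sentence: str) -> str:
    """Lowercase and collapse common spacing/punctuation variants."""
    s = sentence.lower().replace("p value", "p-value").replace("<", "< ")
    return re.sub(r"\s+", " ", s).strip(" .")

def matches_trope(sentence: str, threshold: float = 0.9) -> bool:
    """Flag sentences that are near-copies of the significance trope."""
    return SequenceMatcher(None, normalize(sentence), TROPE).ratio() >= threshold

print(matches_trope("A P-value <0.05 was considered statistically significant."))
# True
```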
Affiliation(s)
- Nicole M. White: Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Australia
- Richi Nayak: Centre for Data Science, School of Computer Science, Queensland University of Technology, Brisbane, Australia
- Adrian G. Barnett: Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Australia
70
Heinl C, Scholman-Végh AMD, Mellor D, Schönfelder G, Strech D, Chamuleau S, Bert B. Declaration of common standards for the preregistration of animal research - speeding up the scientific progress. PNAS Nexus 2022; 1:pgac016. PMID: 36712788; PMCID: PMC9802105; DOI: 10.1093/pnasnexus/pgac016.
Abstract
Preregistration of studies is a recognized tool in clinical research to improve the quality and reporting of all gained results. In preclinical research, preregistration could boost the translation of published results into clinical breakthroughs. When studies rely on animal testing or form the basis of clinical trials, maximizing the validity and reliability of research outcomes becomes, in addition, an ethical obligation. Nevertheless, the implementation of preregistration in animal research is still slow. However, research institutions, funders, and publishers are starting to value preregistration, and thereby pave the way for its broader acceptance in the future. Three public registries, the OSF registry, preclinicaltrials.eu, and animalstudyregistry.org, already encourage the preregistration of research involving animals. Here, they jointly declare common standards to make preregistration a valuable tool for better science. Registries should meet the following criteria: public accessibility, transparency about their financial sources, tracking of changes, and warranty and sustainability of data. Furthermore, registration templates should cover a minimum set of mandatory information, and studies have to be uniquely identifiable. Finally, preregistered studies should be linked to any published outcome. To ensure that preregistration becomes a powerful instrument, publishers, funders, and institutions should refer to registries that fulfill these minimum standards.
Affiliation(s)
- Céline Heinl: German Federal Institute for Risk Assessment, German Centre for the Protection of Laboratory Animals (Bf3R), 10589 Berlin, Germany
- David Mellor: Center for Open Science, Charlottesville, VA, 22903-5083, USA
- Gilbert Schönfelder: German Federal Institute for Risk Assessment, German Centre for the Protection of Laboratory Animals (Bf3R), 10589 Berlin, Germany; Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, 10117 Berlin, Germany
- Daniel Strech: QUEST Center for Responsible Research, Berlin Institute of Health (BIH) at Charité, 10178 Berlin, Germany
- Steven Chamuleau: Department of Cardiology, Amsterdam University Medical Center, 1105 AZ Amsterdam, The Netherlands
- Bettina Bert: German Federal Institute for Risk Assessment, German Centre for the Protection of Laboratory Animals (Bf3R), 10589 Berlin, Germany
71
Data and code availability statements in systematic reviews of interventions were often missing or inaccurate: a content analysis. J Clin Epidemiol 2022; 147:1-10. DOI: 10.1016/j.jclinepi.2022.03.003.
72
Meid AD. Teaching reproducible research for medical students and postgraduate pharmaceutical scientists. BMC Res Notes 2021; 14:445. PMID: 34886890; PMCID: PMC8656016; DOI: 10.1186/s13104-021-05862-8.
Abstract
In medicine and other academic settings, (doctoral) students often work in interdisciplinary teams together with researchers in the pharmaceutical sciences, the natural sciences in general, or biostatistics. They should be fundamentally taught good research practices, especially in terms of statistical analysis. This includes reproducibility as a central aspect. Acknowledging that even experienced researchers and supervisors might be unfamiliar with necessary aspects of a perfectly reproducible workflow, a lecture series on reproducible research (RR) was developed for young scientists in clinical pharmacology. The pilot series highlighted definitions of RR, reasons for RR, potential merits of RR, and ways to work accordingly. The attempt to actually reproduce a published analysis revealed several practical obstacles. In this article, the reproduction of a working example is discussed to emphasize the manifold facets of RR, to provide possible explanations for difficulties and solutions, and to argue that harmonized curricula for (quantitative) clinical researchers should include RR principles. These experiences should raise awareness among educators and students, supervisors and young scientists. RR working habits are not only beneficial for ourselves or our students, but also for other researchers within an institution, for scientific partners, for the scientific community, and eventually for the public profiting from research findings.
Collapse
Affiliation(s)
- Andreas D Meid
- Department of Clinical Pharmacology and Pharmacoepidemiology, University of Heidelberg, Im Neuenheimer Feld 410, 69120, Heidelberg, Germany.
| |
Collapse
|
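As a companion to the teaching experiences described in entry 72, the following minimal sketch illustrates three RR working habits of the kind such a course advocates: a fixed random seed, a recorded software environment, and a checksummed result. It is an assumed illustration, not material from the lecture series itself.

```python
# A minimal reproducible-analysis script: identical output on every rerun.
import hashlib
import platform
import sys

import numpy as np

rng = np.random.default_rng(seed=20211007)  # fixed seed: identical draws on every run

# Record the software stack alongside the results.
print(f"python {sys.version.split()[0]} on {platform.platform()}")
print(f"numpy {np.__version__}")

sample = rng.normal(loc=0.0, scale=1.0, size=100)  # stands in for the real analysis
print(f"mean = {sample.mean():.4f}")

# Hash the result bytes: a collaborator who reruns the script can compare
# digests instead of eyeballing numbers.
print("sha256:", hashlib.sha256(sample.tobytes()).hexdigest()[:16], "...")
```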
74
|
Errington TM, Denis A, Perfito N, Iorns E, Nosek BA. Challenges for assessing replicability in preclinical cancer biology. eLife 2021; 10:e67995. [PMID: 34874008 PMCID: PMC8651289 DOI: 10.7554/elife.67995] [Citation(s) in RCA: 102] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 07/20/2021] [Indexed: 02/07/2023] Open
Abstract
We conducted the Reproducibility Project: Cancer Biology to investigate the replicability of preclinical research in cancer biology. The initial aim of the project was to repeat 193 experiments from 53 high-impact papers, using an approach in which the experimental protocols and plans for data analysis had to be peer reviewed and accepted for publication before experimental work could begin. However, the various barriers and challenges we encountered while designing and conducting the experiments meant that we were only able to repeat 50 experiments from 23 papers. Here we report these barriers and challenges. First, many original papers failed to report key descriptive and inferential statistics: the data needed to compute effect sizes and conduct power analyses was publicly accessible for just 4 of 193 experiments. Moreover, despite contacting the authors of the original papers, we were unable to obtain these data for 68% of the experiments. Second, none of the 193 experiments were described in sufficient detail in the original paper to enable us to design protocols to repeat the experiments, so we had to seek clarifications from the original authors. While authors were extremely or very helpful for 41% of experiments, they were minimally helpful for 9% of experiments, and not at all helpful (or did not respond to us) for 32% of experiments. Third, once experimental work started, 67% of the peer-reviewed protocols required modifications to complete the research and just 41% of those modifications could be implemented. Cumulatively, these three factors limited the number of experiments that could be repeated. This experience draws attention to a basic and fundamental concern about replication - it is hard to assess whether reported findings are credible.
Collapse
Affiliation(s)
| | | | | | | | - Brian A Nosek
- Center for Open Science, Charlottesville, United States
- University of Virginia, Charlottesville, United States
| |
Collapse
|
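The first barrier reported in entry 74 (summary statistics publicly accessible for only 4 of 193 experiments) is concrete enough to state as a calculation. The sketch below, with invented numbers, shows what the project needed from each original paper: a standardised effect size, then a power analysis for the replication. statsmodels' TTestIndPower is used here as one reasonable choice, not as the project's own tooling.

```python
# Invented summary statistics standing in for what an original paper would need
# to report; an illustration of the required calculation, not project data.
import math

from statsmodels.stats.power import TTestIndPower

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

d = cohens_d(m1=12.4, s1=3.1, n1=8, m2=9.8, s2=2.9, n2=8)
n_per_arm = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(f"d = {d:.2f}; each replication arm needs n >= {math.ceil(n_per_arm)}")
# Without the means, SDs and group sizes (missing for 189 of 193 experiments),
# neither calculation above is possible, which is the barrier described.
```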
75
|
Errington TM, Mathur M, Soderberg CK, Denis A, Perfito N, Iorns E, Nosek BA. Investigating the replicability of preclinical cancer biology. eLife 2021; 10:e71601. [PMID: 34874005 PMCID: PMC8651293 DOI: 10.7554/elife.71601] [Citation(s) in RCA: 95] [Impact Index Per Article: 23.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 10/16/2021] [Indexed: 12/18/2022] Open
Abstract
Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary - the replication was either a success or a failure - and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.
Collapse
Affiliation(s)
| | - Maya Mathur
- Quantitative Sciences Unit, Stanford University, Stanford, United States
| | | | | | | | | | - Brian A Nosek
- Center for Open Science, Charlottesville, United States
- University of Virginia, Charlottesville, United States
| |
Collapse
|
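The effect-size comparison method in entry 75 reduces to two summaries: the shrinkage of the median effect and the fraction of replication effects smaller than their originals. The sketch below computes both on invented paired data; it is a schematic of the comparison, not the project's analysis code.

```python
# Invented paired effect sizes (original experiment vs. its replication).
import numpy as np

original    = np.array([1.20, 0.85, 2.10, 0.60, 1.75])
replication = np.array([0.30, 0.10, 0.95, 0.05, 0.40])

shrinkage = 1 - np.median(replication) / np.median(original)
smaller   = np.mean(replication < original)

print(f"median replication effect is {shrinkage:.0%} smaller than the median original")
print(f"{smaller:.0%} of replication effects are smaller than their originals")
```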
76
|
Hamilton DG, Fraser H, Fidler F, McDonald S, Rowhani-Farid A, Hong K, Page MJ. Rates and predictors of data and code sharing in the medical and health sciences: Protocol for a systematic review and individual participant data meta-analysis. F1000Res 2021; 10:491. [PMID: 34631024 PMCID: PMC8485098 DOI: 10.12688/f1000research.53874.2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/01/2021] [Indexed: 01/06/2023] Open
Abstract
Numerous studies have demonstrated low but increasing rates of data and code sharing within medical and health research disciplines. However, it remains unclear how commonly data and code are shared across all fields of medical and health research, as well as whether sharing rates are positively associated with implementation of progressive policies by publishers and funders, or growing expectations from the medical and health research community at large. Therefore, this systematic review aims to synthesise the findings of medical and health science studies that have empirically investigated the prevalence of data or code sharing, or both. Objectives include the investigation of: (i) the prevalence of public sharing of research data and code alongside published articles (including preprints), (ii) the prevalence of private sharing of research data and code in response to reasonable requests, and (iii) factors associated with the sharing of either research output (e.g., the year published, the publisher's policy on sharing, the presence of a data or code availability statement). It is hoped that the results will provide insight into how often research data and code are shared publicly and privately, how this has changed over time, and how effective measures such as data sharing policies and data availability statements have been in motivating researchers to share their underlying data and code.
Collapse
Affiliation(s)
- Daniel G Hamilton
- MetaMelb Research Group, School of BioSciences, University of Melbourne, Parkville, Victoria, 3010, Australia
| | - Hannah Fraser
- MetaMelb Research Group, School of BioSciences, University of Melbourne, Parkville, Victoria, 3010, Australia
| | - Fiona Fidler
- MetaMelb Research Group, School of BioSciences, University of Melbourne, Parkville, Victoria, 3010, Australia.,School of Historical and Philosophical Studies, University of Melbourne, Parkville, Victoria, 3010, Australia
| | - Steve McDonald
- School of Public Health & Preventive Medicine, Monash University, Melbourne, Victoria, 3004, Australia
| | - Anisa Rowhani-Farid
- Department of Pharmaceutical Health Services Research, University of Maryland, Baltimore, Maryland, 21201, USA
| | - Kyungwan Hong
- Department of Pharmaceutical Health Services Research, University of Maryland, Baltimore, Maryland, 21201, USA
| | - Matthew J Page
- School of Public Health & Preventive Medicine, Monash University, Melbourne, Victoria, 3004, Australia
| |
Collapse
|
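For readers unfamiliar with how prevalence estimates such as these sharing rates are combined across studies, the sketch below pools invented study proportions with a DerSimonian-Laird random-effects model on the logit scale. This is one standard approach for meta-analysis of proportions, offered as an assumed illustration rather than the method specified in the Hamilton et al. protocol.

```python
# Invented per-study counts: articles that shared data out of articles assessed.
import numpy as np

events = np.array([12, 30, 5, 44])
totals = np.array([200, 350, 90, 400])

p = events / totals
y = np.log(p / (1 - p))                  # logit-transformed proportions
v = 1 / events + 1 / (totals - events)   # approximate within-study variances

w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)      # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (v + tau2)                  # random-effects weights
pooled_logit = np.sum(w_star * y) / np.sum(w_star)
pooled = 1 / (1 + np.exp(-pooled_logit)) # back-transform to a proportion
print(f"pooled sharing prevalence ~ {pooled:.1%} (tau^2 = {tau2:.3f})")
```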
77
|
Burke NL, Frank GKW, Hilbert A, Hildebrandt T, Klump KL, Thomas JJ, Wade TD, Walsh BT, Wang SB, Weissman RS. Open science practices for eating disorders research. Int J Eat Disord 2021; 54:1719-1729. [PMID: 34555191 PMCID: PMC9107337 DOI: 10.1002/eat.23607] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Revised: 09/02/2021] [Accepted: 09/03/2021] [Indexed: 11/07/2022]
Abstract
This editorial seeks to encourage the increased application of three open science practices in eating disorders research: Preregistration, Registered Reports, and the sharing of materials, data, and code. For each of these practices, we introduce updated International Journal of Eating Disorders author and reviewer guidance. Updates include the introduction of open science badges; specific instructions about how to improve transparency; and the introduction of Registered Reports of systematic or meta-analytical reviews. The editorial also seeks to encourage the study of open science practices. Open science practices pose considerable time and other resource burdens. Therefore, research is needed to help determine the value of these added burdens and to identify efficient strategies for implementing open science practices.
Collapse
Affiliation(s)
- Natasha L. Burke
- Department of Psychology, Fordham University, Bronx, New York, USA
| | - Guido K. W. Frank
- Department of Psychiatry, University of California San Diego, San Diego, California, USA
| | - Anja Hilbert
- Department of Psychosomatic Medicine and Psychotherapy, Integrated Research and Treatment Center Adiposity Diseases, Behavioral Medicine Research Unit, Leipzig, Germany
| | - Thomas Hildebrandt
- Center of Excellence in Eating and Weight Disorders, Icahn School of Medicine at Mount Sinai, New York, New York, USA
| | - Kelly L. Klump
- Department of Psychology, Michigan State University, East Lansing, Michigan, USA
| | - Jennifer J. Thomas
- Eating Disorders Clinical and Research Program, Massachusetts General Hospital and Department of Psychiatry, Harvard Medical School, Boston, Massachusetts, USA
| | - Tracey D. Wade
- Blackbird Initiative, Órama Institute for Mental Health and Well-Being, Flinders University, Adelaide, South Australia, Australia
| | - B. Timothy Walsh
- New York State Psychiatric Institute and Department of Psychiatry, Columbia University Irving Medical Center, New York, New York, USA
| | - Shirley B. Wang
- Department of Psychology, Harvard University, Cambridge, Massachusetts, USA
| | | |
Collapse
|
78
|
Naudet F, Siebert M, Pellen C, Gaba J, Axfors C, Cristea I, Danchev V, Mansmann U, Ohmann C, Wallach JD, Moher D, Ioannidis JPA. Medical journal requirements for clinical trial data sharing: Ripe for improvement. PLoS Med 2021; 18:e1003844. [PMID: 34695113 PMCID: PMC8575305 DOI: 10.1371/journal.pmed.1003844] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Revised: 11/08/2021] [Indexed: 11/18/2022] Open
Abstract
Florian Naudet and co-authors discuss strengthening requirements for sharing clinical trial data.
Collapse
Affiliation(s)
- Florian Naudet
- Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d’Investigation Clinique de Rennes), Rennes, France
| | - Maximilian Siebert
- Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d’Investigation Clinique de Rennes), Rennes, France
| | - Claude Pellen
- Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d’Investigation Clinique de Rennes), Rennes, France
| | - Jeanne Gaba
- Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d’Investigation Clinique de Rennes), Rennes, France
| | - Cathrine Axfors
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, California, United States of America
- Department for Women’s and Children’s Health, Uppsala University, Uppsala, Sweden
| | - Ioana Cristea
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| | - Valentin Danchev
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, California, United States of America
- Stanford Prevention Research Center, Stanford University School of Medicine, Stanford, California, United States of America
| | - Ulrich Mansmann
- Ludwig-Maximilians University Munich, Institute for Medical Information Processing, Biometry, and Epidemiology, München, Germany
- Ludwig-Maximilians University Munich, OSCLMU—Open Science Center LMU, München, Germany
| | - Christian Ohmann
- European Clinical Research Infrastructure Network (ECRIN), Düsseldorf, Germany
| | - Joshua D. Wallach
- Department of Environmental Health Sciences, Yale School of Public Health, New Haven, Connecticut, United States of America
| | - David Moher
- Center for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
| | - John P. A. Ioannidis
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, California, United States of America
- Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, Stanford University, California, United States of America
| |
Collapse
|
79
|
Baždarić K, Vrkić I, Arh E, Mavrinac M, Gligora Marković M, Bilić-Zulle L, Stojanovski J, Malički M. Attitudes and practices of open data, preprinting, and peer-review-A cross sectional study on Croatian scientists. PLoS One 2021; 16:e0244529. [PMID: 34153041 PMCID: PMC8216536 DOI: 10.1371/journal.pone.0244529] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 05/10/2021] [Indexed: 11/30/2022] Open
Abstract
Attitudes towards open peer review, open data and the use of preprints influence scientists’ engagement with those practices, yet there is a lack of validated questionnaires that measure these attitudes. The goal of our study was to construct and validate such a questionnaire and use it to assess the attitudes of Croatian scientists. We first developed a 21-item questionnaire called Attitudes towards Open data sharing, preprinting, and peer-review (ATOPP), which had a reliable four-factor structure and measured attitudes towards open data, preprint servers, open peer review, and open peer review in small scientific communities. We then used the ATOPP to explore the attitudes of Croatian scientists (n = 541) towards these topics and to assess the association of their attitudes with their open science practices and demographic information. Overall, Croatian scientists’ attitudes towards these topics were generally neutral, with a median (Md) scale score of 3.3 out of a maximum of 5. We found no differences in attitudes by gender (P = 0.995) or field (P = 0.523). However, scientists who had previously engaged in open peer review or preprinting held more favourable attitudes than those who had not (Md 3.5 vs. 3.3, P < 0.001, and Md 3.6 vs. 3.3, P < 0.001, respectively). Further research is needed to determine the best ways of improving scientists’ attitudes towards, and uptake of, open science practices.
Collapse
Affiliation(s)
- Ksenija Baždarić
- Department of Medical Informatics, Faculty of Medicine, University of Rijeka, Rijeka, Croatia
| | - Iva Vrkić
- Department of Geophysics, Faculty of Science, University of Zagreb, Zagreb, Croatia
| | - Evgenia Arh
- Library, Faculty of Medicine, University of Rijeka, Rijeka, Croatia
| | - Martina Mavrinac
- Department of Medical Informatics, Faculty of Medicine, University of Rijeka, Rijeka, Croatia
| | - Maja Gligora Marković
- Department of Medical Informatics, Faculty of Medicine, University of Rijeka, Rijeka, Croatia
| | - Lidija Bilić-Zulle
- Department of Medical Informatics, Faculty of Medicine, University of Rijeka, Rijeka, Croatia
| | - Jadranka Stojanovski
- Department of Information Sciences, University of Zadar, Zadar, Croatia
- Centre for Scientific Information, The Ruđer Bošković Institute, Zagreb, Croatia
| | - Mario Malički
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America
| |
Collapse
|
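The group comparisons reported in entry 79 (medians with P-values for engaged vs. non-engaged scientists) are of the kind a Mann-Whitney U test produces. The sketch below reproduces the shape of that analysis on simulated 1-5 scale scores; the data, group sizes, and seed are invented, not the ATOPP sample.

```python
# Simulated attitude scores for two groups on a 1-5 scale.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
used_preprints   = np.clip(rng.normal(3.6, 0.5, 120), 1, 5)
never_preprinted = np.clip(rng.normal(3.3, 0.5, 400), 1, 5)

stat, p_value = mannwhitneyu(used_preprints, never_preprinted, alternative="two-sided")
print(f"medians: {np.median(used_preprints):.1f} vs {np.median(never_preprinted):.1f}, "
      f"U = {stat:.0f}, P = {p_value:.2g}")
```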
80
|
Tzanetakis GN, Koletsi D. Trial registration and selective outcome reporting in Endodontic Research: Evidence over a 5-year period. Int Endod J 2021; 54:1794-1803. [PMID: 34013569 DOI: 10.1111/iej.13573] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 05/14/2021] [Accepted: 05/17/2021] [Indexed: 12/19/2022]
Abstract
AIM To assess the prevalence of registration of clinical trials in endodontic research and to identify outcome reporting discrepancies between trial registration entries and the respective final publications. Associations with publication characteristics, such as journal, year of publication, origin, number of centres and authors, funding and statistical significance of the findings, were also sought. METHODOLOGY All reports of endodontic randomized controlled trials (RCTs) published in 4 endodontic specialty and 4 general dental journals from 1 January 2016 to 31 December 2020 were identified. Trial registration frequency patterns were assessed for the included RCTs, whilst for registered trials, outcome reporting discrepancies were recorded. Article characteristics such as year of publication, geographic region, number of centres, number of participating authors, funding and type of registration, among others, were identified. Descriptive statistics and univariable/multivariable logistic regression were used to examine the effect of study characteristics on the identified registration practices. RESULTS One hundred and fifty-five RCTs were included, the majority published in specialty journals (121/155; 78.1%). A total of 42.6% of the identified RCTs were registered (66/155), mostly retrospectively (38/66; 57.6%). There was strong evidence that each additional year of publication was associated with 1.42 times higher odds of registration (adjusted odds ratio = 1.42; 95% CI: 1.11, 1.80; p = .004). More than one third of registered RCTs presented outcome reporting discrepancies (24/66; 36.4%), and such inconsistencies were almost evenly distributed between primary and secondary outcomes. CONCLUSIONS Trial registration policies in endodontic research should be reviewed, and active endorsement of prospective registration should be prioritized for greater clarity and transparency of the disseminated research. Eliminating outcome reporting discrepancies would increase the credibility of research findings and reduce reporting bias.
Collapse
Affiliation(s)
- Giorgos N Tzanetakis
- Department of Endodontics, School of Dentistry, National and Kapodistrian University of Athens, Athens, Greece
| | - Despina Koletsi
- Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Zurich, Switzerland
| |
Collapse
|
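The adjusted odds ratio in entry 80 (1.42 per additional year) comes from a logistic regression of registration status on publication year. As a hedged illustration of that calculation, the sketch below fits the univariable version to simulated data with statsmodels; the simulated trend and seed are assumptions chosen only so the example runs, not the study's data.

```python
# Simulated registration outcomes for 155 RCTs published 2016-2020.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years_since_2016 = rng.integers(0, 5, size=155)
true_logit = -0.5 + 0.35 * years_since_2016   # assumed underlying trend
registered = (rng.random(155) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(years_since_2016)         # intercept corresponds to 2016
fit = sm.Logit(registered, X).fit(disp=0)

odds_ratio = np.exp(fit.params[1])            # OR per additional year
ci_low, ci_high = np.exp(fit.conf_int()[1])   # exponentiated 95% CI bounds
print(f"OR per year = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```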