1. Vasileiou D, Karapiperis C, Baltsavia I, Chasapi A, Ahrén D, Janssen PJ, Iliopoulos I, Promponas VJ, Enright AJ, Ouzounis CA. CGG toolkit: Software components for computational genomics. PLoS Comput Biol 2023; 19:e1011498. PMID: 37934729; PMCID: PMC10629618; DOI: 10.1371/journal.pcbi.1011498.
Abstract
Public-domain availability for bioinformatics software resources is a key requirement that ensures long-term permanence and methodological reproducibility for research and development across the life sciences. These issues are particularly critical for widely used, efficient, and well-proven methods, especially those developed in research settings that often face funding discontinuities. We re-launch a range of established software components for computational genomics, as legacy version 1.0.1, suitable for sequence matching, masking, searching, clustering and visualization for protein family discovery, annotation and functional characterization on a genome scale. These applications are made available online as open source and include MagicMatch, GeneCAST, support scripts for CoGenT-like sequence collections, GeneRAGE and DifFuse, supported by centrally administered bioinformatics infrastructure funding. The toolkit may also be conceived as a flexible genome comparison software pipeline that supports research in this domain. We illustrate basic use by examples and pictorial representations of the registered tools, which are further described with appropriate documentation files in the corresponding GitHub release.

Affiliations
- Dimitrios Vasileiou: Biological Computation & Process Laboratory, Chemical Process & Energy Resources Institute, Centre for Research & Technology Hellas, Thessalonica, Greece
- Christos Karapiperis: Biological Computation & Process Laboratory, Chemical Process & Energy Resources Institute, Centre for Research & Technology Hellas, Thessalonica, Greece; Biological Computation & Computational Biology Group, AIIA Lab, School of Informatics, Aristotle University of Thessalonica, Thessalonica, Greece
- Ismini Baltsavia: Computational Biology Group, Faculty of Medicine, University of Crete, Heraklion, Greece
- Anastasia Chasapi: Biological Computation & Process Laboratory, Chemical Process & Energy Resources Institute, Centre for Research & Technology Hellas, Thessalonica, Greece
- Dag Ahrén: Department of Biology, Microbial Ecology Group, Lund University, Lund, Sweden
- Paul J. Janssen: Nuclear Medical Applications, Belgian Nuclear Research Centre SCK CEN, Mol, Belgium
- Ioannis Iliopoulos: Computational Biology Group, Faculty of Medicine, University of Crete, Heraklion, Greece
- Vasilis J. Promponas: Bioinformatics Research Laboratory, Department of Biological Sciences, New Campus, University of Cyprus, Nicosia, Cyprus
- Anton J. Enright: Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, United Kingdom
- Christos A. Ouzounis: Biological Computation & Process Laboratory, Chemical Process & Energy Resources Institute, Centre for Research & Technology Hellas, Thessalonica, Greece; Biological Computation & Computational Biology Group, AIIA Lab, School of Informatics, Aristotle University of Thessalonica, Thessalonica, Greece; SysBioBio.info (SBBI), Thessalonica, Greece

2. Qussini S, MacDonald RS, Shahbal S, Dierickx K. Blinding Models for Scientific Peer-Review of Biomedical Research Proposals: A Systematic Review. J Empir Res Hum Res Ethics 2023; 18:250-262. PMID: 37526052; DOI: 10.1177/15562646231191424.
Abstract
Objective: The aim of this systematic review is to estimate: (i) the overall effect of blinding models on bias; (ii) the effect of each blinding model; and (iii) the effect of un-blinding on reviewer's accountability in biomedical research proposals. Methods: Systematic review of prospective or retrospective comparative studies that evaluated two or more peer review blinding models for biomedical research proposals/funding applications and reported outcomes related to peer review efficiency. Results: Three studies that met the inclusion criteria were included in this review and assessed using the QualSyst tool by two authors. Conclusion: Our systematic review is the first to assess peer review blinding models in the context of funding. While only three studies were included, this highlighted the dire need for further RCTs that generate validated evidence. We also discussed multiple aspects of peer review, such as peer review in manuscripts vs proposals and peer review in other fields.

Affiliations
- Seba Qussini: Medical Research Center, Hamad Medical Corporation, Doha, Qatar
- Ross S MacDonald: Distributed eLibrary, Weill Cornell Medicine - Qatar, Education City, Doha, Qatar
- Saad Shahbal: Department of Medicine, Hamad Medical Corporation, Doha, Qatar
- Kris Dierickx: Centre for Biomedical Ethics and Law, Faculty of Medicine, KU Leuven, Leuven, Belgium

3. Ohniwa RL, Takeyasu K, Hibino A. The effectiveness of Japanese public funding to generate emerging topics in life science and medicine. PLoS One 2023; 18:e0290077. PMID: 37590186; PMCID: PMC10434904; DOI: 10.1371/journal.pone.0290077.
Abstract
Understanding the effectiveness of public funds to generate emerging topics will assist policy makers in promoting innovation. In the present study, we aim to clarify the effectiveness of grants to generate emerging topics in life sciences and medicine since 1991, with regard to Japanese researcher productivity and grants from the Japan Society for the Promotion of Science. To clarify how large grant amounts and which categories are more effective in generating emerging topics from both the PI and investment perspectives, we analyzed awarded PI publications containing emerging keywords (EKs; the elements of emerging topics) before and after funding. Our results demonstrated that, in terms of grant amounts, while PIs tended to generate more EKs with larger grants, the most effective investment from the investor's perspective was found in the smallest amount range per PI (less than 5 million JPY/year). Second, in terms of grant categories, we found that categories providing smaller amounts to diverse researchers without excellent past performance records were more effective, from the investment perspective, at generating EKs. Our results suggest that offering smaller, widely dispersed grants rather than large, concentrated grants is more effective in promoting the generation of emerging topics in life science and medicine.

Affiliations
- Ryosuke L. Ohniwa: Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; College of Medicine, National Taiwan University, Taipei, Taiwan
- Kunio Takeyasu: Center for Biotechnology, National Taiwan University, Taipei, Taiwan; Graduate School of Biostudies, Kyoto University, Kyoto, Japan
- Aiko Hibino: Faculty of Humanities and Social Sciences, Hirosaki University, Hirosaki, Japan

4. Gallo SA, Pearce M, Lee CJ, Erosheva EA. A new approach to grant review assessments: score, then rank. Res Integr Peer Rev 2023; 8:10. PMID: 37488628; PMCID: PMC10367367; DOI: 10.1186/s41073-023-00131-7.
Abstract
Background: In many grant review settings, proposals are selected for funding on the basis of summary statistics of review ratings. Challenges of this approach (including the presence of ties and unclear ordering of funding preference for proposals) could be mitigated if rankings such as top-k preferences or paired comparisons, which are local evaluations that enforce ordering across proposals, were also collected and incorporated in the analysis of review ratings. However, analyzing ratings and rankings simultaneously has not been done until recently. This paper describes a practical method for integrating rankings and scores and demonstrates its usefulness for making funding decisions in real-world applications. Methods: We first present the application of our existing joint model for rankings and ratings, the Mallows-Binomial, in obtaining an integrated score for each proposal and generating the induced preference ordering. We then apply this methodology to several theoretical "toy" examples of rating and ranking data, designed to demonstrate specific properties of the model. We then describe an innovative protocol for collecting rankings of the top-six proposals as an add-on to the typical peer review scoring procedures and provide a case study using actual peer review data to exemplify the output and how the model can appropriately resolve judges' evaluations. Results: For the theoretical examples, we show how the model can provide a preference order to equally rated proposals by incorporating rankings, to proposals using ratings and only partial rankings (and how they differ from a ratings-only approach) and to proposals where judges provide internally inconsistent ratings/rankings and outlier scoring. Finally, we discuss how, using real world panel data, this method can provide information about funding priority with a level of accuracy in a well-suited format for research funding decisions. Conclusions: A methodology is provided to collect and employ both rating and ranking data in peer review assessments of proposal submission quality, highlighting several advantages over methods relying on ratings alone. This method leverages information to most accurately distill reviewer opinion into a useful output to make an informed funding decision and is general enough to be applied to settings such as in the NIH panel review process.
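
The Mallows-Binomial model itself is beyond a short sketch, but the paper's core idea (rankings breaking ties among equally rated proposals) can be illustrated with a toy Borda-style tie-break. All proposal names, scores, and rankings below are hypothetical, and the tie-break is a simplified stand-in for, not an implementation of, the authors' joint model.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical ratings (lower = better, NIH-style) and top-3 rankings
# from three judges; names and numbers are illustrative only.
ratings = {
    "A": [2, 3, 2],
    "B": [2, 3, 2],   # tied with A on mean rating
    "C": [4, 4, 5],
}
rankings = [          # each judge's preference order, best first
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "B", "C"],
]

# Borda-style points: a proposal ranked r-th among k gets k - 1 - r points.
borda = defaultdict(int)
for order in rankings:
    k = len(order)
    for r, prop in enumerate(order):
        borda[prop] += k - 1 - r

# Sort by mean rating first (ascending = better), then by Borda points
# (descending), so rankings break ties among equally rated proposals.
final = sorted(ratings, key=lambda p: (mean(ratings[p]), -borda[p]))
print(final)  # ['A', 'B', 'C'] -- A edges out B via the rankings
```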

Affiliations
- Stephen A Gallo: American Institute of Biological Sciences, Washington, D.C., United States
- Michael Pearce: Department of Statistics, University of Washington, Seattle, United States
- Carole J Lee: Department of Philosophy, University of Washington, Seattle, United States
- Elena A Erosheva: Department of Statistics, University of Washington, Seattle, United States; School of Social Work, University of Washington, Seattle, United States; Center for Statistics and the Social Sciences, University of Washington, Seattle, United States

5. Kindsiko E, Rõigas K, Niinemets Ü. Getting funded in a highly fluctuating environment: Shifting from excellence to luck and timing. PLoS One 2022; 17:e0277337. PMID: 36342950; PMCID: PMC9639839; DOI: 10.1371/journal.pone.0277337.
Abstract
Recent data highlight the presence of luck in research grant allocation, with early-career researchers being the most vulnerable. National research funding typically contributes the greatest share of total research funding in a given country, simultaneously fulfilling the roles of promoting excellence in science and, most importantly, developing the careers of the young generation of scientists. Yet few studies have investigated how early-career researchers stand compared with advanced-career researchers within a national research grant system. We analyzed the highly competitive Estonian national research grant funding across different fields of research over the ten-year period 2013-2022, including all awarded grants for this period (845 grants, 658 individual principal investigators, PIs). The analysis was conducted separately for early-career and advanced-career researchers. We aimed to investigate how the age, scientific productivity, and previous grant success of the PI vary across a national research system by comparing early- and advanced-career researchers. Annual grant success rates varied between 14% and 28%, and within a discipline the success rate fluctuated across years even between 0% and 67%. Year-to-year fluctuations in grant success were stronger for early-career researchers. The study highlights that seniority does not automatically deliver better research performance; in some fields, younger PIs outperform older cohorts. Also, as the size of the available annual grants fluctuates markedly, early-career researchers are the most vulnerable, since they can apply for the starting grant only within a limited "time window".

Affiliations
- Eneli Kindsiko: School of Economics and Business Administration, University of Tartu, Tartu, Estonia
- Kärt Rõigas: School of Economics and Business Administration, University of Tartu, Tartu, Estonia

6. Shaw J. Peer review in funding-by-lottery: A systematic overview and expansion. Research Evaluation 2022. DOI: 10.1093/reseval/rvac022.
Abstract
Despite the surging interest in introducing lottery mechanisms into decision-making procedures for science funding bodies, the discourse on funding-by-lottery remains underdeveloped and, at times, misleading. Funding-by-lottery is sometimes presented as if it were a single mechanism when, in reality, there are many funding-by-lottery mechanisms with important distinguishing features. Moreover, funding-by-lottery is sometimes portrayed as an alternative to traditional methods of peer review when peer review is still used within funding-by-lottery approaches. This obscures a proper analysis of the (hypothetical and actual) variants of funding-by-lottery and important differences amongst them. The goal of this article is to provide a preliminary taxonomy of funding-by-lottery variants and evaluate how the existing evidence on peer review might lend differentiated support for variants of funding-by-lottery. Moreover, I point to gaps in the literature on peer review that must be addressed in future research. I conclude by building off of the work of Avin in moving toward a more holistic evaluation of funding-by-lottery. Specifically, I consider implications funding-by-lottery variants may have regarding trust and social responsibility.

Affiliations
- Jamie Shaw: Institut für Philosophie, Leibniz Universität Hannover, Hannover, Germany

7. Luo J, Feliciani T, Reinhart M, Hartstein J, Das V, Alabi O, Shankar K. Analyzing sentiments in peer review reports: Evidence from two science funding agencies. Quantitative Science Studies 2021. DOI: 10.1162/qss_a_00156.
Abstract
Using a novel combination of methods and data sets from two national funding agency contexts, this study explores whether review sentiment can be used as a reliable proxy for understanding peer reviewer opinions. We measure reviewer opinions via their review sentiments on both specific review subjects and proposals’ overall funding worthiness with three different methods: manual content analysis and two dictionary-based sentiment analysis algorithms (TextBlob and VADER). The reliability of review sentiment to detect reviewer opinions is addressed by its correlation with review scores and proposals’ rankings and funding decisions. We find in our samples that review sentiments correlate with review scores or rankings positively, and the correlation is stronger for manually coded than for algorithmic results; manual and algorithmic results are overall correlated across different funding programs, review sections, languages, and agencies, but the correlations are not strong; and manually coded review sentiments can quite accurately predict whether proposals are funded, whereas the two algorithms predict funding success with moderate accuracy. The results suggest that manual analysis of review sentiments can provide a reliable proxy of grant reviewer opinions, whereas the two SA algorithms can be useful only in some specific situations.
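
For readers unfamiliar with the two dictionary-based algorithms named above, both expose one-line APIs in Python. The sketch below scores a few invented review snippets with TextBlob and VADER and correlates the sentiments with made-up panel scores via Spearman's rho, mirroring the paper's reliability check at toy scale; all text and numbers are illustrative assumptions, not the study's data.

```python
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy.stats import spearmanr

# Hypothetical review excerpts paired with the panel scores they accompanied.
reviews = [
    ("The proposal is innovative and the methods are rigorous.", 5.5),
    ("The aims are unclear and the design has serious weaknesses.", 2.0),
    ("A solid plan, though the budget justification is thin.", 4.0),
]

vader = SentimentIntensityAnalyzer()
blob_polarity = [TextBlob(text).sentiment.polarity for text, _ in reviews]
vader_compound = [vader.polarity_scores(text)["compound"] for text, _ in reviews]
scores = [s for _, s in reviews]

# Rank correlation between review sentiment and panel scores.
for name, sentiment in [("TextBlob", blob_polarity), ("VADER", vader_compound)]:
    rho, p = spearmanr(sentiment, scores)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.2f})")
```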

Affiliations
- Junwen Luo: School of Information and Communication Studies, University College Dublin, Dublin, Ireland
- Thomas Feliciani: School of Sociology and Geary Institute of Public Policy, University College Dublin, Dublin, Ireland
- Martin Reinhart: Robert K. Merton Center for Science Studies, Humboldt-Universität zu Berlin, Berlin, Germany
- Judith Hartstein: German Centre for Higher Education Research and Science Studies (DZHW), Berlin, Germany; Faculty of Humanities and Social Sciences, Humboldt-Universität zu Berlin, Berlin, Germany
- Vineeth Das: School of Sociology and Geary Institute of Public Policy, University College Dublin, Dublin, Ireland
- Olalere Alabi: School of Sociology and Geary Institute of Public Policy, University College Dublin, Dublin, Ireland
- Kalpana Shankar: School of Information and Communication Studies, University College Dublin, Dublin, Ireland

8. Heyard R, Philipp T, Hottenrott H. Imaginary carrot or effective fertiliser? A rejoinder on funding and productivity. Scientometrics 2021. DOI: 10.1007/s11192-021-04130-7.
Abstract
The question of whether and to what extent research funding enables researchers to be more productive is a crucial one. In their recent work, Mariethoz et al. (Scientometrics, 2021; DOI: 10.1007/s11192-020-03855-1) claim that there is no significant relationship between project-based research funding and bibliometric productivity measures and conclude that this is the result of inappropriate allocation mechanisms. In this rejoinder, we argue that such claims are not supported by the data and analyses reported in the article.

9. Cruz-Castro L, Sanz-Menendez L. What should be rewarded? Gender and evaluation criteria for tenure and promotion. J Informetr 2021. DOI: 10.1016/j.joi.2021.101196.

10. Baveye PC. Objectivity of the peer-review process: Enduring myth, reality, and possible remedies. Learned Publishing 2021. DOI: 10.1002/leap.1414.

Affiliations
- Philippe C. Baveye: Université Paris-Saclay, avenue Lucien Brétignières, 78850 Thiverval-Grignon, France; Saint Loup Research Institute, 7 rue des chênes, La Grande Romelière, 79600 Saint Loup Lamairé, France

11. Bieri M, Roser K, Heyard R, Egger M. Face-to-face panel meetings versus remote evaluation of fellowship applications: simulation study at the Swiss National Science Foundation. BMJ Open 2021; 11:e047386. PMID: 33952554; PMCID: PMC8103360; DOI: 10.1136/bmjopen-2020-047386.
Abstract
Objectives: To trial a simplified, time- and cost-saving method for remote evaluation of fellowship applications and compare this with existing panel review processes by analysing concordance between funding decisions, and the use of a lottery-based decision method for proposals of similar quality. Design: The study involved 134 junior fellowship proposals for postdoctoral research ('Postdoc.Mobility'). The official method used two panel reviewers who independently scored the application, followed by triage and discussion of selected applications in a panel. Very competitive/uncompetitive proposals were directly funded/rejected without discussion. The simplified procedure used the scores of the two panel members, with or without the score of an additional, third expert. Both methods could further use a lottery to decide on applications of similar quality close to the funding threshold. The same funding rate was applied, and the agreement between the two methods analysed. Setting: Swiss National Science Foundation (SNSF). Participants: Postdoc.Mobility panel reviewers and additional expert reviewers. Primary outcome measure: Per cent agreement between the simplified and official evaluation method with 95% CIs. Results: The simplified procedure based on three reviews agreed in 80.6% (95% CI: 73.9% to 87.3%) of applicants with the official funding outcome. The agreement was 86.6% (95% CI: 80.6% to 91.8%) when using the two reviews of the panel members. The agreement between the two methods was lower for the group of applications discussed in the panel (64.2% and 73.1%, respectively), and higher for directly funded/rejected applications (range: 96.7%-100%). The lottery was used in 8 (6.0%) of 134 applications (official method), 19 (14.2%) applications (simplified, three reviewers) and 23 (17.2%) applications (simplified, two reviewers). With the simplified procedure, evaluation costs could have been halved and 31 hours of meeting time saved for the two 2019 calls. Conclusion: Agreement between the two methods was high. The simplified procedure could represent a viable evaluation method for the Postdoc.Mobility early career instrument at the SNSF.
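
The primary outcome is a simple proportion, so a per cent agreement with an approximate 95% CI can be sketched as below. The paper does not state which interval method was used; this Wald (normal-approximation) interval and the illustrative count are assumptions, though they land close to the reported two-reviewer figure of 86.6% (80.6% to 91.8%).

```python
import math

def percent_agreement(n_agree: int, n_total: int, z: float = 1.96):
    """Per cent agreement with a Wald (normal-approximation) 95% CI."""
    p = n_agree / n_total
    se = math.sqrt(p * (1 - p) / n_total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# 116 of 134 matching fellowship decisions would give ~86.6% agreement,
# close to the two-reviewer figure reported for the simplified procedure.
p, lo, hi = percent_agreement(116, 134)
print(f"agreement = {p:.1%} (95% CI: {lo:.1%} to {hi:.1%})")
```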

Affiliations
- Marco Bieri: Careers Division, Swiss National Science Foundation, Bern, Switzerland
- Katharina Roser: Careers Division, Swiss National Science Foundation, Bern, Switzerland; Department of Health Sciences and Medicine, University of Lucerne, Lucerne, Switzerland
- Rachel Heyard: Data Team, Swiss National Science Foundation, Bern, Switzerland
- Matthias Egger: Institute of Social & Preventive Medicine, University of Bern, Bern, Switzerland; Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; Research Council, Swiss National Science Foundation, Bern, Switzerland

12. Seeber M, Vlegels J, Reimink E, Marušić A, Pina DG. Does reviewing experience reduce disagreement in proposals evaluation? Insights from Marie Skłodowska-Curie and COST Actions. Research Evaluation 2021. DOI: 10.1093/reseval/rvab011.
Abstract
We have limited understanding of why reviewers tend to strongly disagree when scoring the same research proposal. Thus far, research that explored disagreement has focused on the characteristics of the proposal or the applicants, while ignoring the characteristics of the reviewers themselves. This article aims to address this gap by exploring which reviewer characteristics most affect disagreement among reviewers. We present hypotheses regarding the effect of a reviewer's level of experience in evaluating research proposals for a specific granting scheme, that is, scheme reviewing experience. We test our hypotheses by studying two of the most important research funding programmes in the European Union from 2014 to 2018: 52,488 proposals evaluated under three funding schemes of the Horizon 2020 Marie Skłodowska-Curie Actions (MSCA), and 1,939 proposals evaluated under the European Cooperation in Science and Technology (COST) Actions. We find that reviewing experience on previous calls of a specific scheme significantly reduces disagreement, while experience of evaluating proposals in other schemes (general reviewing experience) has no effect. Moreover, in MSCA Individual Fellowships, we observe an inverted-U relationship between the number of proposals a reviewer evaluates in a given call and disagreement, with a remarkable decrease in disagreement above 13 evaluated proposals. Our results indicate that reviewing experience in a specific scheme improves reliability, curbing unwarranted disagreement by fine-tuning reviewers' evaluation.

Affiliations
- Marco Seeber: Department of Political Science and Management, University of Agder, Kristiansand, Norway
- Jef Vlegels: Department of Sociology, Ghent University, Ghent, Belgium
- Elwin Reimink: European Cooperation in Science and Technology (COST), Brussels, Belgium
- Ana Marušić: Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
- David G Pina: Research Executive Agency, European Commission, Brussels, Belgium

13. Pina DG, Buljan I, Hren D, Marušić A. A retrospective analysis of the peer review of more than 75,000 Marie Curie proposals between 2007 and 2018. eLife 2021; 10:e59338. PMID: 33439120; PMCID: PMC7806263; DOI: 10.7554/eLife.59338.
Abstract
Most funding agencies rely on peer review to evaluate grant applications and proposals, but research into the use of this process by funding agencies has been limited. Here we explore whether two changes to the organization of peer review for proposals submitted to various funding actions by the European Union have an influence on the outcome of the peer review process. Based on an analysis of more than 75,000 applications to three actions of the Marie Curie programme over a period of 12 years, we find that the changes – a reduction in the number of evaluation criteria used by reviewers and a move from in-person to virtual meetings – had little impact on the outcome of the peer review process. Our results indicate that other factors, such as the type of grant or area of research, have a larger impact on the outcome.

Affiliations
- David G Pina: Research Executive Agency, European Commission, Brussels, Belgium
- Ivan Buljan: Department for Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
- Darko Hren: Department of Psychology, University of Split School of Humanities and Social Sciences, Split, Croatia
- Ana Marušić: Department for Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia

14. Horbach SPJM. No time for that now! Qualitative changes in manuscript peer review during the Covid-19 pandemic. Research Evaluation 2021. PMCID: PMC7928627; DOI: 10.1093/reseval/rvaa037.
Abstract
The global Covid-19 pandemic has had a considerable impact on the scientific enterprise, including scholarly publication and peer-review practices. Several studies have assessed these impacts, showing among others that medical journals have strongly accelerated their review processes for Covid-19-related content. This has raised questions and concerns regarding the quality of the review process and the standards to which manuscripts are held for publication. To address these questions, this study sets out to assess qualitative differences in review reports and editorial decision letters for Covid-19-related articles, articles not related to Covid-19 published during the 2020 pandemic, and articles published before the pandemic. It employs the open peer-review model at the British Medical Journal and eLife to study the content of review reports, editorial decisions, author responses, and open reader comments. It finds no clear differences between the review processes of articles not related to Covid-19 published during or before the pandemic. However, it does find notable differences between Covid-19-related and non-Covid-19-related articles, including fewer requests for additional experiments, more cooperative comments, and different suggestions to address overly strong claims. In general, the findings suggest that both reviewers and journal editors implicitly and explicitly use different quality criteria to assess Covid-19-related manuscripts, hence transforming science's main evaluation mechanism for the underlying studies and potentially affecting their public dissemination.

Affiliations
- Serge P J M Horbach: Danish Centre for Studies in Research and Research Policy, Department of Political Science, Aarhus University, Bartholins Allé 7, 8000 Aarhus C, Denmark; Centre for Science and Technology Studies (CWTS), Faculty of Social Sciences, Leiden University, Wassenaarseweg 62A, 2333 AL Leiden, The Netherlands

15.
Abstract
Metrics on scientific publications and their citations are easily accessible and are often referred to in assessments of research and researchers. This paper addresses whether metrics are considered a legitimate and integral part of such assessments. Based on an extensive questionnaire survey in three countries, the opinions of researchers are analysed. We provide comparisons across academic fields (cardiology, economics, and physics) and contexts for assessing research (identifying the best research in their field, assessing grant proposals and assessing candidates for positions). A minority of the researchers responding to the survey reported that metrics were reasons for considering something to be the best research. Still, a large majority in all the studied fields indicated that metrics were important or partly important in their review of grant proposals and assessments of candidates for academic positions. In these contexts, the citation impact of the publications and, particularly, the number of publications were emphasized. These findings hold across all fields analysed; still, the economists relied more on productivity measures than the cardiologists and the physicists. Moreover, reviewers with high scores on bibliometric indicators seemed to adhere to metrics in their assessments more frequently than other reviewers. Hence, when planning and using peer review, one should be aware that reviewers—in particular reviewers who score high on metrics—find metrics to be a good proxy for the future success of projects and candidates, and rely on metrics in their evaluation procedures despite the concerns in scientific communities about the use and misuse of publication metrics.

Affiliations
- Liv Langfeldt: Nordic Institute for Studies in Innovation, Research and Education (NIFU), P.O. Box 2815 Tøyen, N-0608 Oslo, Norway
- Ingvild Reymert: Nordic Institute for Studies in Innovation, Research and Education (NIFU), P.O. Box 2815 Tøyen, N-0608 Oslo, Norway
- Dag W Aksnes: Nordic Institute for Studies in Innovation, Research and Education (NIFU), P.O. Box 2815 Tøyen, N-0608 Oslo, Norway

16. Madsen EB, Aagaard K. Concentration of Danish research funding on individual researchers and research topics: Patterns and potential drivers. Quantitative Science Studies 2020. DOI: 10.1162/qss_a_00077.
Abstract
The degree of concentration in research funding has long been a principal matter of contention in science policy. Strong concentration has been seen as a tool for optimizing and focusing research investments but also as a damaging path towards hypercompetition, diminished diversity, and conservative topic selection. While several studies have documented funding concentration linked to individual funding organizations, few have looked at funding concentration from a systemic perspective. In this article, we examine nearly 20,000 competitive grants allocated by 15 major Danish research funders. Our results show a strongly skewed allocation of funding towards a small elite of individual researchers, and towards a select group of research areas and topics. We discuss potential drivers and highlight that funding concentration likely results from a complex interplay between funders’ overlapping priorities, excellence-dominated evaluation criteria, and lack of coordination between both public and private research funding bodies.

Affiliations
- Emil Bargmann Madsen: Danish Centre for Studies in Research and Research Policy, Department of Political Science, Aarhus University, Bartholins Allé 7, 8000 Aarhus C, Denmark
- Kaare Aagaard: Danish Centre for Studies in Research and Research Policy, Department of Political Science, Aarhus University, Bartholins Allé 7, 8000 Aarhus C, Denmark

17. Gallo SA, Schmaling KB, Thompson LA, Glisson SR. Grant reviewer perceptions of the quality, effectiveness, and influence of panel discussion. Res Integr Peer Rev 2020; 5:7. PMID: 32467777; PMCID: PMC7229595; DOI: 10.1186/s41073-020-00093-0.
Abstract
Background: Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to utilize a set of expertise and perspectives in making funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated. Methods: Here, we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality, effectiveness, and influence of panel discussion from their last peer review experience. Results: Reviewers indicated that panel discussions were viewed favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. While respondents strongly acknowledged the importance of the chair in ensuring appropriate facilitation of the discussion to influence scoring and to limit the influence of potential sources of bias from the discussion on scoring, nearly a third of respondents did not find the chair of their most recent panel to have performed these roles effectively. Conclusions: It is likely that improving chair training in the management of discussion as well as creating review procedures that are informed by the science of leadership and team communication would improve review processes and proposal review reliability.

Affiliations
- Stephen A Gallo: Scientific Peer Advisory and Review Services, American Institute of Biological Sciences, Herndon, VA, USA
- Lisa A Thompson: Scientific Peer Advisory and Review Services, American Institute of Biological Sciences, Herndon, VA, USA
- Scott R Glisson: Scientific Peer Advisory and Review Services, American Institute of Biological Sciences, Herndon, VA, USA

19. How do journals of different rank instruct peer reviewers? Reviewer guidelines in the field of management. Scientometrics 2020. DOI: 10.1007/s11192-019-03343-1.

20. Bedessem B. Should we fund research randomly? An epistemological criticism of the lottery model as an alternative to peer review for the funding of science. Research Evaluation 2019. DOI: 10.1093/reseval/rvz034.
Abstract
The way research is, and should be, funded by the public sphere is the subject of renewed interest for sociology, economics, management sciences, and more recently, for the philosophy of science. In this contribution, I propose a qualitative, epistemological criticism of the funding by lottery model, which is advocated by a growing number of scholars as an alternative to peer review. This lottery scheme draws on the lack of efficiency and of robustness of the peer-review-based evaluation to argue that the majority of public resources for basic science should be allocated randomly. I first differentiate between two distinct arguments used to defend this alternative funding scheme based on considerations about the logic of scientific research. To assess their epistemological limits, I then present and develop a conceptual frame, grounded on the notion of ‘system of practice’, which can be used to understand what precisely it means, for a research project, to be interesting or significant. I use this epistemological analysis to show that the lottery model is not theoretically optimal, since it underestimates the integration of all scientific projects in densely interconnected systems of conceptual, experimental, or technical practices which confer their proper interest to them. I also apply these arguments in order to criticize the classical peer-review process. I finally suggest, as a discussion, that some recently proposed models that bring to the fore a principle of decentralization of the evaluation and selection process may constitute a better alternative, if the practical conditions of their implementation are adequately settled.

Affiliations
- Baptiste Bedessem: Laboratoire IRPHIL, Université Lyon 3 (Faculté de Philosophie), 18 rue Chevreul, 69007 Lyon, France

21. Street C, Ward KW. Cognitive Bias in the Peer Review Process. The DATA BASE for Advances in Information Systems 2019. DOI: 10.1145/3371041.3371046.
Abstract
In a recent critique of reviewers, Ralph (2016) stated that "Peer review is prejudiced, capricious, inefficient, ineffective and generally unscientific" (p. 274). Our research proposes that one way the peer review process could appear flawed is if those involved had different beliefs about what was important in evaluating research. We found evidence for a cognitive bias where respondents to a survey asking about the importance of particular validity and reliability method practices gave different answers depending on whether they were asked to answer the survey as a researcher or as a reviewer. Because researchers have higher motivation to publish research than reviewers do to review research, we theorize that motivational differences between researchers and reviewers leads to this bias and contributes to the perception that the review process is flawed. We discuss the implications of our findings for improving the peer review process in MIS.

24. Smaldino PE, Turner MA, Contreras Kallens PA. Correction to 'Open science and modified funding lotteries can impede the natural selection of bad science'. Royal Society Open Science 2019; 6:191249. PMID: 31543978; PMCID: PMC6731693; DOI: 10.1098/rsos.191249.
Abstract
[This corrects the article DOI: 10.1098/rsos.190194.].

25. Smaldino PE, Turner MA, Contreras Kallens PA. Open science and modified funding lotteries can impede the natural selection of bad science. Royal Society Open Science 2019; 6:190194. PMID: 31417725; PMCID: PMC6689639; DOI: 10.1098/rsos.190194.
Abstract
Assessing scientists using exploitable metrics can lead to the degradation of research methods even without any strategic behaviour on the part of individuals, via 'the natural selection of bad science.' Institutional incentives to maximize metrics like publication quantity and impact drive this dynamic. Removing these incentives is necessary, but institutional change is slow. However, recent developments suggest possible solutions with more rapid onsets. These include what we call open science improvements, which can reduce publication bias and improve the efficacy of peer review. In addition, there have been increasing calls for funders to move away from prestige- or innovation-based approaches in favour of lotteries. We investigated whether such changes are likely to improve the reproducibility of science even in the presence of persistent incentives for publication quantity through computational modelling. We found that modified lotteries, which allocate funding randomly among proposals that pass a threshold for methodological rigour, effectively reduce the rate of false discoveries, particularly when paired with open science improvements that increase the publication of negative results and improve the quality of peer review. In the absence of funding that targets rigour, open science improvements can still reduce false discoveries in the published literature but are less likely to improve the overall culture of research practices that underlie those publications.
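
The modified lottery the model examines (fund at random among proposals clearing a rigour threshold) is straightforward to state in code. The threshold, number of fundable slots, and rigour scores below are illustrative assumptions, not parameters taken from the paper's computational model.

```python
import random

def modified_lottery(proposals, rigour_threshold, n_fundable, seed=None):
    """Fund a random subset of the proposals that pass a rigour threshold."""
    rng = random.Random(seed)
    eligible = [p for p, rigour in proposals.items() if rigour >= rigour_threshold]
    winners = rng.sample(eligible, min(n_fundable, len(eligible)))
    return sorted(winners)

# Hypothetical methodological-rigour scores in [0, 1].
proposals = {"P1": 0.91, "P2": 0.42, "P3": 0.77, "P4": 0.88, "P5": 0.69}
print(modified_lottery(proposals, rigour_threshold=0.7, n_fundable=2, seed=1))
```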

Affiliations
- Paul E. Smaldino: Department of Cognitive and Information Sciences, University of California, Merced, CA, USA
- Matthew A. Turner: Department of Cognitive and Information Sciences, University of California, Merced, CA, USA

26. Correlations between submission and acceptance of papers in peer review journals. Scientometrics 2019. DOI: 10.1007/s11192-019-03026-x.

27. Kirman C, Simon T, Hays S. Science peer review for the 21st century: Assessing scientific consensus for decision-making while managing conflict of interests, reviewer and process bias. Regul Toxicol Pharmacol 2019; 103:73-85. DOI: 10.1016/j.yrtph.2019.01.003.

28. Who Is (Likely) Peer-Reviewing Your Papers? A Partial Insight into the World’s Top Reviewers. Publications 2019. DOI: 10.3390/publications7010015.
Abstract
Scientific publishing is experiencing unprecedented growth in terms of outputs across all fields. Inevitably, this creates pressure throughout the system on a number of entities. One key element is represented by peer reviewers, whose demand increases at an even higher pace than that of publications, since more than one reviewer per paper is needed and not all papers that get reviewed get published. The relatively recent Publons platform allows for unprecedented insight into the usual ‘blindness’ of the peer-review system. At a time when the world’s top peer reviewers are announced and celebrated, we have taken a step back in order to attempt a partial mapping of their profiles to identify trends and key dimensions of this community of ‘super-reviewers’. This commentary necessarily focuses on a limited sample due to manual processing of data, which needs to be done within a single day for the type of information we seek. In investigating the numbers of performed reviews vs. academic citations, our analysis suggests that most reviews are carried out by relatively inexperienced academics. For some of these early-career academics, peer-reviewing seems to be the only activity they engage with, given the high number of reviews performed (e.g., three manuscripts per day) and the lack of outputs (zero academic papers and citations in some cases). Additionally, the world’s top researchers (i.e., highly cited researchers) are understandably busy with research activities and therefore far less active in peer-reviewing. Lastly, there seems to be an uneven distribution at a national level between scientific outputs (e.g., publications) and reviews performed. Our analysis contributes to the ongoing global discourse on the health of scientific peer review, and it raises some important questions for further discussion.

29. Jacob MA. Under repair: A publication ethics and research record in the making. Social Studies of Science 2019; 49:77-101. PMID: 30654711; DOI: 10.1177/0306312718824663.
Abstract
Based on fieldwork in the Committee on Publication Ethics, this paper offers an analysis of the forms of doings that publication ethics in action can take during what is called the 'Forum', a space where allegations of dubious research conduct get aired and debated between editors and publishers. This article examines recurring motifs within the review of publication practices whose ethics are called into question. These motifs include: the shaping of publication ethics as an expertise that can be standardized across locations and disciplines, the separation of the research record from the relations that produce it, and the divisibility of the scientific paper. Together, these institute an ethics of repair at the centre of the curative enterprise of the Committee on Publication Ethics. Under the language of correcting the literature, the members are working out, along with authors, what the research record should be and, inevitably, what it is. In turn, this article elicits new analytical objects that re-describe publication ethics as a form of expertise, beyond (and despite) the rehearsed axioms of this now well-established professional field.

30. Teplitskiy M, Acuna D, Elamrani-Raoult A, Körding K, Evans J. The sociology of scientific validity: How professional networks shape judgement in peer review. Research Policy 2018. DOI: 10.1016/j.respol.2018.06.014.

31. Hoffman SJ, Ottersen T, Tejpar A, Baral P, Fafard P. Towards a Systematic Understanding of How to Institutionally Design Scientific Advisory Committees: A Conceptual Framework and Introduction to a Special Journal Issue. Global Challenges 2018; 2:1800020. PMID: 30345073; PMCID: PMC6175373; DOI: 10.1002/gch2.201800020.
Abstract
Scientifically-derived insights are often held as requirements for defensible policy choices. Scientific advisory committees (SACs) figure prominently in this landscape, often with the promise of bringing scientific evidence to decision-makers. Yet, there is sparse and scattered knowledge about what institutional features influence the operations and effectiveness of SACs, how these design choices influence subsequent decision-making, and the lessons learned from their application. The consequences of these knowledge gaps are that SACs may not be functioning as effectively as possible. The articles in this special journal issue of Global Challenges bring together insights from experts across several disciplines, all of whom are committed to improving SACs' effectiveness worldwide. The aim of the special issue is to inform future SAC design in order to help maximize the application of high-quality scientific research for the decisions of policymakers, practitioners, and the public alike. In addition to providing an overview of the special issue and a summary of each article within it, this introductory essay presents a definition of SACs and a conceptual framework for how different institutional features and contextual factors affect three proximal determinants of SACs' effectiveness, namely the quality of advice offered, the relevance of that advice, and its legitimacy.

Affiliations
- Steven J. Hoffman: Global Strategy Lab, York University, University of Ottawa, Ottawa, Ontario, Canada; Dahdaleh Institute for Global Health Research, Faculty of Health and Osgoode Hall Law School, York University, Toronto, Ontario, Canada
- Trygve Ottersen: Global Strategy Lab, York University, University of Ottawa, Ottawa, Ontario, Canada; Division for Health Services, Norwegian Institute of Public Health, Oslo, Norway
- Ali Tejpar: Global Strategy Lab, York University, University of Ottawa, Ottawa, Ontario, Canada
- Prativa Baral: Global Strategy Lab, York University, University of Ottawa, Ottawa, Ontario, Canada; Dahdaleh Institute for Global Health Research, Faculty of Health and Osgoode Hall Law School, York University, Toronto, Ontario, Canada
- Patrick Fafard: Global Strategy Lab, York University, University of Ottawa, Ottawa, Ontario, Canada; Graduate School of Public & International Affairs, University of Ottawa, Ottawa, Ontario, Canada

32. Gallo SA, Glisson SR. External Tests of Peer Review Validity Via Impact Measures. Front Res Metr Anal 2018. DOI: 10.3389/frma.2018.00022.

33. Sikdar S, Tehria P, Marsili M, Ganguly N, Mukherjee A. On the effectiveness of the scientific peer-review system: a case study of the Journal of High Energy Physics. International Journal on Digital Libraries 2018. DOI: 10.1007/s00799-018-0247-9.

34.
Abstract
A classic thesis is that scientific achievement exhibits a "Matthew effect": Scientists who have previously been successful are more likely to succeed again, producing increasing distinction. We investigate to what extent the Matthew effect drives the allocation of research funds. To this end, we assembled a dataset containing all review scores and funding decisions of grant proposals submitted by recent PhDs in a €2 billion granting program. Analyses of review scores reveal that early funding success introduces a growing rift, with winners just above the funding threshold accumulating more than twice as much research funding (€180,000) during the following eight years as nonwinners just below it. We find no evidence that winners' improved funding chances in subsequent competitions are due to achievements enabled by the preceding grant, which suggests that early funding itself is an asset for acquiring later funding. Surprisingly, however, the emergent funding gap is partly created by applicants, who, after failing to win one grant, apply for another grant less often.

35. Low agreement among reviewers evaluating the same NIH grant applications. Proc Natl Acad Sci U S A 2018; 115:2952-2957. PMID: 29507248; DOI: 10.1073/pnas.1714379115.
Abstract
Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers' evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers' ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers "translated" a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.
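
A standard way to quantify the kind of (dis)agreement described here is a one-way intraclass correlation across applications. The sketch below computes ICC(1) from first principles on invented ratings; it is not the paper's analysis or data, just an illustration of how such agreement can be measured.

```python
import numpy as np

def icc1(ratings: np.ndarray) -> float:
    """One-way ICC(1): rows = applications, columns = ratings per application."""
    n, k = ratings.shape
    target_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # Between-application and within-application mean squares (one-way ANOVA).
    msb = k * ((target_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical 1-9 ratings: 5 applications, 3 reviewers each (not the study data).
scores = np.array([
    [2, 5, 7],
    [3, 6, 4],
    [5, 2, 8],
    [4, 7, 3],
    [6, 3, 5],
])
print(f"ICC(1) = {icc1(scores):.2f}")  # at or below zero => no reviewer agreement
```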

37. Coveney J, Herbert DL, Hill K, Mow KE, Graves N, Barnett A. ‘Are you siding with a personality or the grant proposal?’: observations on how peer review panels function. Res Integr Peer Rev 2017; 2:19. PMID: 29451548; PMCID: PMC5803633; DOI: 10.1186/s41073-017-0043-x.
Abstract
Background: In Australia, the peer review process for competitive funding is usually conducted by a peer review group in conjunction with prior assessment from external assessors. This process is quite mysterious to those outside it. The purpose of this research was to throw light on grant review panels (sometimes called the 'black box') through an examination of the impact of panel procedures, panel composition and panel dynamics on the decision-making in the grant review process. A further purpose was to compare experience of a simplified review process with more conventional processes used in assessing grant proposals in Australia. Methods: This project was one aspect of a larger study into the costs and benefits of a simplified peer review process. The Queensland University of Technology (QUT)-simplified process was compared with the National Health and Medical Research Council's (NHMRC) more complex process. Grant review panellists involved in both processes were interviewed about their experience of the decision-making process that assesses the excellence of an application. All interviews were recorded and transcribed. Each transcription was de-identified and returned to the respondent for review. Final transcripts were read repeatedly and coded, and similar codes were amalgamated into categories that were used to build themes. Final themes were shared with the research team for feedback. Results: Two major themes arose from the research: (1) assessing grant proposals and (2) factors influencing the fairness, integrity and objectivity of review. Issues such as the quality of writing in a grant proposal, comparison of the two review methods, the purpose and use of the rebuttal, assessing the financial value of funded projects, the importance of the experience of the panel membership and the role of track record and the impact of group dynamics on the review process were all discussed. The research also examined the influence of research culture on decision-making in grant review panels. One of the aims of this study was to compare a simplified review process with more conventional processes. Generally, participants were supportive of the simplified process. Conclusions: Transparency in the grant review process will result in better appreciation of the outcome. Despite the provision of clear guidelines for peer review, reviewing processes are likely to be subjective to the extent that different reviewers apply different rules. The peer review process will come under more scrutiny as funding for research becomes even more competitive. There is justification for further research on the process, especially of a kind that taps more deeply into the 'black box' of peer review.
Collapse
Affiliation(s)
- John Coveney
- College of Nursing and Health Sciences, Flinders University, Adelaide, Australia
| | - Danielle L Herbert
- School of Public Health, Social Work & Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia
| | - Kathy Hill
- School of Nursing and Midwifery, University of South Australia, Adelaide, Australia
| | | | - Nicholas Graves
- School of Public Health, Social Work & Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia
| | - Adrian Barnett
- School of Public Health, Social Work & Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia
| |
Collapse
|
38
|
Jirschitzka J, Oeberst A, Göllner R, Cress U. Inter-rater reliability and validity of peer reviews in an interdisciplinary field. Scientometrics 2017. [DOI: 10.1007/s11192-017-2516-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
39
|
Guthrie S, Ghiga I, Wooding S. What do we know about grant peer review in the health sciences? F1000Res 2017; 6:1335.
Abstract
BACKGROUND Peer review decisions award >95% of academic medical research funding, so it is crucial to understand how well they work and if they could be improved. METHODS This paper summarises evidence from 105 relevant papers identified through a literature search on the effectiveness and burden of peer review for grant funding. RESULTS There is a remarkable paucity of evidence about the overall efficiency of peer review for funding allocation, given its centrality to the modern system of science. From the available evidence, we can identify some conclusions around the effectiveness and burden of peer review. The strongest evidence around effectiveness indicates a bias against innovative research. There is also fairly clear evidence that peer review is, at best, a weak predictor of future research performance, and that ratings vary considerably between reviewers. There is some evidence of age bias and cronyism. Good evidence shows that the burden of peer review is high and that around 75% of it falls on applicants. By contrast, many of the efforts to reduce burden are focused on funders and reviewers/panel members. CONCLUSIONS We suggest funders should acknowledge, assess and analyse the uncertainty around peer review, even using reviewers' uncertainty as an input to funding decisions. Funders could consider a lottery element in some parts of their funding allocation process, to reduce both burden and bias, and allow better evaluation of decision processes. Alternatively, the distribution of scores from different reviewers could be better utilised as a possible way to identify novel, innovative research. Above all, there is a need for open, transparent experimentation and evaluation of different ways to fund research. This also requires more openness across the wider scientific community to support such investigations, acknowledging the lack of evidence about the primacy of the current system and the impossibility of achieving perfection.
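The suggestion that the distribution of reviewer scores could be used to surface novel research can be made concrete with a small sketch; the scores below are hypothetical and the ranking rule is an arbitrary illustration, not a recommendation from the paper.

    import numpy as np

    # Hypothetical reviewer scores (rows: proposals, columns: reviewers).
    scores = np.array([
        [7, 7, 8],   # consensus: solid but unsurprising
        [9, 3, 8],   # divisive: a candidate for a novelty-oriented second look
        [5, 6, 5],
        [8, 8, 9],
    ])
    means = scores.mean(axis=1)
    spread = scores.std(axis=1, ddof=1)
    # Rank by dispersion rather than by mean alone, surfacing divisive
    # proposals for separate consideration instead of letting averaging
    # bury them.
    order = np.argsort(spread)[::-1]
    for i in order:
        print(f"proposal {i}: mean={means[i]:.2f}, spread={spread[i]:.2f}")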
Collapse
Affiliation(s)
- Susan Guthrie
- RAND Europe, Westbrook Centre, Milton Road, Cambridge, UK
| | - Ioana Ghiga
- RAND Europe, Westbrook Centre, Milton Road, Cambridge, UK
| | - Steven Wooding
- Centre for Science and Policy, University of Cambridge, Cambridge, UK
| |
Collapse
|
40
|
Guthrie S, Ghiga I, Wooding S. What do we know about grant peer review in the health sciences? F1000Res 2017; 6:1335.
Abstract
Background: Peer review decisions award an estimated >95% of academic medical research funding, so it is crucial to understand how well they work and if they could be improved. Methods: This paper summarises evidence from 105 papers identified through a literature search on the effectiveness and burden of peer review for grant funding. Results: There is a remarkable paucity of evidence about the efficiency of peer review for funding allocation, given its centrality to the modern system of science. From the available evidence, we can identify some conclusions around the effectiveness and burden of peer review. The strongest evidence around effectiveness indicates a bias against innovative research. There is also fairly clear evidence that peer review is, at best, a weak predictor of future research performance, and that ratings vary considerably between reviewers. There is some evidence of age bias and cronyism. Good evidence shows that the burden of peer review is high and that around 75% of it falls on applicants. By contrast, many of the efforts to reduce burden are focused on funders and reviewers/panel members. Conclusions: We suggest funders should acknowledge, assess and analyse the uncertainty around peer review, even using reviewers' uncertainty as an input to funding decisions. Funders could consider a lottery element in some parts of their funding allocation process, to reduce both burden and bias, and allow better evaluation of decision processes. Alternatively, the distribution of scores from different reviewers could be better utilised as a possible way to identify novel, innovative research. Above all, there is a need for open, transparent experimentation and evaluation of different ways to fund research. This also requires more openness across the wider scientific community to support such investigations, acknowledging the lack of evidence about the primacy of the current system and the impossibility of achieving perfection.
Collapse
Affiliation(s)
- Susan Guthrie
- RAND Europe, Westbrook Centre, Milton Road, Cambridge, UK
| | - Ioana Ghiga
- RAND Europe, Westbrook Centre, Milton Road, Cambridge, UK
| | - Steven Wooding
- Centre for Science and Policy, University of Cambridge, Cambridge, UK
| |
Collapse
|
41
|
Ortega JL. Are peer-review activities related to reviewer bibliometric performance? A scientometric analysis of Publons. Scientometrics 2017. [DOI: 10.1007/s11192-017-2399-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
42
|
Pier EL, Raclaw J, Kaatz A, Brauer M, Carnes M, Nathan MJ, Ford CE. 'Your comments are meaner than your score': score calibration talk influences intra- and inter-panel variability during scientific grant peer review. RESEARCH EVALUATION 2017; 26:1-14. [PMID: 28458466 DOI: 10.1093/reseval/rvw025] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
In scientific grant peer review, groups of expert scientists meet to engage in the collaborative decision-making task of evaluating and scoring grant applications. Prior research on grant peer review has established that inter-reviewer reliability is typically poor. In the current study, experienced reviewers for the National Institutes of Health (NIH) were recruited to participate in one of four constructed peer review panel meetings. Each panel discussed and scored the same pool of recently reviewed NIH grant applications. We examined the degree of intra-panel variability in panels' scores of the applications before versus after collaborative discussion, and the degree of inter-panel variability. We also analyzed videotapes of reviewers' interactions for instances of one particular form of discourse, Score Calibration Talk, as one factor influencing the variability we observe. Results suggest that although reviewers within a single panel agree more following collaborative discussion, different panels agree less after discussion, and Score Calibration Talk plays a pivotal role in scoring variability during peer review. We discuss implications of this variability for the scientific peer review process.
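The two variability measures at issue here can be sketched directly; the scores below are synthetic stand-ins (not the study's data), and the "after discussion" construction is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic scores: 4 panels x 8 reviewers x 25 applications, before and
    # after collaborative discussion.
    before = rng.normal(5.0, 1.5, size=(4, 8, 25))
    after = 0.5 * before + 0.5 * rng.normal(5.0, 1.5, size=(4, 8, 25))

    def intra_panel_sd(scores):
        # Spread across reviewers within each panel, averaged over panels
        # and applications: does discussion pull a panel together?
        return scores.std(axis=1, ddof=1).mean()

    def inter_panel_sd(scores):
        # Spread of panel-level mean scores for the same applications:
        # do different panels drift apart from one another?
        return scores.mean(axis=1).std(axis=0, ddof=1).mean()

    for label, s in [("before", before), ("after", after)]:
        print(label, round(float(intra_panel_sd(s)), 2),
              round(float(inter_panel_sd(s)), 2))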
Collapse
Affiliation(s)
- Elizabeth L Pier
- Center for Women's Health Research, University of Wisconsin-Madison, 700 Regent Street, Ste. 301, Madison, WI 53715, USA
- Department of Educational Psychology, University of Wisconsin-Madison, 1025 West Johnson Street, Madison, WI 53706, USA
| | - Joshua Raclaw
- Center for Women's Health Research, University of Wisconsin-Madison, 700 Regent Street, Ste. 301, Madison, WI 53715, USA
- Department of English, West Chester University, 700 South High Street, West Chester, PA 19383, USA
| | - Anna Kaatz
- Center for Women's Health Research, University of Wisconsin-Madison, 700 Regent Street, Ste. 301, Madison, WI 53715, USA
| | - Markus Brauer
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706, USA
| | - Molly Carnes
- Center for Women's Health Research, University of Wisconsin-Madison, 700 Regent Street, Ste. 301, Madison, WI 53715, USA
| | - Mitchell J Nathan
- Department of Educational Psychology, University of Wisconsin-Madison, 1025 West Johnson Street, Madison, WI 53706, USA
| | - Cecilia E Ford
- Center for Women's Health Research, University of Wisconsin-Madison, 700 Regent Street, Ste. 301, Madison, WI 53715, USA
- Department of English, University of Wisconsin-Madison, 600 North Park Street, Madison, WI 53706, USA
| |
Collapse
|
43
|
Roumbanis L. Academic judgments under uncertainty: A study of collective anchoring effects in Swedish Research Council panel groups. SOCIAL STUDIES OF SCIENCE 2017; 47:95-116. [PMID: 28195028 DOI: 10.1177/0306312716659789] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
This article focuses on anchoring effects in the process of peer reviewing research proposals. Anchoring effects are commonly seen as the result of flaws in human judgment, as cognitive biases that stem from specific heuristics that guide people when they involve their intuition in solving a problem. Here, the cognitive biases will be analyzed from a sociological point of view, as interactional and aggregated phenomena. The article is based on direct observations of ten panel groups evaluating research proposals in the natural and engineering sciences for the Swedish Research Council. The analysis suggests that collective anchoring effects emerge as a result of the combination of the evaluation techniques that are being used (grading scales and average ranking) and the efforts of the evaluators to reach consensus in the face of disagreements and uncertainty in the group. What many commentators and evaluators have interpreted as an element of chance in the peer review process may also be understood as partly a result of the dynamic aspects of collective anchoring effects.
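The collective anchoring mechanism described here can be illustrated with a toy simulation (the study itself is observational, so this is not the author's model): each evaluator's stated grade is pulled toward the running panel average, so early grades and speaking order shape the final score.

    import numpy as np

    rng = np.random.default_rng(7)

    def panel_average(private_grades, anchor_weight=0.5):
        # Each evaluator states a grade pulled toward the running panel
        # average, so early speakers set a collective anchor.
        stated = [float(private_grades[0])]
        for g in private_grades[1:]:
            anchor = sum(stated) / len(stated)
            stated.append((1 - anchor_weight) * g + anchor_weight * anchor)
        return sum(stated) / len(stated)

    private = rng.normal(5, 2, size=7)        # evaluators' independent views
    print(round(private.mean(), 2), round(panel_average(private), 2))
    # The same private views in a different speaking order give a different
    # outcome: one way an "element of chance" enters consensus scoring.
    print(round(panel_average(private[::-1]), 2))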
Collapse
Affiliation(s)
- Lambros Roumbanis
- Stockholm Centre for Organizational Research (Score), Stockholm University, Stockholm, Sweden
| |
Collapse
|
44
|
Gallo SA, Sullivan JH, Glisson SR. The Influence of Peer Reviewer Expertise on the Evaluation of Research Funding Applications. PLoS One 2016; 11:e0165147. [PMID: 27768760 PMCID: PMC5074495 DOI: 10.1371/journal.pone.0165147] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2016] [Accepted: 10/09/2016] [Indexed: 11/18/2022] Open
Abstract
Although the scientific peer review process is crucial to distributing research investments, little has been reported about the decision-making processes used by reviewers. One key attribute likely to be important for decision-making is reviewer expertise. Recent data from an experimental blinded review utilizing a direct measure of expertise have shown that closer intellectual distances between applicant and reviewer lead to harsher evaluations, possibly suggesting that information is differentially sampled across subject-matter expertise levels and across information type (e.g. strengths or weaknesses). However, social and professional networks have been suggested to play a role in reviewer scoring. In an effort to test whether this result can be replicated in a real-world unblinded study utilizing self-assessed reviewer expertise, we conducted a retrospective multi-level regression analysis of 1,450 individual unblinded evaluations of 725 biomedical research funding applications by 1,044 reviewers. Despite the large variability in the scoring data, the results are largely confirmatory of work from blinded reviews, in which a linear relationship between reviewer expertise and their evaluations was observed: reviewers with higher levels of self-assessed expertise tended to be harsher in their evaluations. However, we also found that reviewer and applicant seniority could influence this relationship, suggesting social networks could have subtle influences on reviewer scoring. Overall, these results highlight the need to explore how reviewers utilize their expertise to gather and weight information from the application in making their evaluations.
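A minimal sketch of the kind of multi-level specification described here, fitted on synthetic data with a built-in harshness effect; the column names, effect sizes, and design are illustrative assumptions, not the authors' dataset or model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_apps, n_revs = 200, 2
    df = pd.DataFrame({
        "application_id": np.repeat(np.arange(n_apps), n_revs),
        "expertise": rng.integers(1, 6, size=n_apps * n_revs),  # self-assessed, 1-5
    })
    quality = rng.normal(0.0, 1.0, size=n_apps)   # latent application quality
    df["score"] = (5 + quality[df["application_id"]]
                   - 0.3 * df["expertise"]        # synthetic harshness effect
                   + rng.normal(0.0, 1.0, len(df)))

    # Evaluations are nested within applications: a random intercept per
    # application separates application quality from reviewer behaviour,
    # and the fixed effect of expertise tests for harsher expert scoring.
    fit = smf.mixedlm("score ~ expertise", df, groups=df["application_id"]).fit()
    print(fit.params["expertise"])   # recovers a negative expertise slope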
Collapse
Affiliation(s)
- Stephen A. Gallo
- Scientific Peer Advisory and Review Services Division, American Institute of Biological Sciences, Reston, Virginia, United States of America
| | - Joanne H. Sullivan
- Scientific Peer Advisory and Review Services Division, American Institute of Biological Sciences, Reston, Virginia, United States of America
| | - Scott R. Glisson
- Scientific Peer Advisory and Review Services Division, American Institute of Biological Sciences, Reston, Virginia, United States of America
| |
Collapse
|
45
|
Boudreau KJ, Guinan EC, Lakhani KR, Riedl C. Looking Across and Looking Beyond the Knowledge Frontier: Intellectual Distance, Novelty, and Resource Allocation in Science. MANAGEMENT SCIENCE 2016; 62:2765-2783. [PMID: 27746512 PMCID: PMC5062254 DOI: 10.1287/mnsc.2015.2285] [Citation(s) in RCA: 101] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Selecting among alternative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the "intellectual distance" between the knowledge embodied in research proposals and an evaluator's own expertise systematically relates to the evaluations given. To estimate relationships, we designed and executed a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator-proposal pairs. We find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel. The patterns are consistent with biases associated with boundedly rational evaluation of new ideas. The patterns are inconsistent with intellectual distance simply contributing "noise" or being associated with private interests of evaluators. We discuss implications for policy, managerial intervention, and allocation of resources in the ongoing accumulation of scientific knowledge.
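The estimation strategy described here can be sketched with a fixed-effects regression on synthetic evaluator-proposal pairs; variable names and magnitudes are illustrative assumptions, with the reported pattern (harsher scores for intellectually close and for highly novel work) built into the fake data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 2130                                    # evaluator-proposal pairs
    df = pd.DataFrame({
        "evaluator_id": rng.integers(0, 140, size=n),
        "distance": rng.random(n),              # 0 = squarely in own expertise
        "novelty": rng.random(n),
    })
    df["score"] = (5 + 0.8 * df["distance"] - 0.9 * df["novelty"]
                   + rng.normal(0.0, 1.0, n))

    # Evaluator fixed effects absorb individual harshness; randomized
    # assignment of proposals to evaluators is what licenses a causal
    # reading of the distance and novelty coefficients.
    fit = smf.ols("score ~ distance + novelty + C(evaluator_id)", data=df).fit()
    print(fit.params[["distance", "novelty"]])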
Collapse
Affiliation(s)
- Kevin J. Boudreau
- London Business School, London NW1 4SA, United Kingdom; and Harvard Business School, Boston, Massachusetts 02163
| | - Eva C. Guinan
- Dana-Farber/Harvard Cancer Center, Boston, Massachusetts 02215
| | | | - Christoph Riedl
- D’Amore-McKim School of Business, Northeastern University, Boston, Massachusetts 02115
| |
Collapse
|
46
|
Bollen J, Crandall D, Junk D, Ding Y, Börner K. An efficient system to fund science: from proposal review to peer-to-peer distributions. Scientometrics 2016; 110:521-528. [PMID: 29795961 DOI: 10.1007/s11192-016-2110-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
This paper presents a novel model of science funding that exploits the wisdom of the scientific crowd. Each researcher receives an equal, unconditional part of all available science funding on a yearly basis, but is required to individually donate to other scientists a given fraction of all they receive. Science funding thus moves from one scientist to the next in such a way that scientists who receive many donations must also redistribute the most. As the funding circulates through the scientific community it is mathematically expected to converge on a funding distribution favored by the entire scientific community. This is achieved without any proposal submissions or reviews. The model furthermore funds scientists instead of projects, reducing much of the overhead and bias of the present grant peer review system. Model validation using large-scale citation data and funding records over the past 20 years shows that the proposed model could yield funding distributions that are similar to those of the NSF and NIH, and that the model could potentially be fairer and more equitable. We discuss possible extensions of this approach as well as science policy implications.
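A minimal simulation of the circulation model as the abstract describes it, with random donation preferences standing in for the community's real ones; parameter values are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(0)
    n, frac, budget = 100, 0.5, 1e6  # scientists, donation fraction, yearly budget

    # Hypothetical donation preferences: row i says how scientist i splits
    # the fraction of funding they must pass on.
    W = rng.random((n, n))
    np.fill_diagonal(W, 0.0)             # no self-donations
    W /= W.sum(axis=1, keepdims=True)    # each row sums to 1

    income = np.full(n, budget / n)      # equal, unconditional base funding
    for _ in range(1000):
        # Each year: equal base slice plus incoming donations, where every
        # scientist redistributes the mandated fraction of all they receive.
        new_income = budget / n + frac * (income @ W)
        if np.allclose(new_income, income):
            break                        # converged on a stable distribution
        income = new_income

    retained = (1 - frac) * income       # what each scientist actually keeps
    print(retained.sum())                # conserves the yearly budget at the fixed point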
Collapse
Affiliation(s)
- Johan Bollen
- School of Informatics and Computing, Indiana University, Bloomington, IN, USA
- Indiana University Network Institute, Indiana University, Bloomington, IN, USA
- Center for Complex Network and Systems Research, Indiana University, Bloomington, IN, USA
| | - David Crandall
- School of Informatics and Computing, Indiana University, Bloomington, IN, USA
- Center for Complex Network and Systems Research, Indiana University, Bloomington, IN, USA
| | - Damion Junk
- School of Informatics and Computing, Indiana University, Bloomington, IN, USA
| | - Ying Ding
- School of Informatics and Computing, Indiana University, Bloomington, IN, USA
- Indiana University Network Institute, Indiana University, Bloomington, IN, USA
| | - Katy Börner
- School of Informatics and Computing, Indiana University, Bloomington, IN, USA
- Department of Information and Library Science, Indiana University, Bloomington, IN, USA
- Indiana University Network Institute, Indiana University, Bloomington, IN, USA
- Center for Complex Network and Systems Research, Indiana University, Bloomington, IN, USA
| |
Collapse
|
47
|
King J. A review of bibliometric and other science indicators and their role in research evaluation. J Inf Sci 1987; 13:261-276.
Abstract
Recent reductions in research budgets have led to the need for greater selectivity in resource allocation. Measures of past performance are still among the most promising means of deciding between competing interests. Bibliometry, the measurement of scientific publications and of their impact on the scientific community, assessed by the citations they attract, provides a portfolio of indicators that can be combined to give a useful picture of recent research activity. In this state-of-the-art review the various methodologies that have been developed are outlined in terms of their strengths, weaknesses and particular applications. The present limitations of science indicators in research evaluation are considered and some future directions for developments in techniques are suggested.
Collapse
Affiliation(s)
- Jean King
- Agricultural and Food Research Council, 160 Great Portland Street, London W1N 6DT, United Kingdom
| |
Collapse
|
48
|
Abstract
Peer-review ratings of 1,983 posters submitted for three annual conferences of a professional society were examined for evidence of bias. Hypotheses derived from the literature on the better-than-average effect were tested by analyzing 7,383 sets of ratings. Reviewers who authored posters gave lower average ratings than reviewers who did not author posters. Posters having authorship that included at least one reviewer received higher ratings than those having only nonreviewing authors. Reviewers' experience and professional role were also explored as biasing factors. The ratings were converted into z scores, and differences in reliability and acceptance decisions were examined. Implications for current peer-review practices are discussed.
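The reviewer-wise z-score conversion mentioned in this abstract can be sketched in a few lines; the ratings below are hypothetical, not the conference data.

    import pandas as pd

    # Hypothetical long-format ratings: one row per (reviewer, poster) pair.
    df = pd.DataFrame({
        "reviewer": ["r1", "r1", "r2", "r2", "r3", "r3"],
        "poster":   ["p1", "p2", "p1", "p3", "p2", "p3"],
        "rating":   [4.0, 3.0, 5.0, 4.5, 2.0, 3.5],
    })
    # Convert each reviewer's ratings to z scores so lenient and harsh
    # reviewers contribute on a comparable scale before decisions are made.
    df["z"] = df.groupby("reviewer")["rating"].transform(
        lambda r: (r - r.mean()) / r.std(ddof=0))
    print(df.groupby("poster")["z"].mean())   # bias-adjusted acceptance input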
Collapse
|
49
|
Kurtz MJ, Henneken EA. Measuring metrics - a 40-year longitudinal cross-validation of citations, downloads, and peer review in astrophysics. J Assoc Inf Sci Technol 2016. [DOI: 10.1002/asi.23689] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Michael J. Kurtz
- Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138
| | - Edwin A. Henneken
- Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138
| |
Collapse
|
50
|
Fang FC, Casadevall A. Research Funding: the Case for a Modified Lottery. mBio 2016; 7:e00422-16. [DOI: 10.1128/mBio.00422-16]
Abstract
The time-honored mechanism of allocating funds based on ranking of proposals by scientific peer review is no longer effective, because review panels cannot accurately stratify proposals to identify the most meritorious ones. Bias has a major influence on funding decisions, and the impact of reviewer bias is magnified by low funding paylines. Despite more than a decade of funding crisis, there has been no fundamental reform in the mechanism for funding research. This essay explores the idea of awarding research funds on the basis of a modified lottery in which peer review is used to identify the most meritorious proposals, from which funded applications are selected by lottery. We suggest that a modified lottery for research fund allocation would have many advantages over the current system, including reducing bias and improving grantee diversity with regard to seniority, race, and gender.
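The two-stage allocation proposed here is easily sketched; the function name, threshold, and merit scores below are hypothetical illustrations of the idea, not a prescribed implementation.

    import random

    def modified_lottery(scores, merit_threshold, n_awards, seed=None):
        """Stage 1: peer review screens proposals for merit; stage 2: a
        random draw selects among the meritorious, as in a modified lottery."""
        shortlist = [p for p, s in scores.items() if s >= merit_threshold]
        if len(shortlist) <= n_awards:
            return shortlist                        # fund every meritorious proposal
        return random.Random(seed).sample(shortlist, n_awards)

    # Hypothetical merit scores (higher is better in this sketch).
    scores = {"A": 8.1, "B": 7.9, "C": 6.0, "D": 8.5, "E": 7.7}
    print(modified_lottery(scores, merit_threshold=7.5, n_awards=2, seed=42))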
Collapse
|