1
Rastogi C, Stelmakh I, Beygelzimer A, Dauphin YN, Liang P, Wortman Vaughan J, Xue Z, Daumé III H, Pierson E, Shah NB. How do authors' perceptions of their papers compare with co-authors' perceptions and peer-review decisions? PLoS One 2024; 19:e0300710. PMID: 38598482; PMCID: PMC11006147; DOI: 10.1371/journal.pone.0300710.
Abstract
How do author perceptions match up to the outcomes of the peer-review process and the perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception of their own papers after seeing the reviews. The salient results are: (1) Authors overestimated the acceptance probability of their papers roughly three-fold: the median prediction was 70% against an approximately 25% acceptance rate. (2) Female authors exhibited a marginally higher (statistically significant) miscalibration than male authors; predictions of authors invited to serve as meta-reviewers or reviewers were similarly calibrated to each other, but better calibrated than those of authors who were not invited to review. (3) Authors' relative ranking of the scientific contribution of two of their own submissions generally agreed with their predicted acceptance probabilities (93% agreement), but in a notable 7% of responses authors predicted a worse outcome for the paper they ranked higher. (4) The author-provided rankings disagreed with the peer-review decisions about a third of the time; when co-authors ranked their jointly authored papers, they disagreed at a similar rate, about a third of the time. (5) At least 30% of respondents with both accepted and rejected papers said that their perception of their own paper improved after the review process. Stakeholders in peer review should take these findings into account when setting their expectations of the process.
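The headline calibration gap in result (1) can be reproduced from the two summary statistics quoted above. This is an illustrative sketch, not the authors' analysis code; the two numbers are taken directly from the abstract.

```python
# Sketch: reproduce the "roughly three-fold overestimate" from the abstract's
# summary statistics (median predicted acceptance probability vs. base rate).
median_predicted_accept = 0.70  # authors' median predicted probability (abstract)
actual_accept_rate = 0.25       # approximate NeurIPS 2021 acceptance rate (abstract)

overestimate_factor = median_predicted_accept / actual_accept_rate
print(f"Median prediction overshoots the base rate by {overestimate_factor:.1f}x")
# → Median prediction overshoots the base rate by 2.8x
```

The ratio of 2.8 is what the abstract rounds to "roughly a three-fold overestimate".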
Affiliation(s)
- Charvi Rastogi
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Percy Liang
- Department of Computer Science, Stanford University, Stanford, California, United States of America
- Hal Daumé III
- Department of Computer Science, University of Maryland, College Park, Maryland, United States of America
- Emma Pierson
- Jacobs Technion-Cornell Institute, Cornell Tech, New York, New York, United States of America
| | - Nihar B. Shah
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
2
Cousens R. Why can't we make research grant allocation systems more consistent? A personal opinion. Ecol Evol 2019; 9:1536-1544. PMID: 30847053; PMCID: PMC6392383; DOI: 10.1002/ece3.4855.
Abstract
Uncertainty is expected to enter into our grant allocation processes at many points, not limited to those directly involving assessment by peers. The selection of grants for funding is thus prodigiously low in statistical power and will remain so. The replacement of current systems with some form of lottery, as has been proposed, seriously risks weakening the quality of applications. Opportunities exist for agencies to encourage and reward greater clarity and innovation in research outcomes.
Affiliation(s)
- Roger Cousens
- School of BioSciences, The University of Melbourne, Parkville, Victoria, Australia
3
Abstract
We review the literature to identify common problems of decision-making in individuals and groups. We are guided by a Bayesian framework to explain the interplay between past experience and new evidence, and the problem of exploring the space of hypotheses about all the possible states that the world could be in and all the possible actions that one could take. There are strong biases, hidden from awareness, that enter into these psychological processes. While biases increase the efficiency of information processing, they often do not lead to the most appropriate action. We highlight the advantages of group decision-making in overcoming biases and searching the hypothesis space for good models of the world and good solutions to problems. Diversity of group members can facilitate these achievements, but diverse groups also face their own problems. We discuss means of managing these pitfalls and make some recommendations on how to make better group decisions.
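The Bayesian framework the review invokes, in which past experience (the prior) is combined with new evidence (the likelihood), can be illustrated with a minimal worked example. This is a generic Bayes'-rule sketch with hypothetical numbers, not a model from the reviewed paper.

```python
# Illustrative only: Bayes' rule for a binary hypothesis, showing how a strong
# prior formed from past experience tempers moderately diagnostic new evidence.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) for a binary hypothesis H given evidence E."""
    numerator = p_e_given_h * prior_h
    evidence = numerator + p_e_given_not_h * (1.0 - prior_h)
    return numerator / evidence

# Hypothetical numbers: a 0.9 prior meets evidence that favors not-H (0.3 vs 0.7).
p = posterior(prior_h=0.9, p_e_given_h=0.3, p_e_given_not_h=0.7)
print(round(p, 3))  # → 0.794
```

Despite evidence pointing the other way, the posterior stays near 0.8, which is one way biases rooted in prior experience can persist against disconfirming data.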
Affiliation(s)
- Dan Bang
- Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3BG, UK
- Interacting Minds Centre, Aarhus University, 8000 Aarhus, Denmark
- Chris D. Frith
- Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3BG, UK
- Institute of Philosophy, University of London, London WC1E 7HU, UK