1. Hosseini M, Holcombe AO, Kovacs M, Zwart H, Katz DS, Holmes K. Group authorship, an excellent opportunity laced with ethical, legal and technical challenges. Account Res 2024:1-23. PMID: 38445637. DOI: 10.1080/08989621.2024.2322557.
Abstract
Group authorship (also known as corporate, team, or consortium authorship) refers to attribution practices that use the name of a collective (be it a team, group, project, corporation, or consortium) in the authorship byline. Data show that group authorships are on the rise, but they have thus far received little specific attention in scholarly discussions about authorship. Group authorship can minimize tensions within the group about authorship order and the criteria used for including or excluding individual authors. However, current uses of group authorship have drawbacks, such as ethical challenges associated with the attribution of credit and responsibilities, legal challenges regarding how copyrights are handled, and technical challenges related to the lack of persistent identifiers (PIDs), such as ORCID, for groups. We offer two recommendations: 1) journals should develop and share context-specific and unambiguous guidelines for group authorship, for which they can use the four baseline requirements offered in this paper; 2) persistent identification of groups and consistent reporting of members' contributions should be facilitated by devising PIDs for groups and linking these to the ORCIDs of their individual contributors and to the Digital Object Identifier (DOI) of the published item.
Affiliation(s)
- Mohammad Hosseini
- Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Galter Health Sciences Library and Learning Center, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Alex O Holcombe
- School of Psychology, University of Sydney, Sydney, Australia
- Marton Kovacs
- Institute of Psychology, ELTE Eotvos Lorand University, Budapest, Hungary
- MNB Institute, John von Neumann University, Kecskemét, Hungary
- Hub Zwart
- Erasmus School of Philosophy, Erasmus University, Rotterdam, the Netherlands
- Daniel S Katz
- National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Computer Science, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Electrical & Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, USA
- School of Information Sciences, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Kristi Holmes
- Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Galter Health Sciences Library and Learning Center, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
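The linking this entry's abstract recommends — a group-level PID tied to member ORCIDs and the DOI of the published item — could be sketched as a simple metadata record. The identifiers, field names, and "pid:group/..." scheme below are invented for illustration; no existing group-PID standard is assumed.

```python
import json

# Illustrative sketch of a group-authorship record linking a hypothetical
# group PID to member ORCIDs and the DOI of the published item.
# All identifiers, field names, and the PID scheme are invented.
group_record = {
    "group_pid": "pid:group/example-consortium",
    "group_name": "Example Research Consortium",
    "publication_doi": "10.1234/example.doi",
    "members": [
        {"orcid": "0000-0002-1825-0097", "role": "data curation"},
        {"orcid": "0000-0001-5109-3700", "role": "software"},
    ],
}

# Serializing the record makes both the group-level attribution and each
# member's contribution machine-readable.
serialized = json.dumps(group_record, indent=2)
print(serialized)
```

Such a record would let an indexing service resolve a group byline into its individual contributors without relying on free-text member lists.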
2. Blanc Catala I, Di Cosmo R, Giraud M, Le Berre D, Louvet V, Renaudin S. Establishing a national research software award. Open Research Europe 2023; 3:185. PMID: 38009089. PMCID: PMC10674088. DOI: 10.12688/openreseurope.16069.1.
Abstract
Software development has become an integral part of the scholarly ecosystem, spanning all fields and disciplines. To support the sharing and creation of knowledge in line with open science principles, and particularly to enable the reproducibility of research results, it is crucial to make the source code of research software available, allowing for modification, reuse, and distribution. Recognizing the significance of open-source software contributions in academia, the second French Plan for Open Science, announced by the Minister of Higher Education and Research in 2021, introduced a national award to promote open-source research software. This award serves multiple objectives: first, to highlight the software projects and teams that have devoted time and effort to developing outstanding research software, sometimes for decades and often with little recognition; second, to draw attention to the importance of software as a valuable research output and to inspire new generations of researchers to follow and learn from these examples. We present here an in-depth analysis of the design and implementation of this unique initiative. As a national award established by the French Ministry of Research explicitly to foster open science practices, it faced the intricate challenge of fairly evaluating open research software across all fields, striving for inclusivity across domains, applications, and participants. We provide a comprehensive report on the results of the first edition, which received 129 high-quality submissions. Additionally, we emphasize the impact of this initiative on the open science landscape, promoting software as a valuable research outcome, on par with publications.
Affiliation(s)
- Mathieu Giraud
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
- Daniel Le Berre
- Univ. Artois, CNRS, UMR 8188, Centre de Recherche en Informatique de Lens, F-62300 Lens, France
- Violaine Louvet
- Univ. Grenoble Alpes, Grenoble INP Institute of Engineering, CNRS, UMR 5224 LJK, Laboratoire Jean Kuntzman, F-38000 Grenoble, France
- Sophie Renaudin
- Direction de la recherche clinique et de l’innovation, Assistance Publique - Hôpitaux de Paris, F-75000 Paris, France
- College of experts for source code and software, Committee for Open Science
- Ministère de l’Enseignement supérieur et de la Recherche, F-75000 Paris, France
- Inria, Université Paris Cité, F-75000 Paris, France
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
- Univ. Artois, CNRS, UMR 8188, Centre de Recherche en Informatique de Lens, F-62300 Lens, France
- Univ. Grenoble Alpes, Grenoble INP Institute of Engineering, CNRS, UMR 5224 LJK, Laboratoire Jean Kuntzman, F-38000 Grenoble, France
- Direction de la recherche clinique et de l’innovation, Assistance Publique - Hôpitaux de Paris, F-75000 Paris, France
3. Tomaszewski R. Visibility, impact, and applications of bibliometric software tools through citation analysis. Scientometrics 2023; 128:4007-4028. PMID: 37287881. PMCID: PMC10234239. DOI: 10.1007/s11192-023-04725-2.
Abstract
This study examines the visibility, impact, and applications of bibliometric software tools in the peer-reviewed literature through a "Cited Reference Search" using the Web of Science (WOS) database. A total of 2882 research articles citing eight bibliometric software tools were extracted from the WOS Core Collection for the period 2010-2021. These citing articles are analyzed by publication year, country, publication title, publisher, open access level, funding agency, and WOS category. Mentions of bibliometric software tools in Author Keywords and KeyWords Plus are also compared. The VOSviewer software is used to identify discipline-specific research areas from the keyword co-occurrences of the citing articles. The findings reveal that while bibliometric software tools are making a noteworthy impact and contribution to research, their visibility through referencing, Author Keywords, and KeyWords Plus is limited. This study serves as a clarion call to raise awareness and initiate discussions about citing practices for software tools in scholarly publications.
Affiliation(s)
- Robert Tomaszewski
- California State University, Fullerton, 800 North State College Blvd, Fullerton, CA 92831 USA
4. Hosseini M, Gordijn B, Wafford QE, Holmes KL. A systematic scoping review of the ethics of Contributor Role Ontologies and Taxonomies. Account Res 2023:1-28. PMID: 36641627. DOI: 10.1080/08989621.2022.2161049.
Abstract
Contributor Role Ontologies and Taxonomies (CROTs) provide a standard list of roles to specify individual contributions to research. CROTs' most common application has been their inclusion alongside author bylines in scholarly publications. With the recent uptake of CROTs among publishers, particularly of the Contributor Role Taxonomy (CRediT), some have anticipated a positive impact on ethical issues regarding the attribution of credit and responsibilities, but others have voiced concerns about CROTs' shortcomings and the ways they could be misunderstood or have unintended consequences. Since these discussions have never been consolidated, this review collated and explored published viewpoints about the ethics of CROTs. After searching Ovid Medline, Scopus, Web of Science, and Google Scholar, 30 papers met the inclusion criteria and were analyzed. We identified eight themes and 20 specific issues related to the ethics of CROTs and provide four recommendations for CROT developers, custodians, and others seeking to use CROTs in their workflows, policy, and practice: 1) compile comprehensive instructions that explain how CROTs should be used; 2) improve the coherence of the terms used; 3) translate roles into languages other than English; 4) communicate a clear vision about future development plans and be transparent about CROTs' strengths and weaknesses. We conclude that CROTs are not a panacea for unethical attributions and should be complemented with initiatives that support the social and infrastructural transformation of scholarly publications.
Affiliation(s)
- Mohammad Hosseini
- Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Bert Gordijn
- Institute of Ethics, Dublin City University, Dublin, Ireland
- Q Eileen Wafford
- Galter Health Sciences Library and Learning Center, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Kristi L Holmes
- Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Galter Health Sciences Library and Learning Center, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
5. Garijo D, Ménager H, Hwang L, Trisovic A, Hucka M, Morrell T, Allen A. Nine best practices for research software registries and repositories. PeerJ Comput Sci 2022; 8:e1023. PMID: 36092012. PMCID: PMC9455149. DOI: 10.7717/peerj-cs.1023.
Abstract
Scientific software registries and repositories improve software findability and research transparency, provide information for software citations, and foster preservation of computational methods in a wide range of disciplines. Registries and repositories play a critical role by supporting research reproducibility and replicability, but developing them takes effort and few guidelines are available to help prospective creators of these resources. To address this need, the FORCE11 Software Citation Implementation Working Group convened a Task Force to distill the experiences of the managers of existing resources in setting expectations for all stakeholders. In this article, we describe the resultant best practices which include defining the scope, policies, and rules that govern individual registries and repositories, along with the background, examples, and collaborative work that went into their development. We believe that establishing specific policies such as those presented here will help other scientific software registries and repositories better serve their users and their disciplines.
Affiliation(s)
- Hervé Ménager
- Institut Pasteur, Université Paris Cité, Bioinformatics and Biostatistics Hub, Paris, France
- Lorraine Hwang
- University of California, Davis, Davis, California, United States
- Ana Trisovic
- Harvard University, Boston, Massachusetts, United States
- Michael Hucka
- California Institute of Technology, Pasadena, California, United States
- Thomas Morrell
- California Institute of Technology, Pasadena, California, United States
- Alice Allen
- University of Maryland, College Park, MD, United States
6. Du C, Cohoon J, Lopez P, Howison J. Understanding progress in software citation: a study of software citation in the CORD-19 corpus. PeerJ Comput Sci 2022; 8:e1022. PMID: 36091992. PMCID: PMC9454791. DOI: 10.7717/peerj-cs.1022.
Abstract
In this paper, we investigate progress toward improved software citation by examining current software citation practices. We first introduce our machine-learning-based data pipeline that extracts software mentions from the CORD-19 corpus, a regularly updated collection of more than 280,000 scholarly articles on COVID-19 and related historical coronaviruses. We then closely examine a stratified sample of extracted software mentions from recent CORD-19 publications to understand the status of software citation. We also searched online for the mentioned software projects and their citation requests. We evaluate both the practice of referencing software in publications and that of making software citable, in comparison with earlier findings and recent advocacy recommendations. We found increased mentions of software versions, increased open-source practices, and improved software accessibility. Yet we also found that informal mentions, which do not sufficiently credit software authors, remain common. Existing software citation requests were diverse but neither matched software citation advocacy recommendations nor were frequently followed by researchers authoring papers. Finally, we discuss implications for software citation advocacy and for standard-making efforts seeking to improve the situation. Our results show the diversity of software citation practices and how they differ from advocacy recommendations, provide a baseline for assessing the progress of software citation implementation, and enrich the understanding of existing challenges.
Affiliation(s)
- Caifan Du
- The University of Texas at Austin, Austin, TX, United States of America
- Johanna Cohoon
- The University of Texas at Austin, Austin, TX, United States of America
- James Howison
- The University of Texas at Austin, Austin, TX, United States of America
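The distinction this entry draws between versioned and informal software mentions can be illustrated with a toy check. The real pipeline described in the paper is machine-learning based; the regex sketch and the software-name list below are purely illustrative assumptions.

```python
import re

# Toy sketch of flagging software mentions in article text and noting
# whether a version number accompanies them. The actual CORD-19 pipeline
# uses machine learning; this regex version only illustrates the idea.
KNOWN_SOFTWARE = ["SPSS", "GraphPad Prism", "ImageJ"]

def find_mentions(text):
    mentions = []
    for name in KNOWN_SOFTWARE:
        for match in re.finditer(re.escape(name), text):
            # Treat the mention as versioned if a number follows the name.
            tail = text[match.end():match.end() + 20]
            versioned = re.match(r"\s*v?\d+(\.\d+)*", tail) is not None
            mentions.append({"software": name, "versioned": versioned})
    return mentions

text = "Analyses were performed in SPSS 26 and figures drawn with ImageJ."
print(find_mentions(text))
```

Here the ImageJ mention would count as informal: the name appears with no version and, in a real article, typically no accompanying reference.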
7. Buneman P, Dosso D, Lissandrini M, Silvello G. Data citation and the citation graph. Quantitative Science Studies 2022. DOI: 10.1162/qss_a_00166.
Abstract
The citation graph is a computational artifact that is widely used to represent the domain of published literature. It represents connections between published works, such as citations and authorship. Among other things, the graph supports the computation of bibliometric measures such as h-indexes and impact factors. There is now an increasing demand that we should treat the publication of data in the same way that we treat conventional publications. In particular, we should cite data for the same reasons that we cite other publications. In this paper we discuss what is needed for the citation graph to represent data citation. We identify two challenges: to model the evolution of credit appropriately (through references) over time and to model data citation not only to a data set treated as a single object but also to parts of it. We describe an extension of the current citation graph model that addresses these challenges. It is built on two central concepts: citable units and reference subsumption. We discuss how this extension would enable data citation to be represented within the citation graph and how it allows for improvements in current practices for bibliometric computations, both for scientific publications and for data.
8. In-code citation practices in open research software libraries. J Informetr 2021. DOI: 10.1016/j.joi.2021.101139.
9. Akhlaghi M, Infante-Sainz R, Roukema BF, Khellat M, Valls-Gabaud D, Baena-Galle R, Barba LA, Gesing S. Toward Long-Term and Archivable Reproducibility. Comput Sci Eng 2021. DOI: 10.1109/mcse.2021.3072860.
10. Di Cosmo R, Gruenpeter M, Zacchiroli S. Referencing Source Code Artifacts: A Separate Concern in Software Citation. Comput Sci Eng 2020. DOI: 10.1109/mcse.2019.2963148.
11. Katz DS, Chue Hong NP, Clark T, Fenner M, Martone ME. Software and Data Citation. Comput Sci Eng 2020. DOI: 10.1109/mcse.2020.2969730.
12. Bigatti AM, Carette J, Davenport JH, Joswig M, de Wolff T. Archiving and Referencing Source Code with Software Heritage. Lecture Notes in Computer Science 2020. PMCID: PMC7340894. DOI: 10.1007/978-3-030-52200-1_36.
Abstract
Software, and software source code in particular, is widely used in modern research. It must be properly archived, referenced, described, and cited in order to build a stable and long-lasting corpus of scientific knowledge. In this article we show how the Software Heritage universal source code archive provides a means to fully address the first two concerns: by seamlessly archiving all publicly available software source code, and by providing intrinsic persistent identifiers that allow referencing it at various granularities in a way that is both convenient and effective. We call upon the research community to adopt this approach widely.
Affiliation(s)
- Anna Maria Bigatti
- Dipartimento di Matematica, Università degli Studi di Genova, Genova, Italy
- Jacques Carette
- Computing and Software, McMaster University, Hamilton, ON Canada
- Michael Joswig
- Institut für Mathematik, MA 6-2, TU Berlin, Berlin, Germany
- Timo de Wolff
- Technische Universität Braunschweig, Braunschweig, Germany
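The intrinsic identifiers this entry describes are Software Heritage identifiers (SWHIDs), whose core syntax is `swh:1:<type>:<hash>`. A minimal validation sketch, covering only that core form (qualifiers such as line ranges are omitted, and the hash below is a placeholder):

```python
import re

# Minimal sketch of validating the core syntax of a Software Heritage
# identifier (SWHID): "swh:1:<type>:<40-hex-digit hash>". The <type>
# encodes the granularity: cnt (file content), dir (directory),
# rev (revision), rel (release), snp (snapshot).
SWHID_CORE = re.compile(r"^swh:1:(cnt|dir|rev|rel|snp):[0-9a-f]{40}$")

def is_valid_core_swhid(identifier):
    return SWHID_CORE.fullmatch(identifier) is not None

# Placeholder hash for illustration only; a real SWHID carries the
# intrinsic hash of the archived object.
example = "swh:1:cnt:" + "0" * 40
print(is_valid_core_swhid(example))
```

Because the hash is computed from the object itself, such an identifier remains valid at whichever granularity (file, directory, revision) a citation needs.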
13.

14.
Abstract
Background: Evaluation of the quality of research software is a challenging and relevant issue, still not sufficiently addressed by the scientific community. Methods: Our contribution begins by defining, precisely but widely enough, the notions of research software and of its authors, followed by a study of the evaluation issues, as the basis for proposing a sound assessment protocol: the CDUR procedure. Results: CDUR comprises four steps: Citation, to deal with correct research software identification; Dissemination, to measure good dissemination practices; Use, devoted to the evaluation of usability aspects; and Research, to assess the impact of the scientific work. Conclusions: Some conclusions and recommendations are finally included. The evaluation of research is the keystone to boosting the evolution of Open Science policies and practices. We also believe that research software evaluation is a fundamental step toward better research software practices and, thus, toward more efficient science.
Affiliation(s)
- Teresa Gomez-Diaz
- Laboratoire d'Informatique Gaspard-Monge, Centre National de la Recherche Scientifique, University of Paris-Est Marne-la-Vallée, Marne-la-Vallée, France