Tang BL. Are there accurate and legitimate ways to machine-quantify predatoriness, or an urgent need for an automated online tool? Account Res 2023:1-6. [PMID: 37640512] [DOI: 10.1080/08989621.2023.2253425]
[Received: 03/04/2023] [Revised: 08/26/2023] [Accepted: 08/26/2023] [Indexed: 08/31/2023]
Abstract
Yamada and Teixeira da Silva voiced valid concerns about the inadequacies of an online machine learning-based tool for detecting predatory journals, and stressed the urgent need for an automated, open, online semi-quantitative system that measures "predatoriness". We agree that the machine learning-based tool in question lacks accuracy in demarcating and identifying journals beyond those already found on existing black and white lists, and that its use could have an undesirable impact on the community. We note further that the key characteristic of predatory journals, namely a lack of stringent peer review, would normally not have the visibility necessary for training and informing machine learning-based online tools. This, together with the gray zone of inadequate scholarly practice and the plurality of authors' perceptions of predatoriness, makes it desirable for any machine-based, quantitative assessment to be complemented or moderated by a community-based, qualitative assessment that would do more justice to both journals and authors.