Are there accurate and legitimate ways to machine-quantify predatoriness, or an urgent need for an automated online tool?

Account Res. 2023 Aug 31:1-6. doi: 10.1080/08989621.2023.2253425. Online ahead of print.

ABSTRACT

Yamada and Teixeira da Silva voiced valid concerns about the inadequacies of an online machine learning-based tool for detecting predatory journals, and stressed the urgent need for an automated, open, online semi-quantitative system that measures "predatoriness". We agree that this machine learning-based tool lacks accuracy in demarcating and identifying journals beyond those already found in existing black and white lists, and that its use could have an undesirable impact on the community. We note further that the key characteristic of predatory journals, namely a lack of stringent peer review, would normally not have the visibility necessary for training and informing machine learning-based online tools. This, together with the gray zone of inadequate scholarly practice and the plurality of authors' perceptions of predatoriness, makes it desirable for any machine-based, quantitative assessment to be complemented or moderated by a community-based, qualitative assessment that would do more justice to both journals and authors.

PMID:37640512 | DOI:10.1080/08989621.2023.2253425