Of editorial processes, AI models, and medical literature: the Magnetic Resonance Audiometry experiment

Abstract
The potential of artificial intelligence (AI) in medical research is unquestionable. Nevertheless, the scientific community has raised several concerns about the possible fraudulent use of these tools, which might be employed to generate inaccurate or, in extreme cases, erroneous messages that could find their way into the literature. In this experiment, we asked a generative AI program to write a technical report on a non-existent Magnetic Resonance Imaging technique called Magnetic Resonance Audiometry, receiving in return a complete, seemingly technically sound report substantiated by equations and references. We submitted this report to an international peer-reviewed indexed journal, and it passed the first round of review with only minor changes requested. With this experiment, we showed that the current peer-review system, already burdened by the overwhelming increase in the number of publications, may not be ready to also handle the explosion of these techniques. This highlights the urgent need for the entire community to address the issue of generative AI in scientific literature and, probably, to engage in a more profound discussion of the entire peer-review process.

Clinical relevance statement
Generative AI models are shown to be able to create a full manuscript, without any human intervention, that can survive peer review. Given the explosion of these techniques, a profound discussion of the entire peer-review process by the scientific community is mandatory.

Key Points
• The scie...
Source: European Radiology