Using AI in Peer Review Is a Breach of Confidentiality  

Amy Wernimont, Ph.D., Chief of Staff, NIH Center for Scientific Review
Stephanie Constant, Ph.D., Review Policy Officer, NIH

“As the scientific community continues to evolve, it is essential to leverage the latest technologies to improve and streamline the peer-review process. One such technology that shows great promise is artificial intelligence (AI). AI-based peer review has the potential to make the process more efficient, accurate, and impartial, ultimately leading to better quality research.”

I suspect many of you were not fooled into thinking that it was me who wrote that statement. A well-known AI tool wrote those words after I prompted it to discuss using AI in the peer review process. More and more, we are hearing stories about how researchers may use these tools when reviewing others’ applications, and even when writing their own.

Even if AI tools show “great promise,” do we allow their use? Reviewers are trusted with, and required to maintain, confidentiality throughout the application review process. There is no guarantee where material fed into a generative AI tool is sent, stored, or later used, so sharing any part of an application with such a tool discloses confidential information outside the review process. Thus, using AI to assist in peer review would involve a breach of confidentiality.

In a recently released guide notice, we explain that NIH scientific peer reviewers are prohibited from using natural language processors, large language models, or other generative AI technologies to analyze and formulate peer review critiques for grant applications and R&D contract proposals. Reviewers have long been required to certify and sign an...
Source: NIH Extramural Nexus (Open Mike blog)