(14 Jul 2023) On 23 June, NIH banned the use of online generative AI tools like ChatGPT “for analyzing and formulating peer-review critiques”—likely spurred in part by a letter from Siegle, who is at the University of Pittsburgh, and colleagues. After the conference they warned the agency that allowing ChatGPT to write grant reviews is “a dangerous precedent.” In a similar move, the Australian Research Council (ARC) on 7 July banned generative AI for peer review after learning of reviews apparently written by ChatGPT.
Other agencies are also developing a response. The U.S. National Science Foundation has formed an internal working group to look at whether there may be appropriate uses of AI as part of the merit review process and, if so, what “guardrails” may be needed, a spokesperson says. And the European Research Council expects to discuss AI for both writing and evaluating proposals.
For the funding agencies, confidentiality tops the list of concerns. When parts of a proposal are fed into an online AI tool, the information becomes part of its training data. NIH worries about “where data are being sent, saved, viewed, or used in the future,” its notice states.
Critics also worry that AI-written reviews will be error-prone (the bots are known to fabricate), biased against nonmainstream views because they draw from existing information, and lacking in the creativity that powers scientific innovation. “The originality of thought that NIH values is lost and homogenized with this process and may even constitute plagiarism,” NIH officials wrote in a blog post. For journals, reviewer accountability is also a concern. “There’s no guarantee the [reviewer] understands or agrees with the content” they’re providing, says Kim Eggleton, who heads peer review at IOP Publishing.
Science has the article in full.