(13 Apr 2026) Frontiers has released AI guidance, unique in publishing, that covers the entire publication lifecycle – from researchers to editors and peer reviewers. It moves beyond simplistic “allowed / not allowed” rules toward practical, responsible routes for AI adoption, while calling for policy to evolve in step with how researchers and reviewers actually use AI.
The guidance responds to what is already majority practice across the sector. As highlighted in Frontiers’ recent whitepaper, most peer reviewers now use AI, and policy must keep pace. With AI already embedded across publication stages, structured, transparent governance is needed rather than ad hoc controls.
This is the first framework to provide clear, operational routes forward for AI use in every publishing role (whether researcher, editor, or reviewer), promoting AI use that is accountable, transparent, risk-aware, and innovation-enabling.
Rather than blocking AI outright, Frontiers retains the principle that the human remains accountable and translates it into responsible practice through the BE WISE framework:
- B — Be transparent
- E — Ensure accountability
- W — Work with the right tools
- I — Inform yourself
- S — Safeguard integrity
- E — Embed equity
Taken together, BE WISE principles provide a structured way forward — enabling innovation while protecting research integrity.
The guidance operationalizes the BE WISE framework through structured “permission-to-proceed” checkpoints across all roles. Researchers, editors, and reviewers are advised to use AI only if, at every key point, they can answer yes to four core checks:
- Impact and oversight
- Policies and governance
- Permitted inputs
- Verification
If any answer is no, AI use should be limited to low-impact tasks or avoided altogether.