(30 Mar 2026) Last week, Wikipedia announced a new policy on the use of large language models (LLMs) when working on Wikipedia articles.
The policy itself is straightforward: volunteers cannot use LLMs to generate or rewrite article content. The only exceptions are “basic copyedits” and some translations. Even then, human review is required, and anyone using AI for translation must be skilled enough in both languages to spot any issues.
The new policy follows a 40-2 vote among the site’s editors on March 20 to place heavy restrictions on LLM usage. It was first reported by 404 Media, and feedback has been largely positive, with Frank Landymore at Futurism saying it will make Wikipedia a “refuge against AI slop.”
This change has been under development and debate since at least November 2025, with much of the focus on strengthening the policy and ensuring it applies to all Wikipedia content, not just new articles.
But now that the policy is here, it has a massive problem: How do you enforce it?
Read more here.