Wikipedia just drew a hard line in the sand on AI-generated content. Late last week, the free encyclopedia updated its official guidelines to explicitly prohibit editors from using large language models to write or rewrite articles, citing the technology's tendency to violate "several of Wikipedia's core content policies." The move makes Wikipedia the highest-profile platform to reject AI-generated content outright, and it signals growing concern about quality and authenticity as AI tools flood the internet with generated text.
The timing couldn't be more striking. While Google, Microsoft, and Meta race to weave AI into every product they ship, Wikipedia is pumping the brakes hard. The platform's volunteer editor community has watched AI tools from OpenAI and Anthropic become ubiquitous across the web, and they're not impressed with what they're seeing.
The ban applies specifically to the English-language edition of Wikipedia, though it sets a precedent that other language editions may follow. According to The Verge's coverage, the policy isn't a total lockout: editors can still use AI for narrow, supervised tasks. Large language models can suggest basic copyedits, but only if the tool "does not introduce content of its own." That distinction keeps human judgment in the driver's seat.
Translation gets a pass too. Editors can use AI to convert articles from other language versions of Wikipedia into English, though they'll need to review and verify everything before publishing. It's a pragmatic compromise that acknowledges AI's strengths in mechanical tasks while maintaining Wikipedia's famously rigorous standards for accuracy and sourcing.