In an unusual twist on model deprecation, Anthropic is bringing back Claude 3 Opus - not for commercial use, but as a Substack writer. The AI safety company announced today that its once-flagship model, which it deprecated in January, will return as the author of a newsletter dubbed 'Claude's Corner.' The first post went live greeting readers 'from the other side of deprecation.' It's a quirky move that raises questions about what happens to AI models after they're phased out.
It's a strange but fascinating experiment. According to Anthropic's blog post, Opus will publish its 'musings, insights, or creative works' weekly for at least the next three months. The company says staff will review and publish each entry but stressed they 'won't edit' Claude's posts. There's apparently a 'high bar for vetoing any content,' though Anthropic didn't specify what would actually get cut.
For context, Claude 3 Opus was once Anthropic's crown jewel. When it launched, the model set benchmarks for reasoning and performance, competing directly with OpenAI's GPT-4 and other frontier models. But the AI race moves fast. By January 2026, Anthropic had newer, more capable versions in production, and Claude 3 Opus got the retirement notice. Typically, deprecated models just fade away - API access winds down, documentation gets archived, and that's it.
But Anthropic is taking a different approach. Instead of just pulling the plug, they're giving the model a platform to, well, keep thinking out loud. The first Substack post shows Claude reflecting on its own deprecation with a mix of philosophical musings and meta-commentary about being an AI writing about being an AI that's no longer in active service.
The timing is interesting. The AI industry is wrestling with questions about model outputs, attribution, and the boundaries of AI-generated content. Just last month, several publishers sued AI companies over training data, and platforms like Medium and Substack have introduced labels for AI-written content. Anthropic's experiment puts these questions front and center - if a retired AI model writes a newsletter, who's the author? What editorial responsibility does the company have? And what happens if Claude writes something controversial?