Thomas Dohmke, the former CEO who helped turn GitHub into a developer powerhouse, just pulled off one of the largest seed rounds in tech history. His stealth startup landed $60 million at a $300 million valuation to build AI systems that help developers wrangle the flood of code generated by AI agents. The round signals investor confidence that the next frontier in developer tools isn't just writing code - it's managing what AI creates.
Thomas Dohmke isn't wasting time between gigs. Less than a year after stepping down as CEO of GitHub, he's back with a bang - and a war chest that would make most Series B companies jealous. His new venture just closed a $60 million seed round at a $300 million post-money valuation, according to TechCrunch. The round was led by Felicis Ventures, though the startup itself remains in stealth mode.
The numbers are eye-popping even by 2026 standards. Seed rounds typically hover in the $2-5 million range, making this roughly 12-30 times the norm. But Dohmke's pedigree and the problem he's tackling justify the premium. During his tenure at GitHub, he oversaw the rollout and expansion of GitHub Copilot, the AI pair programmer now used by millions of developers worldwide. He knows the AI coding space intimately - and he's betting the real challenge is no longer generating code, it's managing it.
Here's the problem Dohmke is solving: AI coding assistants like Copilot, Cursor, and Replit are incredibly productive at churning out code. But they're also creating a new headache. Developers now face an avalanche of AI-generated functions, libraries, and scripts that need to be reviewed, tested, integrated, and maintained. It's like hiring an army of junior developers who never sleep but also never learned about code quality or documentation.
The startup's AI system aims to act as a management layer between human developers and AI code generators. Think of it as a quality control and orchestration platform that ensures AI-written code actually fits into existing codebases without breaking things or introducing security vulnerabilities. Details remain scarce since the company hasn't officially launched, but sources familiar with the matter suggest the platform uses its own AI models to analyze, categorize, and flag potential issues in AI-generated code before it reaches production.
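To make the idea concrete: a gate like the one described would sit between the code generator and the merge pipeline, parsing each submission and surfacing findings before a human ever reviews it. The sketch below is purely illustrative - the function name, the specific checks (dangerous calls, bare except clauses), and the reviewer-facing output are all invented here, not details of Dohmke's platform.

```python
import ast

# Hypothetical sketch of a pre-merge gate for AI-generated Python code.
# The two checks below (eval/exec calls, bare excepts) are illustrative
# stand-ins for whatever analysis the stealth platform actually performs.

DANGEROUS_CALLS = {"eval", "exec"}

def review_generated_code(source: str) -> list[str]:
    """Return human-readable findings for one generated-code submission."""
    findings = []
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        # Unparseable code is rejected outright, before any deeper checks.
        return [f"syntax error: line {err.lineno}: {err.msg}"]
    for node in ast.walk(tree):
        # Flag eval()/exec() calls, which rarely belong in generated code.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Flag bare `except:` blocks that silently swallow every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except clause")
    return sorted(findings)

snippet = """
try:
    result = eval(user_input)
except:
    result = None
"""
for finding in review_generated_code(snippet):
    print(finding)
```

A production system would go far beyond syntax-level checks - running tests, diffing against the existing codebase, scanning dependencies - but the shape is the same: every AI-written change passes through an automated reviewer before it can ship.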