Nvidia just landed one of the biggest AI infrastructure deals ever publicly disclosed. The chipmaker announced a multiyear strategic partnership with Thinking Machines Lab to deploy at least one gigawatt of its next-generation Vera Rubin systems - a massive commitment slated for early 2027. The deal underscores how frontier AI model training is driving unprecedented demand for computing power, with companies racing to secure the hardware needed to stay competitive in the AI arms race.
The partnership, announced today, doubles down on Nvidia's dominance in AI infrastructure and pushes the boundaries of what gigawatt-scale computing actually means.
The timing matters. As AI labs race to train increasingly sophisticated frontier models, access to cutting-edge hardware has become the ultimate bottleneck. Thinking Machines Lab is betting that securing massive compute capacity now will give it a critical advantage in delivering what the company calls "customizable AI at scale" - an area of growing demand as enterprises move beyond off-the-shelf models toward systems tailored to specific use cases.
Nvidia's Vera Rubin platform represents the company's next evolution in AI accelerators, designed specifically for the massive parallel workloads that frontier model training demands. While technical specifications remain under wraps, the gigawatt figure signals a power draw on the order of a small city. For context, a typical large data center operates in the tens of megawatts, making this deployment roughly 20 to 50 times larger.
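For a rough sense of that multiple, here is a back-of-the-envelope sketch. The figures are illustrative assumptions only - one gigawatt for the deployment, 20 to 50 megawatts for a conventional large data center - since the actual facility specifications have not been disclosed.

```python
# Back-of-the-envelope comparison of the announced deployment to a
# conventional large data center. Figures are illustrative assumptions,
# not disclosed specifications of this deal.

deployment_mw = 1_000           # 1 gigawatt = 1,000 megawatts
typical_dc_mw = (20, 50)        # assumed range for a large conventional data center

for dc in typical_dc_mw:
    ratio = deployment_mw / dc
    print(f"vs. a {dc} MW facility: ~{ratio:.0f}x the power envelope")

# Output:
# vs. a 20 MW facility: ~50x the power envelope
# vs. a 50 MW facility: ~20x the power envelope
```

Under those assumed figures, the 20-50x range quoted above falls straight out of the division.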
The deployment timeline targets early 2027, giving both companies about 10 months to prepare the infrastructure, power delivery, and cooling systems required for this scale of operation. That's an aggressive schedule considering the complexity of gigawatt-class facilities, but it reflects the breakneck pace at which AI infrastructure is evolving.