G42, the Abu Dhabi-based AI powerhouse, is making a massive bet on India's AI future. At the India AI Impact Summit 2026, the company announced a partnership with Cerebras, the U.S. chipmaker known for its wafer-scale engine technology, to deploy eight exaflops of compute capacity across India. The deal ranks among the largest AI infrastructure commitments in Asia, arrives as the region faces a critical shortage of compute resources, and positions India as a key node in the emerging AI supply chain while cementing G42's role as a bridge between Middle Eastern capital and Asian tech ambitions.
The timing couldn't be more strategic. India's been scrambling to build out AI infrastructure as demand from startups and enterprises skyrockets. According to industry estimates, the country currently has less than 2% of global AI compute capacity despite having one of the world's largest developer populations. This deal could change that calculus overnight.
Cerebras brings something different to the table than the Nvidia GPU clusters that dominate the market. The company's CS-3 systems are built around wafer-scale engines, single chips roughly the size of dinner plates, designed to train large language models faster and more efficiently than traditional GPU setups. For G42, which has been aggressively expanding its AI infrastructure footprint across the Middle East and Asia, that difference is a way to stand out from competitors betting entirely on Nvidia's ecosystem.
The eight exaflops figure is eye-popping. By definition, that's eight quintillion floating-point operations per second, enough computational power to train multiple frontier AI models simultaneously. It's the kind of capacity typically reserved for national supercomputing initiatives or hyperscale cloud providers.
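To make the arithmetic concrete, here's a quick back-of-envelope sketch in Python. The exaflop-to-FLOPS conversion is exact by definition; the training-compute budget used for comparison is an illustrative assumption (public estimates put frontier-model training runs on the order of 10^25 floating-point operations), not a figure from the announcement.

```python
# Back-of-envelope: what does eight exaflops mean?
# One exaflop is 10**18 floating-point operations per second (FLOPS),
# so the conversion below is exact. The training-time comparison uses
# an assumed, illustrative compute budget, not announced figures.

EXA = 10**18
capacity_flops = 8 * EXA  # 8 exaflops = 8 quintillion FLOPS

print(f"Capacity: {capacity_flops:.1e} FLOPS")  # 8.0e+18

# Assumption: a frontier-scale training run needs roughly 10**25 total
# floating-point operations (a commonly cited order of magnitude).
training_budget = 1e25  # total FLOPs, assumed for illustration
seconds = training_budget / capacity_flops
print(f"One such run: ~{seconds / 86_400:.0f} days at 100% utilization")  # ~14 days
```

At perfect utilization, a single run of that assumed size would take about two weeks on the full deployment, which is why a fleet this size can plausibly host multiple frontier-scale training efforts at once.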