Anthropic expands Google chip deal as AI compute race intensifies
Anthropic said on April 7, 2026, that it has expanded its partnership with Google and Broadcom in a deal that will add 3.5 gigawatts of next-generation TPU capacity starting in 2027 — a sign of how aggressively major artificial intelligence companies are locking up the compute needed to train and run frontier models.
The company said the agreement is its most significant compute commitment to date. The move comes as Anthropic says customer demand for Claude and its coding tools continues to climb, forcing the startup to scale infrastructure at a pace that increasingly resembles the capital plans of much larger technology firms.
Anthropic deepens its reliance on Google chips
According to Anthropic, the new arrangement will expand its use of Google’s tensor processing units, or TPUs, which are designed for machine learning workloads. ITPro reported that the deal covers 3.5 gigawatts of TPU capacity, with the vast majority of the infrastructure expected to be physically located in the United States.
The company has already been using a mix of TPUs, Amazon Trainium chips and Nvidia GPUs. Anthropic said the latest agreement extends that strategy rather than replacing it, giving the company more room to meet demand as its products spread across consumer and enterprise use cases.
Compute demand remains central to the AI business
The announcement underscores a broader shift in the AI sector: access to chips, power and data-center space has become as strategically important as model performance. Anthropic said the new capacity will come online from 2027, suggesting the company is planning well ahead for continued growth in usage and training requirements.
In the same announcement, Anthropic said its annualized revenue run rate has passed $30 billion and that more than 1,000 business customers now spend over $1 million a year on its services. Those figures point to a business that is scaling quickly, but also one that is likely to keep consuming large amounts of compute to support that growth.
Broadcom and Google gain from the infrastructure push
Broadcom co-designs Google’s TPUs, and the chips are manufactured by Taiwan Semiconductor Manufacturing Co., according to ITPro. For Google, the deal reinforces the role of its in-house silicon as a competitive alternative to Nvidia’s GPUs in the AI infrastructure market.
For Anthropic, the agreement adds another layer of supply at a time when major AI developers are competing for limited hardware and power resources. The company did not say how much the deal is worth, and it did not immediately clarify which specific TPU generations will be deployed.
What to Watch
The key question is whether more AI developers follow Anthropic’s lead and secure long-term chip capacity rather than relying on short-term cloud access. If demand continues to rise, the industry’s next bottleneck may be less about model quality than about who can secure enough power, silicon and data-center space to keep those models running.
Source Reference
Primary source: ITPro
Source date: 2026-04-07T00:00:00Z