Anthropic says annualized revenue run rate passes $30 billion as it adds 3.5 gigawatts of TPU capacity
Anthropic disclosed this week that its annualized revenue run rate has passed $30 billion and that more than 1,000 business customers now spend over $1 million a year on its services, a rapid commercial step-up for one of the most closely watched AI companies. The disclosure came alongside a new infrastructure agreement that will give Anthropic roughly 3.5 gigawatts of Google TPU capacity starting in 2027, routed through Broadcom under a supply arrangement announced in securities filings on April 7, 2026.
Anthropic ties growth to a multi-gigawatt compute buildout
The new capacity commitment matters because it puts Anthropic’s expansion on an industrial footing. The 3.5-gigawatt figure comes in addition to about 1 gigawatt of Google Cloud capacity already expected to come online in 2026 under an earlier agreement, according to the company’s disclosures. Anthropic said the expanded compute is designed to support continued commercial growth, while Broadcom said its role includes supplying networking and related components for Google’s next-generation AI racks through 2031.
The deal also shows how AI infrastructure is becoming a three-way stack rather than a simple cloud purchase. Google owns the TPU architecture and software ecosystem, Broadcom helps translate that design into manufacturable silicon and system components, and TSMC handles fabrication. That structure is increasingly important as frontier model providers race to secure enough capacity to train and run larger systems at scale.
Enterprise demand is becoming the clearest metric
Anthropic’s claim that its annualized revenue run rate has crossed $30 billion is notable not just as a financial milestone, but as evidence that enterprise usage is deepening fast enough to justify massive capital commitments. The company said its customer base now includes more than 1,000 organizations spending above $1 million annually, a threshold that suggests AI is moving into paid production workflows rather than remaining in pilot programs.
That kind of spend profile is especially significant for Anthropic’s Claude products, which have been marketed heavily to developers and large organizations seeking coding, analysis, and workflow automation tools. In practice, the revenue and capacity announcements point to the same conclusion: the competitive bottleneck in frontier AI is no longer only model quality, but reliable access to enough compute to serve growing commercial demand.
The next phase of AI competition is infrastructure, not just software
The timing of the disclosure also reflects a broader shift in the market. Major AI companies are now making long-horizon commitments to power, chips, and data-center infrastructure years before those systems come online. For Anthropic, that means locking in compute well ahead of demand, even as its product line continues to evolve and enterprise adoption expands.
With the new TPU capacity scheduled to begin arriving in 2027, the company is signaling that its growth plan extends far beyond the current product cycle. The more immediate takeaway is that one of the sector’s fastest-growing players is now operating at a scale where gigawatts, not just model releases, are part of the news.
Source: The Information
Date: 2026-04-07T11:26:13Z