The battle between OpenAI and Anthropic for leadership in the AI industry is increasingly being fought at the level of compute infrastructure. In an investor memo reportedly circulated this week, OpenAI argued that its early and sustained expansion of compute capacity has given it a meaningful advantage over its rival.

According to the memo, OpenAI pulled ahead of Anthropic by adding capacity “quickly and consistently.” That aggressive infrastructure push, though criticized by some as overly expensive, has allowed the company to meet surging demand for AI products more effectively.

The memo appears to have been prompted in part by Anthropic’s announcement of a more powerful AI model called Mythos. For safety reasons, the model is initially being made available only to selected partners through Project Glasswing. Still, some industry observers question whether Anthropic could realistically deploy a model of that scale with its current compute resources.

Some speculation suggests that Mythos could be the first 10-trillion-parameter model. That idea rests partly on public comments from Elon Musk that xAI is currently training a model of that size, and partly on benchmarks shared by Nvidia CEO Jensen Huang showing how quickly a new Vera Rubin system could train future 10-trillion-parameter models. By that logic, OpenAI may hold an advantage because it would be better positioned to meet demand for increasingly powerful models such as the upcoming “Spud.” In practice, however, most customers would likely receive a smaller, distilled version of any model at that scale.

Anthropic considers designing its own AI chips

Anthropic, meanwhile, is not standing still. According to Reuters, the AI lab is considering designing its own chips. The plans are said to be at an early stage, with no dedicated team and no finalized design, according to one of the three sources cited in the report. The company could still abandon the effort and continue relying entirely on outside suppliers.

The deliberations reflect the ongoing shortage of AI chips. Anthropic currently uses a mix of Google Tensor Processing Units, or TPUs, and Amazon chips to develop and run its Claude chatbot. Earlier this week, Anthropic also signed a long-term deal with Google and Broadcom tied to a broader commitment to invest $50 billion in US data center infrastructure.

According to industry sources cited by Reuters, designing an advanced AI chip can cost around $500 million. Reuters also reported that Meta and OpenAI are exploring similar efforts. Anthropic declined to comment.

OpenAI puts UK Stargate project on hold

Even as OpenAI highlights its infrastructure lead to investors, the company has paused a major UK data center initiative. Reuters reported that the decision was driven by an unfavorable regulatory environment and high energy costs.

The project, known as Stargate UK, was launched in September 2025 in partnership with Nvidia and Nscale, coinciding with a visit to the UK by US President Donald Trump that resulted in broader investment pledges worth £150 billion. The initiative was intended to strengthen the UK’s sovereign compute capacity and accelerate AI adoption across the country.

OpenAI said it still sees “enormous potential for the UK’s AI future” and plans to move forward once the regulatory and energy environment is more supportive of long-term infrastructure investment. The company added that London remains home to its largest international research hub.

This story highlights how the AI race is increasingly being shaped by infrastructure, not just model performance. Access to compute, energy, and chip supply is becoming a core competitive advantage for leading labs. At the same time, OpenAI’s UK pause shows that even the biggest players remain constrained by regulation, energy costs, and local market conditions.