In the memo, which was leaked to The Verge, OpenAI Chief Revenue Officer Denise Dresser sets out the company’s strategic direction for the second quarter. Alongside five core priorities for the enterprise business, the document includes unusually sharp criticism of rival Anthropic, as well as references to previously unknown products and internal codenames.

The central argument is that enterprise AI is entering a more mature phase. Model performance alone is no longer enough. Customers now want to know how well AI fits into their workflows, control systems, and day-to-day operations. According to Dresser, OpenAI believes the main constraint is no longer demand, but capacity. She also suggests that multi-year enterprise deals worth hundreds of millions of dollars are becoming more common.

New model “Spud” seen as the foundation for a future “super app”

Dresser mentions a new model under the internal codename “Spud,” describing it as an important step in building the intelligence foundation for the next generation of work. Based on early customer feedback, the model reportedly delivers stronger reasoning, a better understanding of intent and dependencies, and more reliable production performance.

According to the memo, Spud is expected to make all of OpenAI’s core products significantly better and is part of an iterative deployment strategy: push the boundaries, integrate advances into real products, learn from real-world usage, and feed those insights into better systems on the path toward a broader “super app.” Dresser also argues that OpenAI’s compute advantage is already showing up for customers in the form of higher token limits, lower latency, and more dependable execution of complex workflows.

From prompts to agents: “Frontier” as a platform strategy

Dresser writes that the market is shifting from prompts to agents. Customers increasingly want systems that can use tools autonomously, operate across workflows, and function reliably inside real business environments. To do that, they need orchestration, control, security, and governance.

OpenAI aims to address this with an agent platform called “Frontier,” which it reportedly wants to position as the standard platform for enterprise agents. The logic is straightforward: better models increase the value of the platform, deeper integration raises switching costs, and every workflow running through the system makes OpenAI harder to replace. Dresser states the ambition directly: this is how the company moves from being a product vendor to becoming operational infrastructure.

Amazon partnership seen as a counterweight to Microsoft

According to the memo, the Microsoft partnership has been fundamental to OpenAI’s success, but it has also limited the company’s ability to meet enterprises where they already work. For many organizations, that means Amazon Bedrock.

Since the Amazon partnership was announced in late February, demand has reportedly been “frankly overwhelming.” Dresser also refers to something called the “Amazon Stateful Runtime Environment,” which appears designed to go beyond simple model access by enabling memory, context, and continuity across interactions. That would allow AI systems to function more reliably over time and across complex business processes.
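
The memo reportedly offers no technical detail on how such a runtime would work. As a purely illustrative sketch, assuming nothing beyond the description above, the contrast between stateless model access and a stateful session might look like this (the class StatefulSession and all of its fields are invented names for illustration, not the actual product API):

```python
# Purely hypothetical sketch -- "StatefulSession" and its fields are
# invented for illustration; the memo does not describe the actual API.
from dataclasses import dataclass, field


@dataclass
class StatefulSession:
    """Toy model of a runtime that carries state across interactions."""
    history: list[str] = field(default_factory=list)      # prior messages persist
    memory: dict[str, str] = field(default_factory=dict)  # durable facts persist

    def send(self, message: str) -> str:
        # A real runtime would call a model here; this just echoes with context.
        self.history.append(message)
        return f"(context: {len(self.history)} messages so far) reply to: {message}"


# Stateless access: every request starts from zero.
# Stateful runtime: the session accumulates context between calls.
session = StatefulSession()
session.memory["customer_tier"] = "enterprise"
print(session.send("Summarize the open support tickets."))
print(session.send("Draft replies to the top three."))  # earlier context carries over
```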

She highlights three main benefits: lower adoption barriers for AWS-native customers, a stronger position in regulated industries, and deeper integration all the way into production runtime environments for multi-step agents.

A full-stack strategy and a deployment service called “DeployCo”

The memo presents OpenAI as a platform with multiple entry points: ChatGPT for Work for knowledge workers, Codex for software development, the API for embedded intelligence, Frontier as the agent platform, and the Amazon runtime for production-grade execution.

Dresser writes that OpenAI should stop thinking of itself as a company with separate product lines. Instead, it should build a flywheel: better models drive more usage, more usage drives deeper integration, deeper integration drives multi-product adoption, and multi-product adoption makes OpenAI harder to replace.

In this framework, the biggest bottleneck in enterprise AI is no longer whether the technology works, but whether companies can deploy it successfully and at scale. To address that, OpenAI is reportedly planning a service called “DeployCo,” which would act as a deployment engine alongside so-called “Frontier Alliance” partners.

Direct attack on Anthropic: inflated revenue and not enough compute

The sharpest section of the memo is aimed at Anthropic. Dresser accuses the company of building its narrative around fear, restriction, and the idea that a small elite should control AI. By contrast, she argues that OpenAI’s more positive message will win out over time. She describes the competitive landscape as more intense than ever.

She also claims Anthropic made a strategic mistake by failing to secure enough compute capacity early enough, and says customers are already feeling the effects through rate limiting, weaker availability, and a less reliable product experience. In her view, OpenAI recognized the exponential importance of compute earlier and acted faster.

Dresser adds that Anthropic’s focus on coding tools may have given it an early advantage, but argues that such a narrow positioning could become a weakness in a platform market as AI expands beyond developers into every team and industry.

The most aggressive claim concerns Anthropic’s finances. According to Dresser, the company’s reported revenue run rate is inflated because it books revenue-share payments to Amazon and Google on a gross basis, an accounting approach that makes revenue appear larger than it actually is. Based on OpenAI’s internal analysis, she says this overstates Anthropic’s run rate by around $8 billion relative to the currently cited $30 billion figure. OpenAI, by contrast, reportedly books its Microsoft revenue share on a net basis, which Dresser says is closer to the standards OpenAI would likely be expected to follow as a public company.
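
To make the accounting distinction concrete: under gross-basis recognition, a company books the full amount a customer pays, including the share later passed through to a cloud partner; under net-basis recognition, it books only the portion it keeps. A minimal arithmetic sketch, using only the figures cited in the memo:

```python
# Illustrative arithmetic only, based solely on the memo's own figures.
cited_run_rate_gross = 30e9   # Anthropic's reported annualized run rate (USD)
claimed_revenue_share = 8e9   # memo's estimate of pass-through payments to partners

# Net basis recognizes only the revenue the company actually keeps.
implied_run_rate_net = cited_run_rate_gross - claimed_revenue_share
print(f"Implied net run rate: ${implied_run_rate_net / 1e9:.0f}B")  # -> $22B
```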