Under the agreement, AMD will supply up to six gigawatts of Instinct GPUs for Meta’s AI infrastructure across multiple product generations. According to AMD, initial deliveries for the first one-gigawatt buildout are expected to begin in the second half of 2026.
A key detail is the explicit emphasis on inference — running already trained AI models — rather than training new ones. That focus aligns with Meta’s scale, as the company aims to serve AI features to billions of users.
The agreement also includes a stock component: performance-based warrants for up to 160 million AMD shares, or roughly 10% of shares outstanding. The allocation is tied not only to deployment milestones but also to stock-price thresholds and technical and commercial targets. The first tranche is linked to the initial one-gigawatt delivery, with additional tranches tied to scaling toward six gigawatts.
OpenAI has reportedly signed a very similar deal with AMD, also involving up to six gigawatts of compute capacity and a 10% equity stake structure.
The technical foundation is said to be a Meta-customized Instinct GPU based on AMD’s MI450 architecture. The deployment will also include sixth-generation EPYC processors codenamed “Venice,” as well as a future EPYC chip, “Verano,” with workload-specific optimizations. The full stack is expected to run on AMD’s ROCm software.
The infrastructure platform is based on the Helios rack-scale architecture, which AMD and Meta co-developed through the Open Compute Project and presented at the OCP Global Summit 2025. The companies say they will closely align silicon, systems, and software roadmaps — effectively a form of vertical integration across the AI infrastructure stack.
Meta already uses millions of EPYC CPUs and significant volumes of Instinct MI300 and MI350 GPUs in its global infrastructure. The new agreement expands that relationship, and Meta is also set to become a lead customer for upcoming EPYC generations, including “Venice” and “Verano.”
For AMD, the deal is a major positioning move in the AI chip market, which remains heavily dominated by Nvidia. Securing Meta as a customer for up to six gigawatts of AMD hardware gives AMD another marquee reference alongside OpenAI. For Meta, the agreement is also strategic: reports suggest that similar supply diversification has already helped large buyers negotiate better Nvidia GPU pricing.
Competition remains intense. Meta is developing its own AI inference chips that may eventually support training, while Microsoft and AWS are building custom silicon of their own. Startups such as Cerebras are likewise pushing aggressively into AI inference infrastructure.