When a single customer signs up for “millions” of AI chips, that is more than a routine supply agreement.
Meta Platforms has agreed to deploy millions of Nvidia processors and related hardware over the next several years as part of an expanded, multigenerational AI infrastructure deal, Meta and Nvidia said in a joint announcement.
Nvidia described the agreement as covering a broad range of its hardware portfolio, from GPUs to CPUs and full systems, in a release titled “Meta Builds AI Infrastructure With NVIDIA.”
Nvidia shares rose about 1.9% in premarket trading after the news, while Meta gained roughly 0.7% and rival Broadcom slipped modestly, Barron’s reported.
That mix tells you investors see this as a clear positive for Nvidia, a supportive signal for Meta’s AI ambitions, and a mild negative for competitors that had been chipping away at Nvidia’s dominance.
What Nvidia and Meta actually agreed to
The headlines say “millions of chips,” but the real story is how broad the product lineup is.
According to Nvidia and Meta, the deal includes:
- GPUs: Nvidia Blackwell GPUs for current and upcoming AI workloads, plus next‑generation Rubin GPUs as Meta upgrades its clusters over time
- CPUs: Grace CPUs, Nvidia's Arm‑based server processors tailored for AI and high‑performance computing, and Vera CPUs, a next‑generation family that can power both AI inference and general compute
- Systems and networking: Vera Rubin systems that bundle GPUs, CPUs, networking, and software into rack‑scale AI building blocks, along with Nvidia networking gear to connect these systems inside and across Meta's data centers
Meta will use these chips for both training and running AI models in its own facilities, while also tapping Nvidia Cloud Partner capacity when it needs additional compute, Nvidia said.
Related: Bank of America resets Nvidia stock forecast
Meta CEO Mark Zuckerberg said the expanded collaboration supports the company’s mission “to provide personal superintelligence to everyone across the globe,” referring back to a roadmap he laid out in July, CNBC reported.
Nvidia CEO Jensen Huang said Meta and Nvidia have worked in “deep co‑design” across GPUs, CPUs, networking, and software to build out this AI infrastructure, Nvidia said in its announcement.
In plain English, Meta is committing to ride Nvidia’s roadmap for at least the next few cycles.
The capex and chip supply story behind the Meta-Nvidia deal
Meta has been telegraphing huge AI infrastructure spending for months. This deal shows more clearly where that money is headed.
Meta has said it could invest as much as $135 billion in AI through 2026, a figure that includes data centers, chips, and supporting infrastructure, CNBC reported.
Ben Bajarin, CEO of Creative Strategies, told CNBC that this Nvidia expansion is “certainly in the tens of billions” and that “a significant portion of Meta’s capital expenditures will be directed towards this Nvidia expansion.”
More AI Stocks:
- Morgan Stanley sets jaw-dropping Micron price target after event
- Bank of America updates Palantir stock forecast after private meeting
- Morgan Stanley drops eye-popping Broadcom price target
This also matters for chip supply. Nvidia’s high‑end Blackwell GPUs have been back‑ordered, and Rubin production is just ramping up, CNBC noted. By signing a multiyear pact, Meta is effectively reserving a dedicated slice of Nvidia’s pipeline for itself.
Meta will be among the first to deploy full racks of Nvidia’s Vera Rubin systems, giving it an early advantage in rolling out new AI services, Yahoo Finance noted.
The deal also pushes Nvidia deeper into CPU territory that used to be dominated by Intel and AMD. Meta will launch its first large‑scale servers that use only Nvidia’s Grace CPUs and plans to roll out Vera CPU‑only systems by 2027, Yahoo Finance reported.
Those boxes will look more like traditional web‑scale servers but will be tuned for AI inference and efficiency, which could gradually shift some of Meta’s compute away from x86 processors.
This strategy “could pose challenges for Intel and AMD,” which now risk losing both GPU and CPU sockets as Nvidia expands, Yahoo Finance said.
How the market reacted, and what it signals
Short‑term stock reactions are not everything, but they can tell you how investors are reading the balance of power.
Nvidia’s stock rose about 1.9% in premarket trading after the deal, while Meta added roughly 0.7% and Broadcom slipped by about 0.1%, Barron’s reported. The paper said the agreement “should reduce concerns about demand for Nvidia’s data‑center chips,” especially after a period when some investors feared cloud customers might move more aggressively to custom silicon.
Nvidia’s shares “gained late in the session due to a deal with Meta in the AI sector,” Yahoo Finance wrote. The market viewed the partnership as evidence that Nvidia’s AI dominance “remains intact despite growing competition.”
For Broadcom, which has been pitching itself as an AI alternative with custom accelerators and networking silicon, this is a reminder that design wins at the largest customers are still hard to pry loose.
Broadcom is “closing the gap” on AI chips in some niches, Barron’s recently noted. However, the Meta pact shows that when a hyperscaler makes a multi‑cycle bet, Nvidia often remains the default.
I read that as a sign that the market still treats Nvidia as the safest way to capture AI infrastructure growth, even as it watches Broadcom, AMD, and others for upside surprises.
What this means if you own Nvidia or Meta
If you hold Nvidia, this is the sort of contract that helps you sleep a little better in a volatile market.
Bank of America recently lifted its Nvidia price target and said it expects strong demand for the Blackwell and GB200 data‑center platforms to drive multi‑year growth in Nvidia’s data‑center business, as summarized by TheStreet. A multiyear commitment from Meta to deploy millions of Nvidia chips and systems directly supports that view by tying a big slice of one hyperscaler’s AI capex to Nvidia’s roadmap.
The breadth of the deal also matters for Nvidia’s margin and competitive positioning. By delivering not just GPUs but also Grace and Vera CPUs and full Vera Rubin systems, Nvidia is selling more of the stack per rack. Meta will rely on Nvidia for “GPUs, CPUs, networking and full systems,” which means Nvidia can capture more dollars per watt and make it harder for competitors to swap in one piece at a time, CNBC highlighted.
For Meta investors, the story is more nuanced.
Meta’s stock has swung sharply as Wall Street tries to weigh the payoff from its massive AI spending. Meta had its worst day in three years in October, after outlining heavy AI capex, only to rebound in January on stronger sales guidance, CNBC reported.
Now, investors can see more clearly that a chunk of that capex is tied to scalable Nvidia infrastructure that can power Meta’s Llama models, recommendation systems, and future AI assistants.
At the same time, the partnership raises the bar for execution. If Meta can turn all this Nvidia horsepower into products people actually use and pay for, the spending will look like a smart, defensible moat. If not, shareholders will see tens of billions in capex and not enough incremental profit.
How to think about the Meta-Nvidia deal in your portfolio
As a retail investor, you do not need to model every chip in this deal to learn from it. There are a few clear takeaways you can use right away.
- Hyperscalers are choosing anchor chip partners.
- Nvidia is deepening its moat by moving up the stack.
- Rivals still have room, but the bar keeps rising.
If you are positioning around AI, I would think of Nvidia as the high‑beta infrastructure play whose fortunes are tied to hyperscaler capex, and Meta as a levered bet on whether all that infrastructure can translate into durable apps, ads, and AI services.
This Meta‑Nvidia deal tightens the connection between those stories. That is exactly why both stocks moved when “millions” of chips went from rumor to signed agreement.
Related: Meta makes major bet on AI secret weapon