
Meta Secures Millions of Nvidia AI Chips in Landmark Deal to Power Next-Gen Data Centers

Meta has signed a multiyear agreement with Nvidia to deploy millions of Grace CPUs and Blackwell and Rubin GPUs, marking the first large-scale standalone deployment of Nvidia's Grace architecture. The move underscores Meta’s aggressive push to scale generative AI infrastructure amid intensifying competition in the tech sector.

Meta Platforms Inc. has entered into a landmark multiyear agreement with Nvidia to acquire millions of next-generation AI chips, including Nvidia’s new Grace CPUs and its Blackwell and Rubin GPUs, according to a report by CNBC. The deal represents the first large-scale, standalone deployment of Nvidia’s Grace CPU architecture, signaling a strategic pivot toward optimizing energy efficiency and computational density in Meta’s global data center network. The transaction, valued at tens of billions of dollars, is expected to accelerate Meta’s AI training and inference capabilities across its flagship products, including Instagram, Facebook, WhatsApp, and its emerging metaverse initiatives.

The partnership, confirmed by The Wall Street Journal, highlights Nvidia’s growing dominance in the AI hardware market and Meta’s commitment to vertical integration of its AI infrastructure. Unlike previous deployments that combined Nvidia’s GPUs with third-party CPUs, this deal prioritizes Nvidia’s Grace CPU — a purpose-built, Arm-based processor designed for high-throughput AI workloads — paired exclusively with Blackwell and Rubin GPUs. According to Nvidia, the integrated Grace-Blackwell architecture delivers up to 30% better performance-per-watt compared to prior generations, a critical advantage as global data centers face mounting pressure to reduce carbon emissions and energy consumption.
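
For context, performance-per-watt is simply sustained throughput divided by power draw. The sketch below illustrates what a 30% improvement in that ratio means at rack scale; the throughput and power figures are invented for illustration only and are not numbers from Nvidia or Meta.

```python
# Hypothetical illustration of a 30% performance-per-watt gain.
# Baseline figures are assumptions for this sketch, not measured values.

baseline_tflops = 1_000.0   # assumed sustained throughput of a prior-gen rack
baseline_kw = 100.0         # assumed power draw of that rack

baseline_ppw = baseline_tflops / baseline_kw   # 10 TFLOPS per kW
improved_ppw = baseline_ppw * 1.30             # 30% better performance-per-watt

# The same gain can be read two ways:
power_for_same_work = baseline_tflops / improved_ppw  # ~77 kW for identical output
work_at_same_power = improved_ppw * baseline_kw       # ~1,300 TFLOPS at 100 kW

print(f"Baseline efficiency: {baseline_ppw:.1f} TFLOPS/kW")
print(f"Same throughput at {power_for_same_work:.0f} kW instead of {baseline_kw:.0f} kW")
print(f"Or {work_at_same_power:.0f} TFLOPS at unchanged power")
```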

Meta’s AI ambitions have surged in recent years, with the company investing heavily in open-source models like Llama and deploying AI-driven content moderation, recommendation engines, and real-time translation tools at unprecedented scale. The new chip deployment will support these efforts by enabling faster training cycles for multibillion-parameter models and reducing latency in user-facing AI applications. "This isn’t just about buying more chips," said a Meta infrastructure executive in remarks shared with internal teams. "It’s about building a new class of AI infrastructure that’s optimized from silicon to software."

Industry analysts note that Meta’s decision to adopt a vendor-specific, end-to-end architecture — rather than a mixed-hardware approach — reflects a broader trend among hyperscalers seeking tighter control over performance and supply chain resilience. "Meta is betting that Nvidia’s integrated stack will deliver predictable scaling and lower total cost of ownership over the long term," said Dr. Elena Rodriguez, senior analyst at TechInsights. "This deal could set a new benchmark for AI infrastructure procurement across the industry."

The agreement also comes amid growing scrutiny of AI’s environmental footprint. Meta has pledged to achieve net-zero emissions across its operations by 2030, and the improved energy efficiency of the Grace-Blackwell system aligns directly with that goal. Nvidia claims the new architecture cuts power consumption per AI inference task by nearly 40% compared to previous-generation systems, potentially trimming hundreds of megawatts from the power draw of Meta’s global data center footprint.
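
To put the claimed savings in perspective, the back-of-the-envelope calculation below shows how a 40% per-inference power reduction could scale to fleet level. The fleet power draw and the share of it devoted to inference are illustrative assumptions, not figures reported here.

```python
# Back-of-the-envelope estimate of fleet-level power savings.
# All inputs are hypothetical assumptions, not reported figures.

fleet_power_mw = 2_000       # assumed total data-center power draw, in MW
inference_share = 0.5        # assumed fraction of that load spent on inference
reduction_per_task = 0.40    # Nvidia's claimed per-inference power reduction

inference_load_mw = fleet_power_mw * inference_share
savings_mw = inference_load_mw * reduction_per_task

print(f"Assumed inference load: {inference_load_mw:.0f} MW")
print(f"Savings at constant workload: {savings_mw:.0f} MW")
# -> 400 MW under these assumptions, i.e. "hundreds of megawatts",
#    matching the order of magnitude cited above.
```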

While the exact chip volumes and delivery timeline remain confidential, sources indicate initial shipments will begin in late 2026, with full deployment expected by 2028. Meta’s internal roadmap reportedly calls for phasing out older GPU models in favor of the new stack, prioritizing Rubin’s advanced memory bandwidth and Blackwell’s transformer-optimized tensor cores for next-generation Llama models.

As competition intensifies between Meta, Google, Microsoft, and Amazon in the race for AI supremacy, this deal underscores the critical role of hardware sovereignty. With Nvidia now supplying the foundational compute layer for nearly all major AI players, the partnership may further consolidate the company’s position as the de facto enabler of the AI revolution — and deepen the industry’s reliance on a single supplier for the next decade of innovation.
