Nvidia Brings Open-Source Innovation to AI Factories at OCP 2025

This week at the Open Compute Project (OCP) Global Summit, Nvidia shared its plans for what it calls “giga-scale artificial intelligence (AI) factories.”

These next-generation data centers are being built to handle massive AI workloads. With open standards, reduced power consumption, and scalable designs as the focus areas, Nvidia announced several innovations to the OCP community.

The news comes just a week after CEO Jensen Huang said that the demand for AI is “surging.”

Vera Rubin NVL144 for AI factories

During a press briefing for OCP, Nvidia unveiled the specs for its Vera Rubin NVL144 MGX open architecture rack and compute tray, designed for faster assembly, higher power capacity, and more efficient cooling in large-scale AI data centers.

The system uses a fully liquid-cooled design and replaces traditional cabling with a printed circuit midplane. Each tray includes modular expansion bays that support Nvidia’s ConnectX-9 800 Gbps networking and Rubin CPX GPUs.

More than 50 MGX system and component partners are developing products based on the Vera Rubin architecture. Nvidia said it will contribute the new rack and compute tray designs as open standards to the OCP, expanding on the MGX framework that already includes the GB200 NVL72 and GB300 NVL72 systems.

Delivering power directly from grid to rack

Nvidia and its partners are transitioning to a new 800-volt direct current (VDC) architecture, which replaces legacy 415-volt alternating current (VAC) systems used in today’s data centers. The 415 VAC systems require multiple AC-to-DC conversions between the grid and the rack, which wastes power and limits the scalability of a data center. 

This change moves power conversion upstream and brings 800 VDC directly to the rack. Data centers will benefit from higher efficiency, lower material usage, and increased GPU density per facility. The design, already used in electric vehicles and solar power systems, supports giga-scale AI factories. The architecture is also a foundation for Kyber, Nvidia’s next-generation rack server platform and successor to Oberon.
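To see why removing conversion stages matters, consider that end-to-end delivery efficiency is roughly the product of each stage's efficiency. The sketch below is a toy calculation with illustrative per-stage figures chosen for demonstration; they are assumptions, not numbers published by Nvidia or OCP.

```python
# Toy comparison of grid-to-rack power delivery efficiency.
# All per-stage efficiency values are illustrative assumptions.
from math import prod

# Legacy 415 VAC chain: several conversions between grid and GPU board.
legacy_stages = {
    "grid transformer": 0.98,
    "UPS (AC-DC-AC)": 0.94,
    "rack PSU (AC-DC)": 0.95,
    "DC-DC to board voltage": 0.97,
}

# 800 VDC chain: conversion moved upstream, fewer stages at the rack.
vdc_stages = {
    "grid rectifier (AC-DC)": 0.98,
    "DC-DC to board voltage": 0.97,
}

def chain_efficiency(stages: dict) -> float:
    """End-to-end efficiency is the product of per-stage efficiencies."""
    return prod(stages.values())

legacy = chain_efficiency(legacy_stages)
vdc = chain_efficiency(vdc_stages)

print(f"legacy 415 VAC chain: {legacy:.1%}")
print(f"800 VDC chain:        {vdc:.1%}")
```

With these assumed figures the shorter DC chain wastes noticeably less power per rack, and at giga-scale facility sizes even a few percentage points translate into megawatts.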

Several companies are already adopting it. For example, Foxconn’s 40-megawatt Kaohsiung-1 data center in Taiwan is being built around 800 VDC to support Kyber. CoreWeave, Lambda, Nebius, Oracle, and Together AI are taking a similar approach.

The shift requires tight coordination across every layer of the stack, and Nvidia is working with more than 20 industry partners to create a shared blueprint for the transition. The resulting architecture is well suited to the growing compute demands of inference, generative video, and advanced software coding.

Spectrum-X and Spectrum-XGS OCP alignment

Nvidia announced that Spectrum-XGS Ethernet, the latest addition to its Spectrum-X Ethernet platform, now supports OCP and SONiC standards.

Founded by Meta, OCP is an industry consortium that facilitates the sharing of data center product designs and best practices. SONiC is a Linux-based network operating system that runs on switches from multiple vendors. The update from Nvidia will allow organizations to build large, high-performance AI data centers using open-source hardware and software.

Spectrum-XGS, announced earlier this year, enables “scale across,” where multiple data centers across cities, states, or even continents can be connected and act as a single compute fabric. It uses algorithms that adapt the network to the distance between data centers, which helps reduce latency in distributed AI workloads. Meta and Oracle Cloud Infrastructure (OCI) are among the first to adopt the technology in OCP-based environments. HPE will also support Spectrum-XGS.
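The reason distance-aware networking matters comes down to physics: light in optical fiber covers roughly 200 km per millisecond, so geography sets a hard floor on latency between sites. The sketch below is a back-of-the-envelope illustration of that floor, not a model of Spectrum-XGS's actual algorithms, which Nvidia has not published in detail.

```python
# Rough propagation-delay floor between distant data centers.
# Light in fiber travels at about two-thirds of c, ~200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def one_way_delay_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay over fiber, ignoring switching/queuing."""
    return distance_km / FIBER_KM_PER_MS

# Example distances: metro, cross-country, intercontinental.
for km in (100, 1000, 5000):
    print(f"{km:>5} km -> {one_way_delay_ms(km):5.1f} ms one way")
```

Since this floor cannot be engineered away, distributed training and inference fabrics instead adapt congestion control and traffic scheduling to the measured distance, which is the class of problem "scale across" targets.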

NVLink Fusion expands to Intel and Samsung Foundry

Nvidia’s NVLink Fusion ecosystem is expanding to include Intel and Samsung Foundry, giving partners a way to integrate their CPUs and custom silicon directly into Nvidia’s infrastructure.

Intel will build x86 CPUs that connect to Nvidia platforms using NVLink Fusion. Samsung Foundry will provide design-to-manufacturing services for custom CPUs and specialized processors (XPUs) for AI workloads. Fujitsu is also integrating its Monaka-series CPUs with Nvidia GPUs using the same interface. Nvidia said these partnerships simplify system design and shorten development cycles, while accelerating time to market.

A unified, open approach to building AI factories

MGX is the company’s blueprint for modular, repeatable design. The same idea extends to networking, where Spectrum-XGS introduces open-source compatibility, allowing data center operators to integrate Nvidia products into OCP-based environments without requiring infrastructure overhauls.

While Nvidia has developed these technologies, it has always had a strong ecosystem to support its products. For example, the NVLink Fusion partnership with Intel aims to simplify the way CPUs and GPUs share workloads. Meanwhile, transitioning to the new 800 VDC architecture requires working closely with suppliers, vendors, and manufacturers.

Nvidia’s recent performance gains with the Blackwell platform further reflect its strategy. According to new benchmark results shared by Nvidia, Blackwell performs 15× faster than the earlier Hopper generation, thanks to deeper hardware and software co-design. Nvidia announced that additional software updates are forthcoming over the next six months, further expanding performance capabilities.

Ultimately, the recent announcements add up to a unified architecture that scales from individual components to entire facilities. The OCP-specific contributions move that technology into the open-source realm, enabling more organizations to adopt it.

Nvidia has been crystal clear that every new feature and partnership serves a greater purpose of making AI data centers easier to build, expand, and maintain over time — and that’s good for everyone.

In October, Nvidia also announced the release of DGX Spark, which is marketed as “the world’s smallest AI supercomputer.”

The post Nvidia Brings Open-Source Innovation to AI Factories at OCP 2025 appeared first on eWEEK.
