{"id":5364,"date":"2025-10-14T17:12:18","date_gmt":"2025-10-14T17:12:18","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5364"},"modified":"2025-10-14T17:12:18","modified_gmt":"2025-10-14T17:12:18","slug":"nvidia-brings-open-source-innovation-to-ai-factories-at-ocp-2025","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5364","title":{"rendered":"Nvidia Brings Open-Source Innovation to AI Factories at OCP 2025"},"content":{"rendered":"<p>This week at the Open Compute Project (OCP) Global Summit, Nvidia shared its plans for what it calls \u201cgiga-scale artificial intelligence (AI) factories.\u201d<\/p>\n<p>These next-generation data centers are being built to handle massive AI workloads. With open standards, reduced power consumption, and scalable designs as the focus areas, Nvidia announced several innovations to the OCP community.<\/p>\n<p>The news comes just a week after CEO Jensen Huang said that <a href=\"https:\/\/www.eweek.com\/news\/nvidia-huang-ai-demand-surging\/\">the demand for AI is \u201csurging.\u201d<\/a><\/p>\n<h2 class=\"wp-block-heading\">Vera Rubin NVL144 for AI factories<\/h2>\n<p>During a press briefing for OCP, <a href=\"https:\/\/www.eweek.com\/news\/nvidia-jetson-thor\/\">Nvidia<\/a> unveiled the specs for its Vera Rubin NVL144 MGX open architecture rack and compute tray, designed for faster assembly, higher power capacity, and more efficient cooling in large-scale AI data centers.<\/p>\n<p>The system uses a fully liquid-cooled design and replaces traditional cabling with a printed circuit midplane. Each tray includes modular expansion bays that support Nvidia\u2019s ConnectX-9 800 Gbps networking and <a href=\"https:\/\/www.techrepublic.com\/article\/news-nvidia-vera-rubin-cpx\/\" target=\"_blank\" rel=\"noopener\">Rubin CPX GPUs<\/a>.\u00a0<\/p>\n<p>More than 50 MGX system and component partners are developing products based on the Vera Rubin architecture. 
Nvidia said it will contribute the new rack and compute tray designs as open standards to the <a href=\"https:\/\/www.opencompute.org\/summit\/global-summit\" target=\"_blank\" rel=\"noopener\">OCP<\/a>, expanding on the MGX framework that already includes the GB200 NVL72 and GB300 NVL72 systems.<\/p>\n<h2 class=\"wp-block-heading\">Delivering power directly from grid to rack<\/h2>\n<p><a href=\"https:\/\/www.techrepublic.com\/article\/news-openai-nvidia-data-center-deal\/\" target=\"_blank\" rel=\"noopener\">Nvidia and its partners<\/a> are transitioning to a new 800-volt direct current (VDC) architecture, which replaces legacy 415-volt alternating current (VAC) systems used in today\u2019s data centers. The 415 VAC systems require multiple AC-to-DC conversions between the grid and the rack, which wastes power and limits the scalability of a data center.\u00a0<\/p>\n<p>This change moves power conversion upstream and brings 800 VDC directly to the rack. Data centers will benefit from higher efficiency, lower material usage, and increased GPU density per facility. The design, already used in electric vehicles and solar power systems, supports giga-scale AI factories. The architecture is also a foundation for Kyber, Nvidia\u2019s next-generation rack server platform and successor to Oberon.<\/p>\n<p>Several companies are already adopting it. For example, Foxconn\u2019s 40-megawatt Kaohsiung-1 data center in Taiwan is being built around 800 VDC to support Kyber. CoreWeave, Lambda, Nebius, <a href=\"https:\/\/www.techrepublic.com\/article\/news-oracle-deploy-50k-amd-ai-chips\/\" target=\"_blank\" rel=\"noopener\">Oracle<\/a>, and Together AI are taking a similar approach.<\/p>\n<p>The shift requires tight coordination across every layer of the stack. Nvidia is working with more than 20 industry partners to create a shared blueprint for this transition. 
Nvidia says the architecture is well suited to the growing compute demands of inference, generative video, and advanced software coding.<\/p>\n<h2 class=\"wp-block-heading\">Spectrum-X and Spectrum-XGS OCP alignment<\/h2>\n<p>Nvidia announced that Spectrum-XGS Ethernet, the latest addition to its Spectrum-X Ethernet platform, now supports OCP and SONiC standards.<\/p>\n<p>Founded by Meta, OCP is an industry consortium that facilitates the sharing of data center product designs and best practices. SONiC is a Linux-based network operating system that runs on switches from multiple vendors. The update will allow organizations to build large, high-performance AI data centers on open-source hardware and software.<\/p>\n<p>Spectrum-XGS, announced earlier this year, enables \u201cscale across,\u201d in which multiple data centers spanning cities, states, or even continents are connected to act as a single compute fabric. It uses algorithms that adapt the network to the distance between data centers, which helps reduce latency in distributed AI workloads. Meta and Oracle Cloud Infrastructure (OCI) are among the first to adopt the technology in OCP-based environments. HPE will also support Spectrum-XGS.<\/p>\n<h2 class=\"wp-block-heading\">NVLink Fusion expands to Intel and Samsung Foundry<\/h2>\n<p>Nvidia\u2019s NVLink Fusion ecosystem is expanding, giving partners, including Intel and Samsung Foundry, a way to integrate their CPUs and custom silicon directly into Nvidia\u2019s infrastructure.<\/p>\n<p>Intel will build x86 CPUs that connect to Nvidia platforms using NVLink Fusion. Samsung Foundry will provide design-to-manufacturing services for custom CPUs and specialized processors (XPUs) for AI workloads. Fujitsu is also integrating its Monaka-series CPUs with Nvidia GPUs over the same interface. 
Nvidia said these partnerships simplify system design and shorten development cycles while accelerating time to market.<\/p>\n<h2 class=\"wp-block-heading\">A unified, open approach to building AI factories<\/h2>\n<p>MGX is the company\u2019s blueprint for modular, repeatable design. The same idea extends to networking, where Spectrum-XGS introduces open-source compatibility, allowing data center operators to integrate Nvidia products into OCP-based environments without infrastructure overhauls.<\/p>\n<p>While <a href=\"https:\/\/blogs.nvidia.com\/blog\/ai-energy-innovation-climate-research\/\" target=\"_blank\" rel=\"noopener\">Nvidia<\/a> has developed these technologies, it has always relied on a strong ecosystem to support its products. For example, the NVLink Fusion partnership with Intel aims to simplify the way CPUs and GPUs share workloads, while the transition to the new 800 VDC architecture requires working closely with suppliers, vendors, and manufacturers.<\/p>\n<p>Nvidia\u2019s recent performance gains <a href=\"https:\/\/www.techrepublic.com\/article\/nvidia-blackwell-gpus-sold-out-demand-surges\/\" target=\"_blank\" rel=\"noopener\">with the Blackwell platform<\/a> further reflect this strategy. According to new benchmark results shared by Nvidia, Blackwell performs 15\u00d7 faster than the earlier Hopper generation, thanks to deeper hardware and software co-design. Nvidia said additional software updates over the next six months will expand performance further.<\/p>\n<p>Ultimately, the recent announcements add up to a common architecture that scales from individual components to entire facilities. 
The OCP-specific announcements move the technology into the open-source realm, enabling more organizations to adopt it.<\/p>\n<p>Nvidia has been clear that every new feature and partnership serves the larger goal of making AI data centers easier to build, expand, and maintain over time, and that\u2019s good for everyone.<\/p>\n<p><strong>In October, Nvidia also announced the release of DGX Spark, which is marketed as \u201c<\/strong><a href=\"https:\/\/www.techrepublic.com\/article\/news-nvidia-ai-supercomputer\/\" target=\"_blank\" rel=\"noopener\"><strong>the world\u2019s smallest AI supercomputer<\/strong><\/a><strong>.\u201d<\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/news-nvidia-at-ocp\/\">Nvidia Brings Open-Source Innovation to AI Factories at OCP 2025<\/a> appeared first on <a href=\"https:\/\/www.eweek.com\/\">eWEEK<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>This week at the Open Compute Project (OCP) Global Summit, Nvidia shared its plans for what it calls \u201cgiga-scale artificial intelligence (AI) factories.\u201d These next-generation data centers are being built to handle massive AI workloads. 
With open standards, reduced power consumption, and scalable designs as the focus areas, Nvidia announced several innovations to the OCP [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-5364","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5364"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5364"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5364\/revisions"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5364"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5364"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5364"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}