Supermicro Introduces NVIDIA GH200-Based Server Platform for AI Workloads

Supermicro, Inc., a leading provider of AI, Cloud, Storage, and 5G/Edge IT solutions, has unveiled its latest lineup of GPU systems based on NVIDIA’s reference architecture. These systems are powered by the NVIDIA GH200 Grace Hopper Superchip, which tightly couples an NVIDIA Grace CPU and Hopper GPU to boost performance for AI workloads. The modular architecture of these systems provides flexibility and scalability, allowing for future expansion of GPUs, DPUs, and CPUs.

One of the key features of Supermicro’s new server platform is its advanced liquid-cooling technology, which enables high-density configurations and improved efficiency. This includes a 1U 2-node configuration that integrates two NVIDIA GH200 Grace Hopper Superchips with a high-speed interconnect. Combined with Supermicro’s manufacturing capacity, this design allows thousands of rack-scale AI servers to be delivered per month, with plug-and-play compatibility.

The collaboration between Supermicro and NVIDIA aims to accelerate the adoption of AI-enabled applications by offering highly modular and scalable systems. The new servers incorporate the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, NVIDIA BlueField DPUs, and PCIe 5.0 EDSFF slots.

Supermicro’s NVIDIA MGX platforms are designed to address the unique thermal, power, and mechanical challenges of AI-based servers. The new product line includes a range of servers that accommodate future AI technologies, such as the ARS-111GL-NHR, ARS-111GL-NHR-LCC, and ARS-111GL-DHNR-LCC models. These servers feature one or two NVIDIA GH200 Grace Hopper Superchips and can be enhanced with NVIDIA BlueField-3 DPU and/or NVIDIA ConnectX-7 interconnects for high-performance networking.

Technical specifications of Supermicro’s MGX systems include up to two NVIDIA GH200 Grace Hopper Superchips, each combining an NVIDIA H100 GPU and an NVIDIA Grace CPU with LPDDR5X and HBM3(e) memory. The modular architecture supports multiple PCIe 5.0 x16 slots for DPUs, additional GPUs, networking, and storage. Supermicro also offers NVIDIA networking solutions for secure and accelerated AI workloads.

The partnership between Supermicro and NVIDIA aims to bring highly optimized AI systems to market quickly and efficiently. These systems are designed to meet the evolving needs of AI technologies and provide customers with the performance and scalability required for their AI workloads.

Sources:

Supermicro’s NVIDIA GH200 Superchip-Based Server Platform Increases AI Workload Performance Using a Tightly Integrated CPU and GPU and Incorporates the Latest DPU Networking and Communication Technologies – Press Release

Supermicro’s NVIDIA MGX Platform Overview – Supermicro