
Nvidia Leads in MLPerf Benchmarks: The Power of Grace Hopper Superchips and L4 GPUs

Nvidia has once again demonstrated its dominance in the AI sector in the latest MLPerf benchmarks. The company led across multiple categories, with its HGX H100 systems delivering the highest throughput on every MLPerf inference test. These systems, equipped with eight H100 GPUs, outperformed the competition in computer vision, speech recognition, medical imaging, recommendation systems, and large language models.

One of the highlights of the benchmark results was the performance of the GH200 Grace Hopper Superchip, which pairs a Hopper GPU with a Grace CPU and offers more memory, higher bandwidth, and the ability to shift power between the CPU and GPU as a workload demands. Because the two processors share a single memory domain, explicit data transfers between CPU and GPU are unnecessary, yielding significant performance gains.

The GH200 was also shown to outperform the Nvidia H100 SXM (a GH100 Hopper GPU) in various MLPerf test cases. Thanks to its larger memory capacity and greater memory bandwidth provided by 96 GB of HBM3, the GH200 enabled larger batch sizes for workloads. For example, RetinaNet and DLRMv2 achieved up to double the batch sizes in the Server scenario and 50% greater batch sizes in the Offline scenario.

Another noteworthy addition to the MLPerf benchmarks was the L4 GPU from Nvidia. This low-power (72W) GPU, based on the Ada Lovelace architecture, demonstrated impressive performance across different workloads. The L4 GPUs, available from known system builders and Google Cloud, offer a cost-effective and readily available solution for edge inference and general-purpose GPU computing.

In the face of GPU shortages, the L4 presents an opportunity to secure GPU capacity and accelerate processing. It outperforms CPUs in applications such as molecular dynamics simulations and fusion physics, delivering up to 46x the performance of typical CPU nodes.

Overall, Nvidia's success in the MLPerf benchmarks, driven by the Grace Hopper Superchip and the L4 GPU, points to continued demand for its high-end products. There are, however, still plenty of opportunities to harness GPU performance at the low end of the market.

Sources: HPCwire

Definitions:

  • MLPerf: A benchmark suite for measuring the performance of machine learning systems.
  • Inference: The process of using a trained AI model to make predictions or decisions based on new data.
  • GPU: Graphics Processing Unit, a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images on a display device.
  • HBM3: High-Bandwidth Memory generation 3, a type of memory technology designed for high-performance GPUs.
  • Superchip: A high-performance integrated circuit that combines multiple components or functionalities.
  • L4 GPU: Nvidia's low-power GPU based on the Ada Lovelace architecture.