NVIDIA Clean Sweeps MLPerf AI Benchmarks With Hopper H100 GPU, Up To 4.5x Performance Uplift Over Ampere A100
Feb 16, 2026 9:06 AM

NVIDIA's Hopper H100 GPU has made its debut on the MLPerf AI Benchmark list and shattered all previous records achieved by Ampere A100. While Hopper Tensor Core GPUs pave the way for the next big AI revolution, the Ampere A100 GPUs continue to showcase leadership performance in the mainstream AI application suite while Jetson AGX Orin leads in edge computing.

NVIDIA's AI Revolution Continues With Hopper H100 Tensor Core GPU Shattering All MLPerf Benchmarks, Delivering Up To 4.5x Performance Uplift Versus Last-Gen

Press Release: In their debut on the MLPerf industry-standard AI benchmarks, NVIDIA H100 Tensor Core GPUs set world records in inference on all workloads, delivering up to 4.5x more performance than previous-generation GPUs. The results demonstrate that Hopper is the premium choice for users who demand utmost performance on advanced AI models.

Offline scenario for data center and edge (Single GPU)

| Workload | NVIDIA H100 (Inferences/Second) | NVIDIA A100 (Inferences/Second) | NVIDIA Jetson AGX Orin (Max Inferences/Query) |
| --- | --- | --- | --- |
| DLRM (Recommender) | 695,298 | 314,992 | N/A* |
| BERT (Natural Language Processing)** | 7,921 | 1,757 | 558 |
| ResNet-50 v1.5 (Image Classification) | 81,292 | 41,893 | 6,164 |
| RetinaNet (Object Detection) | 960 | 592 | 60 |
| RNN-T (Speech Recognition) | 22,885 | 13,341 | 1,149 |
| 3D U-Net (Medical Imaging) | 5 | 3 | 0.5 |
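The headline "up to 4.5x" figure corresponds to the BERT row above (7,921 versus 1,757 inferences/second). As a quick sanity check, here is a minimal Python sketch that computes the per-workload H100-over-A100 ratios using only the figures from the table; the dictionary and its name are purely illustrative:

```python
# Per-workload speedup of H100 over A100, using the offline figures from the
# table above (inferences/second). Values are copied straight from the table;
# the dictionary itself is just an illustrative container.
offline_results = {
    "DLRM (Recommender)":              (695_298, 314_992),
    "BERT (NLP)":                      (7_921, 1_757),
    "ResNet-50 v1.5 (Classification)": (81_292, 41_893),
    "RetinaNet (Detection)":           (960, 592),
    "RNN-T (Speech)":                  (22_885, 13_341),
    "3D U-Net (Medical Imaging)":      (5, 3),
}

for workload, (h100, a100) in offline_results.items():
    print(f"{workload:34s} H100/A100: {h100 / a100:.1f}x")
# BERT comes out at roughly 4.5x, matching the headline claim; the other
# workloads land between about 1.6x and 2.2x.
```

That spread is consistent with the framing of the press release: the largest single jump is on the transformer-heavy BERT workload.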

Additionally, NVIDIA A100 Tensor Core GPUs and the NVIDIA Jetson AGX Orin module for AI-powered robotics continued to deliver overall leadership inference performance across all MLPerf tests: image and speech recognition, natural language processing, and recommender systems.

The H100, aka Hopper, raised the bar in per-accelerator performance across all six neural networks in the round. It demonstrated leadership in both throughput and speed in the separate server and offline scenarios. The NVIDIA Hopper architecture delivered up to 4.5x more performance than NVIDIA Ampere architecture GPUs, which continue to provide overall leadership in MLPerf results.


Thanks in part to its Transformer Engine, Hopper excelled on the popular BERT model for natural language processing. It’s among the largest and most performance-hungry of the MLPerf AI models. These inference benchmarks mark the first public demonstration of H100 GPUs, which will be available later this year. The H100 GPUs will participate in future MLPerf rounds for training.

A100 GPUs Show Leadership

NVIDIA A100 GPUs, available today from major cloud service providers and systems manufacturers, continued to show overall leadership in mainstream performance on AI inference in the latest tests. A100 GPUs won more tests than any submission in data center and edge computing categories and scenarios. In June, the A100 also delivered overall leadership in MLPerf training benchmarks, demonstrating its abilities across the AI workflow.

A featured image of the NVIDIA GA100 die.

Since their July 2020 debut on MLPerf, A100 GPUs have advanced their performance by 6x, thanks to continuous improvements in NVIDIA AI software. NVIDIA AI is the only platform to run all MLPerf inference workloads and scenarios in data center and edge computing.

Users Need Versatile Performance

The ability of NVIDIA GPUs to deliver leadership performance on all major AI models makes users the real winners. Their real-world applications typically employ many neural networks of different kinds.

For example, an AI application may need to understand a user’s spoken request, classify an image, make a recommendation, and then deliver a response as a spoken message in a human-sounding voice. Each step requires a different type of AI model.
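To make that concrete, below is a hypothetical sketch of such a pipeline; every function name and return value is an illustrative stand-in rather than a real NVIDIA or MLPerf API, and each stub marks where a different network type would run:

```python
# Hypothetical multi-model pipeline matching the example above. Each stage would
# normally be backed by a different neural network; the stubs here only show the
# shape of the data flow, not real models.

def speech_to_text(audio: bytes) -> str:          # speech recognition (e.g. an RNN-T-style model)
    return "show me running shoes"

def classify_image(image: bytes) -> list[str]:    # image classification (e.g. a ResNet-style model)
    return ["sneaker", "outdoor"]

def recommend(profile: dict, labels: list[str], query: str) -> str:  # recommender (e.g. a DLRM-style model)
    return "trail running shoes"

def text_to_speech(reply: str) -> bytes:          # speech synthesis for the spoken response
    return reply.encode("utf-8")                  # stand-in for synthesized audio

def handle_request(audio: bytes, image: bytes, profile: dict) -> bytes:
    query = speech_to_text(audio)
    labels = classify_image(image)
    suggestion = recommend(profile, labels, query)
    return text_to_speech(f"How about {suggestion}?")
```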

The MLPerf benchmarks cover these and other popular AI workloads and scenarios — computer vision, natural language processing, recommendation systems, speech recognition, and more. The tests ensure users will get performance that’s dependable and flexible to deploy.

Users rely on MLPerf results to make informed buying decisions because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Amazon, Arm, Baidu, Google, Harvard, Intel, Meta, Microsoft, Stanford, and the University of Toronto.

Orin Leads at the Edge

In edge computing, NVIDIA Orin ran every MLPerf benchmark, winning more tests than any other low-power system-on-a-chip. And it showed up to a 50% gain in energy efficiency compared to its debut on MLPerf in April. In the previous round, Orin ran up to 5x faster than the prior-generation Jetson AGX Xavier module, while delivering an average of 2x better energy efficiency.


Orin integrates an NVIDIA Ampere architecture GPU and a cluster of powerful Arm CPU cores into a single chip. It’s available today in the NVIDIA Jetson AGX Orin developer kit and in production modules for robotics and autonomous systems, and it supports the full NVIDIA AI software stack, including platforms for autonomous vehicles (NVIDIA Hyperion), medical devices (Clara Holoscan), and robotics (Isaac).
