Intel Details Ponte Vecchio GPU & Sapphire Rapids HBM Performance, Up To 2.5x Faster Than NVIDIA A100

During Hot Chips 34, Intel once again detailed its Ponte Vecchio GPUs running on a Sapphire Rapids HBM server platform.

Intel Shows off Ponte Vecchio 2-Stack GPU & Sapphire Rapids HBM CPU Performance Against NVIDIA's A100

In the presentation by Intel Fellow & Chief GPU Compute Architect Hong Jiang, we get more details on the upcoming server powerhouses from the blue team. The Ponte Vecchio GPU comes in three configurations, starting with a single OAM and ranging up to an x4 subsystem with Xe Links, running either on its own or alongside a dual-socket Sapphire Rapids platform.

[Slide gallery: Intel's Hot Chips 34 presentation on Ponte Vecchio & Sapphire Rapids HBM]

The OAM supports all-to-all topologies for both 4-GPU and 8-GPU platforms. Complementing the entire platform is Intel's oneAPI software stack, which includes the Level Zero API, a low-level hardware interface that supports cross-architecture programming. Some of the main features of Level Zero include:

- Interface for oneAPI and other tools to accelerator devices
- Fine-grain control and low-latency access to accelerator capabilities
- Multi-threaded design
- For GPUs, ships as a part of the driver
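
To give a concrete sense of what cross-architecture programming on this stack looks like from the application side, below is a minimal SYCL 2020 vector-add sketch (my own illustration, not code from Intel's presentation). When built with the oneAPI DPC++ compiler, a kernel like this is typically dispatched to an Intel GPU through the Level Zero backend described above.

```cpp
// Minimal SYCL 2020 example: add two vectors on whatever accelerator is available.
// Illustrative only; assumes a SYCL 2020 compiler such as oneAPI DPC++.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q{sycl::default_selector_v};  // pick the default device (GPU if present)
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    {   // buffers manage host<->device data movement for the duration of this scope
        sycl::buffer<float> A{a}, B{b}, C{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor ra{A, h, sycl::read_only};
            sycl::accessor rb{B, h, sycl::read_only};
            sycl::accessor wc{C, h, sycl::write_only, sycl::no_init};
            h.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];  // one work-item per element
            });
        });
    }   // buffer destruction waits for the kernel and copies C back into c

    std::cout << "c[0] = " << c[0] << " (expected 3)\n";
    return 0;
}
```

The same source can also target CPUs or other vendors' accelerators, which is the cross-architecture angle oneAPI is pitching.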

Coming to the performance metrics, a 2-Stack Ponte Vecchio GPU configuration like the one featured on a single OAM is capable of delivering up to 52 TFLOPs of FP64/FP32 compute, 419 TFLOPs of TF32 (XMX Float 32), 839 TFLOPs of BF16/FP16, and 1678 TOPs of INT8 horsepower.
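
As a quick sanity check of those numbers (my own back-of-the-envelope arithmetic, not something Intel presented), the XMX figures scale almost exactly as you would expect: throughput roughly doubles each time the operand width is halved, and the TF32 matrix rate works out to roughly 8x the FP64/FP32 figure.

```cpp
// Ratio check of the quoted 2-stack Ponte Vecchio peak-throughput figures.
#include <cstdio>

int main() {
    const double fp64_fp32 = 52.0;   // TFLOPs, FP64/FP32
    const double tf32      = 419.0;  // TFLOPs, XMX TF32
    const double bf16_fp16 = 839.0;  // TFLOPs, XMX BF16/FP16
    const double int8      = 1678.0; // TOPs,  XMX INT8

    std::printf("TF32 / FP64 : %.1fx\n", tf32 / fp64_fp32);  // ~8.1x
    std::printf("BF16 / TF32 : %.1fx\n", bf16_fp16 / tf32);  // ~2.0x
    std::printf("INT8 / BF16 : %.1fx\n", int8 / bf16_fp16);  // ~2.0x
    return 0;
}
```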

[Slide gallery: further Hot Chips 34 presentation slides]

Intel also details the maximum cache sizes and the peak bandwidth offered by each of them. The Register File on the Ponte Vecchio GPU is 64 MB in size and offers 419 TB/s of bandwidth, the L1 cache also comes in at 64 MB and offers 105 TB/s (4:1), the L2 cache comes in at 408 MB and offers 13 TB/s of bandwidth (8:1), and the HBM memory pools up to 128 GB and offers 4.2 TB/s of bandwidth (4:1). There is also a range of compute efficiency techniques within Ponte Vecchio, listed below (with a rough look at what those bandwidth figures imply after the list):

Register File:

- Register Caching
- Accumulators

L1/L2 Cache:

- Write Through
- Write Back
- Write Streaming
- Uncached

Prefetch:

- Software (instruction) prefetch to L1 and/or L2
- Command Streamer prefetch to L2 for instruction and data
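
To put the cache bandwidth figures above into perspective, here is a rough roofline-style estimate (my own derivation from the numbers in this article, not an Intel slide): it works out how many FP64 operations a kernel needs to perform per byte it moves from each level before the 52 TFLOPs of compute, rather than that level's bandwidth, becomes the limit.

```cpp
// Break-even arithmetic intensity per memory level, using the quoted peak figures.
#include <cstdio>

int main() {
    const double fp64_tflops = 52.0;  // peak FP64 compute of a 2-stack OAM

    struct Level { const char* name; double tb_per_s; };
    const Level levels[] = {
        {"Register File", 419.0},
        {"L1 cache",      105.0},
        {"L2 cache",       13.0},
        {"HBM",             4.2},
    };

    for (const Level& lv : levels) {
        // FLOPs per byte at which compute time equals data-movement time.
        const double flops_per_byte = fp64_tflops / lv.tb_per_s;
        std::printf("%-13s %6.1f TB/s -> ~%4.1f FP64 FLOPs per byte to stay compute-bound\n",
                    lv.name, lv.tb_per_s, flops_per_byte);
    }
    return 0;
}
```

In other words, a kernel streaming straight out of HBM needs roughly 12 FP64 operations per byte to keep the compute units fed, while one whose working set fits in the 408 MB L2 needs only about 4, which is exactly the kind of reuse the caching and prefetch techniques above are meant to enable.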

Intel explains that the larger L2 cache can deliver some huge gains in workloads such as a 2D-FFT case and a DNN case. Performance comparisons between a full Ponte Vecchio GPU and a module with its L2 cache down-configured to 80 MB and 32 MB have also been shown.

[Slide gallery: further Hot Chips 34 presentation slides]

But that's not all: Intel also has performance comparisons between the NVIDIA Ampere A100 running CUDA and SYCL and its own Ponte Vecchio GPU running SYCL. In miniBUDE, a computational workload that predicts the binding energy of a ligand with its target, the Ponte Vecchio GPU completes the simulation 2 times faster than the Ampere A100. There's another performance metric in ExaSMR (a small modular reactor simulation workload for large nuclear reactor designs); here, the Intel GPU is shown to offer a 1.5x performance lead over the NVIDIA GPU.

It is a bit interesting that Intel is still comparing its Ponte Vecchio GPUs to the Ampere A100, given that the green team has since launched its next-gen Hopper H100 and is already shipping it to customers. If Chipzilla is confident in its 2-2.5x performance figures, then it shouldn't have much trouble competing with Hopper either.

Here's Everything We Know About The Intel 7 Powered Ponte Vecchio GPUs

Moving over to the Ponte Vecchio specs, Intel outlined some key features of its flagship data center GPU such as 128 Xe cores, 128 RT units, HBM2e memory, and a total of 8 Xe-HPC GPUs that will be connected together. The chip will feature up to 408 MB of L2 cache in two separate stacks that will connect via the EMIB interconnect. The chip will feature multiple dies based on Intel's own 'Intel 7' process and TSMC's N7 / N5 process nodes.

Intel also previously detailed the package and die sizes of its flagship Ponte Vecchio GPU based on the Xe-HPC architecture. The chip will consist of 2 stacks with 16 active dies per stack. The maximum active top die (compute tile) size is going to be 41mm2 while the base die sits at 650mm2. We have all the chiplets and process nodes that the Ponte Vecchio GPUs will utilize listed below:

- Intel 7nm
- TSMC 7nm
- Foveros 3D Packaging
- EMIB
- 10nm Enhanced SuperFin
- Rambo Cache
- HBM2

Following is how Intel gets to 47 tiles (16 + 8 + 2 + 11 + 2 + 8) on the Ponte Vecchio chip:

- 16 Xe HPC (internal/external)
- 8 Rambo (internal)
- 2 Xe Base (internal)
- 11 EMIB (internal)
- 2 Xe Link (external)
- 8 HBM (external)

The Ponte Vecchio GPU makes use of 8 HBM 8-Hi stacks and contains a total of 11 EMIB interconnects. The whole Intel Ponte Vecchio package measures 4843.75mm2. It is also mentioned that the bump pitch for Meteor Lake CPUs using High-Density 3D Foveros packaging will be 36µm.

The Ponte Vecchio GPU is not one chip but a combination of several chips. It's a chiplet powerhouse, packing more chiplets than any other GPU or CPU out there, 47 to be precise. And these are not based on just one process node but on several process nodes, as we detailed just a few days back.

Although the Aurora supercomputer in which the Ponte Vecchio GPUs and Sapphire Rapids CPUs are to be used has been pushed back due to several delays on the blue team's part, it is still good to see the company offering more details. Intel has since teased its next-generation Rialto Bridge GPU as the successor to Ponte Vecchio, which is said to begin sampling in 2023. You can read more details on that here.

Next-Gen Data Center GPU Accelerators

GPU Name | AMD Instinct MI250X | NVIDIA Hopper GH100 | Intel Ponte Vecchio | Intel Rialto Bridge
Packaging Design | MCM (Infinity Fabric) | Monolithic | MCM (EMIB + Foveros) | MCM (EMIB + Foveros)
GPU Architecture | Aldebaran (CDNA 2) | Hopper GH100 | Xe-HPC | Xe-HPC
GPU Process Node | 6nm | 4N | 7nm (Intel 4) | 5nm (Intel 3)?
GPU Cores | 14,080 | 16,896 | 16,384 ALUs (128 Xe Cores) | 20,480 ALUs (160 Xe Cores)
GPU Clock Speed | 1700 MHz | ~1780 MHz | TBA | TBA
L2 / L3 Cache | 2 x 8 MB | 50 MB | 2 x 204 MB | TBA
FP16 Compute | 383 TOPs | 2000 TFLOPs | TBA | TBA
FP32 Compute | 95.7 TFLOPs | 1000 TFLOPs | ~45 TFLOPs (A0 Silicon) | TBA
FP64 Compute | 47.9 TFLOPs | 60 TFLOPs | TBA | TBA
Memory Capacity | 128 GB HBM2E | 80 GB HBM3 | 128 GB HBM2e | 128 GB HBM3?
Memory Clock | 3.2 Gbps | 3.2 Gbps | TBA | TBA
Memory Bus | 8192-bit | 5120-bit | 8192-bit | 8192-bit
Memory Bandwidth | 3.2 TB/s | 3.0 TB/s | ~3 TB/s | ~3 TB/s
Form Factor | OAM | OAM | OAM | OAM v2
Cooling | Passive / Liquid Cooling | Passive / Liquid Cooling | Passive / Liquid Cooling | Passive / Liquid Cooling
TDP | 560W | 700W | 600W | 800W
Launch | Q4 2021 | 2H 2022 | 2022? | 2024?
