Intel has once again demonstrated its upcoming Sapphire Rapids HBM Xeon Scalable CPUs, which pack up to 64 GB of HBM2e memory, across a range of workloads.
Intel Promises 3x Performance Boost With Its Next-Gen Sapphire Rapids HBM 'Xeon Scalable' CPU Lineup
According to Intel, Sapphire Rapids-SP will come in two package variants: a standard configuration and an HBM configuration. The standard variant will feature a chiplet design composed of four XCC dies, each measuring around 400mm2, for four dies in total on the top Sapphire Rapids-SP Xeon chip. The dies will be interconnected via EMIB, which has a pitch size of 55 microns and a core pitch of 100 microns.

The Intel Xeon processor code-named Sapphire Rapids with High Bandwidth Memory (HBM) is a great example of how we are leveraging advanced packaging technologies and silicon innovations to bring substantial performance, bandwidth, and power-saving improvements for HPC. With up to 64 gigabytes of high-bandwidth HBM2e memory in the package and accelerators integrated into the CPU, we’re able to unleash memory bandwidth-bound workloads while delivering significant performance improvements across key HPC use cases.
When comparing 3rd Gen Intel Xeon Scalable processors to the upcoming Sapphire Rapids HBM processors, we are seeing two- to three-times performance increases across weather research, energy, manufacturing, and physics workloads. At the keynote, Ansys CTO Prith Banerjee also showed that Sapphire Rapids HBM delivers up to a 2x performance increase on real-world workloads from Ansys Fluent and ParSeNet.
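To put those bandwidth-bound gains in rough perspective, the sketch below compares the theoretical peak memory bandwidth of an 8-channel DDR5-4800 configuration (per the spec table further down) against four in-package HBM2e stacks. The HBM2e per-pin rate used here is an assumed typical figure, not one Intel has confirmed, so treat the numbers as a back-of-the-envelope illustration only.

```python
# Rough peak memory bandwidth comparison (GB/s), illustrating why in-package
# HBM helps bandwidth-bound HPC workloads. DDR5-4800 x 8 channels matches the
# spec table below; the HBM2e per-pin rate is an assumed typical value.

DDR5_MTS = 4800              # mega-transfers/s per channel
DDR5_CHANNELS = 8
BYTES_PER_TRANSFER = 8       # 64-bit channel

HBM2E_GBPS_PER_PIN = 3.2     # assumption, not confirmed by Intel
HBM2E_PINS_PER_STACK = 1024  # standard HBM2e interface width per stack
HBM2E_STACKS = 4

ddr5_gbs = DDR5_MTS * 1e6 * BYTES_PER_TRANSFER * DDR5_CHANNELS / 1e9
hbm_gbs = HBM2E_GBPS_PER_PIN * HBM2E_PINS_PER_STACK / 8 * HBM2E_STACKS

print(f"8-channel DDR5-4800: ~{ddr5_gbs:.0f} GB/s")       # ~307 GB/s
print(f"4x HBM2e stacks:     ~{hbm_gbs:.0f} GB/s")        # ~1638 GB/s
print(f"Ratio:               ~{hbm_gbs / ddr5_gbs:.1f}x")  # ~5x
```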

The standard Sapphire Rapids-SP Xeon chip will feature 10 EMIB interconnects, and the entire package will measure a mighty 4446mm2. Moving over to the HBM variant, the interconnect count increases to 14, which are needed to wire the HBM2E memory to the cores.

The four HBM2E memory packages will feature 8-Hi stacks, so Intel is going for at least 16 GB of HBM2E memory per stack for a total of 64 GB across the Sapphire Rapids-SP package. As for the package itself, the HBM variant will measure an insane 5700mm2, or 28% larger than the standard variant. Compared to the recently leaked EPYC Genoa numbers, the HBM2E package for Sapphire Rapids-SP would end up around 5% larger, while the standard package would be around 18% smaller.
- Intel Sapphire Rapids-SP Xeon (Standard Package) - 4446mm2
- Intel Sapphire Rapids-SP Xeon (HBM2E Package) - 5700mm2
- AMD EPYC Genoa (12 CCD Package) - 5428mm2
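For reference, the capacity and relative package-size figures above can be reproduced with a quick calculation; the sketch below uses only the areas and stack capacity quoted in this article.

```python
# Quick arithmetic check of the capacity and package-size figures quoted above.
packages_mm2 = {
    "SPR-SP (Standard)":   4446,
    "SPR-SP (HBM2E)":      5700,
    "EPYC Genoa (12 CCD)": 5428,
}

# HBM2E capacity: four 8-Hi stacks at 16 GB each.
stacks, gb_per_stack = 4, 16
print(f"Total HBM2E: {stacks} x {gb_per_stack} GB = {stacks * gb_per_stack} GB")

std, hbm, genoa = packages_mm2.values()
print(f"HBM2E vs Standard package: {((hbm / std) - 1) * 100:+.0f}%")    # ~+28%
print(f"HBM2E vs Genoa package:    {((hbm / genoa) - 1) * 100:+.0f}%")  # ~+5%
print(f"Standard vs Genoa package: {((std / genoa) - 1) * 100:+.0f}%")  # ~-18%
```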
Intel also states that the EMIB link provides 2x the bandwidth density and 4x better power efficiency compared to standard package designs. Interestingly, Intel calls the latest Xeon lineup 'logically monolithic', meaning the interconnect offers the same functionality as a single die would, even though there are technically four chiplets interconnected together. You can read the full details regarding the standard 56-core & 112-thread Sapphire Rapids-SP Xeon CPUs here.
Intel Xeon CPU Families (Preliminary):
| Family Branding | Diamond Rapids | Clearwater Forest | Granite Rapids | Sierra Forest | Emerald Rapids | Sapphire Rapids | Ice Lake-SP | Cooper Lake-SP | Cascade Lake-SP/AP | Skylake-SP |
|---|---|---|---|---|---|---|---|---|---|---|
| Process Node | Intel 20A? | Intel 18A | Intel 3 | Intel 3 | Intel 7 | Intel 7 | 10nm+ | 14nm++ | 14nm++ | 14nm+ |
| Platform Name | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Eagle Stream | Intel Eagle Stream | Intel Whitley | Intel Cedar Island | Intel Purley | Intel Purley |
| Core Architecture | Lion Cove? | Crestmont+ | Redwood Cove | Sierra Glen | Raptor Cove | Golden Cove | Sunny Cove | Cascade Lake | Cascade Lake | Skylake |
| MCP (Multi-Chip Package) SKUs | Yes | TBD | Yes | Yes | Yes | Yes | No | No | Yes | No |
| Socket | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 | LGA 4677 | LGA 4189 | LGA 4189 | LGA 3647 | LGA 3647 |
| Max Core Count | Up To 144? | Up To 288 | Up To 136? | Up To 288 | Up To 64? | Up To 56 | Up To 40 | Up To 28 | Up To 28 | Up To 28 |
| Max Thread Count | Up To 288? | Up To 288 | Up To 272? | Up To 288 | Up To 128 | Up To 112 | Up To 80 | Up To 56 | Up To 56 | Up To 56 |
| Max L3 Cache | TBD | TBD | TBD | 108 MB L3 | 320 MB L3 | 105 MB L3 | 60 MB L3 | 38.5 MB L3 | 38.5 MB L3 | 38.5 MB L3 |
| Memory Support | Up To 12-Channel DDR6-7200? | TBD | Up To 12-Channel DDR5-6400 | Up To 8-Channel DDR5-6400? | Up To 8-Channel DDR5-5600 | Up To 8-Channel DDR5-4800 | Up To 8-Channel DDR4-3200 | Up To 6-Channel DDR4-3200 | Up To 6-Channel DDR4-2933 | Up To 6-Channel DDR4-2666 |
| PCIe Gen Support | PCIe 6.0 (128 Lanes)? | TBD | PCIe 5.0 (136 Lanes) | PCIe 5.0 (TBD Lanes) | PCIe 5.0 (80 Lanes) | PCIe 5.0 (80 lanes) | PCIe 4.0 (64 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) |
| TDP Range (PL1) | Up To 500W? | TBD | Up To 500W | Up To 350W | Up To 350W | Up To 350W | 105-270W | 150W-250W | 165W-205W | 140W-205W |
| 3D Xpoint Optane DIMM | Donahue Pass? | TBD | Donahue Pass | TBD | Crow Pass | Crow Pass | Barlow Pass | Barlow Pass | Apache Pass | N/A |
| Competition | AMD EPYC Venice | AMD EPYC Zen 5C | AMD EPYC Turin | AMD EPYC Bergamo | AMD EPYC Genoa ~5nm | AMD EPYC Genoa ~5nm | AMD EPYC Milan 7nm+ | AMD EPYC Rome 7nm | AMD EPYC Rome 7nm | AMD EPYC Naples 14nm |
| Launch | 2025? | 2025 | 2024 | 2024 | 2023 | 2022 | 2021 | 2020 | 2018 | 2017 |