Intel’s Sapphire Rapids HBM ‘Xeon Scalable’ CPUs With 64 GB HBM2e Memory Offer Up To 3x Performance Increase Over Ice Lake Xeons

Intel has once again demonstrated its upcoming Sapphire Rapids HBM Xeon Scalable CPUs, featuring up to 64 GB of HBM2e memory, across various workloads.

Intel Promises 3x Performance Boost With Its Next-Gen Sapphire Rapids HBM 'Xeon Scalable' CPU Lineup

According to Intel, Sapphire Rapids-SP will come in two package variants: a standard configuration and an HBM configuration. The standard variant will feature a chiplet design composed of four XCC dies, each with a die size of around 400mm², for four dies in total on the top Sapphire Rapids-SP Xeon chip. The dies are interconnected via EMIB, which has a pitch of 55µm and a core pitch of 100µm.
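As a quick sanity check on those figures, the sketch below totals the compute silicon across the four XCC dies and compares the two pitch values quoted above; the numbers are taken directly from the text and are approximate, not official Intel specifications.

```python
# Back-of-the-envelope figures from the text above (approximate, not official specs).
xcc_die_area_mm2 = 400      # area of a single XCC die
xcc_die_count = 4           # four XCC dies on the top Sapphire Rapids-SP chip

total_compute_mm2 = xcc_die_area_mm2 * xcc_die_count
print(f"Total XCC compute silicon: ~{total_compute_mm2} mm^2")   # ~1600 mm^2

emib_pitch_um = 55          # EMIB pitch quoted in the text
core_pitch_um = 100         # core pitch quoted in the text
print(f"EMIB pitch is roughly {core_pitch_um / emib_pitch_um:.1f}x finer than the core pitch")
```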


The Intel Xeon processor code-named Sapphire Rapids with High Bandwidth Memory (HBM) is a great example of how we are leveraging advanced packaging technologies and silicon innovations to bring substantial performance, bandwidth, and power-saving improvements for HPC. With up to 64 gigabytes of high-bandwidth HBM2e memory in the package and accelerators integrated into the CPU, we’re able to unleash memory bandwidth-bound workloads while delivering significant performance improvements across key HPC use cases.

When comparing 3rd Gen Intel Xeon Scalable processors to the upcoming Sapphire Rapids HBM processors, we are seeing two to three times higher performance across weather research, energy, manufacturing, and physics workloads. At the keynote, Ansys CTO Prith Banerjee also showed that Sapphire Rapids HBM delivers up to a 2x performance increase on real-world workloads from Ansys Fluent and ParSeNet.

The standard Sapphire Rapids-SP Xeon chip will feature 10 EMIB interconnects, and the entire package will measure a mighty 4,446mm². Moving over to the HBM variant, the number of interconnects rises to 14, which are needed to connect the HBM2e memory to the cores.


The four HBM2e memory packages will feature 8-Hi stacks, so Intel is going with at least 16 GB of HBM2e memory per stack for a total of 64 GB across the Sapphire Rapids-SP package. As for the package itself, the HBM variant will measure an insane 5,700mm², or 28% larger than the standard variant. Compared to the recently leaked EPYC Genoa numbers, the HBM2e package for Sapphire Rapids-SP would end up 5% larger, while the standard package would be 22% smaller.

Intel Sapphire Rapids-SP Xeon (Standard Package) - 4,446mm²
Intel Sapphire Rapids-SP Xeon (HBM2e Package) - 5,700mm²
AMD EPYC Genoa (12 CCD Package) - 5,428mm²
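A minimal sketch to verify the capacity and area comparisons above, using only the figures listed in this article (the 22% figure corresponds to the EPYC Genoa package being about 22% larger than the standard Sapphire Rapids-SP package):

```python
# Verify the capacity and package-area comparisons using the figures listed above.
hbm_stacks = 4
gb_per_stack = 16
print(f"Total HBM2e capacity: {hbm_stacks * gb_per_stack} GB")            # 64 GB

spr_standard_mm2 = 4446     # Sapphire Rapids-SP, standard package
spr_hbm_mm2 = 5700          # Sapphire Rapids-SP, HBM2e package
epyc_genoa_mm2 = 5428       # AMD EPYC Genoa, 12-CCD package (leaked figure)

def pct_larger(a, b):
    """How much larger a is than b, as a percentage of b."""
    return (a - b) / b * 100

print(f"HBM2e vs. standard SPR package: {pct_larger(spr_hbm_mm2, spr_standard_mm2):.0f}% larger")    # ~28%
print(f"HBM2e SPR vs. EPYC Genoa:       {pct_larger(spr_hbm_mm2, epyc_genoa_mm2):.0f}% larger")      # ~5%
print(f"EPYC Genoa vs. standard SPR:    {pct_larger(epyc_genoa_mm2, spr_standard_mm2):.0f}% larger") # ~22%
```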

Intel also states that the EMIB link provides twice the bandwidth density and 4 times better power efficiency compared to standard package designs. Interestingly, Intel calls the latest Xeon lineup "logically monolithic", meaning that the interconnect offers the same functionality as a single die would, even though there are technically four chiplets interconnected together. You can read the full details regarding the standard 56-core, 112-thread Sapphire Rapids-SP Xeon CPUs here.

Intel Xeon CPU Families (Preliminary):

Family Branding: Diamond Rapids | Clearwater Forest | Granite Rapids | Sierra Forest | Emerald Rapids | Sapphire Rapids | Ice Lake-SP | Cooper Lake-SP | Cascade Lake-SP/AP | Skylake-SP
Process Node: Intel 20A? | Intel 18A | Intel 3 | Intel 3 | Intel 7 | Intel 7 | 10nm+ | 14nm++ | 14nm++ | 14nm+
Platform Name: Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Eagle Stream | Intel Eagle Stream | Intel Whitley | Intel Cedar Island | Intel Purley | Intel Purley
Core Architecture: Lion Cove? | Crestmont+ | Redwood Cove | Sierra Glen | Raptor Cove | Golden Cove | Sunny Cove | Cascade Lake | Cascade Lake | Skylake
MCP (Multi-Chip Package) SKUs: Yes | TBD | Yes | Yes | Yes | Yes | No | No | Yes | No
Socket: LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 | LGA 4677 | LGA 4189 | LGA 4189 | LGA 3647 | LGA 3647
Max Core Count: Up To 144? | Up To 288 | Up To 136? | Up To 288 | Up To 64? | Up To 56 | Up To 40 | Up To 28 | Up To 28 | Up To 28
Max Thread Count: Up To 288? | Up To 288 | Up To 272? | Up To 288 | Up To 128 | Up To 112 | Up To 80 | Up To 56 | Up To 56 | Up To 56
Max L3 Cache: TBD | TBD | TBD | 108 MB L3 | 320 MB L3 | 105 MB L3 | 60 MB L3 | 38.5 MB L3 | 38.5 MB L3 | 38.5 MB L3
Memory Support: Up To 12-Channel DDR6-7200? | TBD | Up To 12-Channel DDR5-6400 | Up To 8-Channel DDR5-6400? | Up To 8-Channel DDR5-5600 | Up To 8-Channel DDR5-4800 | Up To 8-Channel DDR4-3200 | Up To 6-Channel DDR4-3200 | 6-Channel DDR4-2933 | 6-Channel DDR4-2666
PCIe Gen Support: PCIe 6.0 (128 Lanes)? | TBD | PCIe 5.0 (136 Lanes) | PCIe 5.0 (TBD Lanes) | PCIe 5.0 (80 Lanes) | PCIe 5.0 (80 Lanes) | PCIe 4.0 (64 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes)
TDP Range (PL1): Up To 500W? | TBD | Up To 500W | Up To 350W | Up To 350W | Up To 350W | 105-270W | 150-250W | 165-205W | 140-205W
3D XPoint Optane DIMM: Donahue Pass? | TBD | Donahue Pass | TBD | Crow Pass | Crow Pass | Barlow Pass | Barlow Pass | Apache Pass | N/A
Competition: AMD EPYC Venice | AMD EPYC Zen 5C | AMD EPYC Turin | AMD EPYC Bergamo | AMD EPYC Genoa ~5nm | AMD EPYC Genoa ~5nm | AMD EPYC Milan 7nm+ | AMD EPYC Rome 7nm | AMD EPYC Rome 7nm | AMD EPYC Naples 14nm
Launch: 2025? | 2025 | 2024 | 2024 | 2023 | 2022 | 2021 | 2020 | 2018 | 2017
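Since the headline claim is about memory-bandwidth-bound workloads, a rough peak-bandwidth comparison helps put the HBM2e variant in context. The 8-channel DDR5-4800 figure comes from the table above; the HBM2e per-pin data rate of 3.2 Gb/s and the 1024-bit stack interface are assumed illustrative values, not figures from this article.

```python
# Rough peak-bandwidth comparison: 8-channel DDR5-4800 vs. four HBM2e stacks.
# The HBM2e pin speed (3.2 Gb/s) and 1024-bit stack width are assumptions for illustration.

# DDR5: 8 channels x 4800 MT/s x 8 bytes per transfer (64-bit data channel)
ddr5_gb_s = 8 * 4800e6 * 8 / 1e9
print(f"8-channel DDR5-4800 peak: ~{ddr5_gb_s:.0f} GB/s")               # ~307 GB/s

# HBM2e: 4 stacks x 1024 bits x assumed 3.2 Gb/s per pin, divided by 8 bits per byte
hbm_gb_s = 4 * 1024 * 3.2e9 / 8 / 1e9
print(f"Four HBM2e stacks peak:   ~{hbm_gb_s:.0f} GB/s")                # ~1638 GB/s

print(f"HBM2e advantage: ~{hbm_gb_s / ddr5_gb_s:.1f}x peak bandwidth")  # ~5.3x
```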
