A huge range of Intel Sapphire Rapids-SP Xeon CPUs has been detailed, covering their specs and positioning within the server platform. The specs were shared by leaker YuuKi_AnS & include at least 23 SKUs that will be part of the family later this year.
Intel Sapphire Rapids-SP Xeon CPU Lineup Specs & Tiers Detailed, At Least 23 SKUs In The Works
The Sapphire Rapids-SP family will be replacing the Ice Lake-SP family and will be built entirely on the 'Intel 7' process node (formerly 10nm Enhanced SuperFin) that will be making its formal debut later this year in the Alder Lake consumer family. The server lineup will feature the performance-optimized Golden Cove core architecture, which delivers around a 20% IPC improvement over the Willow Cove core architecture. The cores are spread across multiple tiles that are packaged together using EMIB.
Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel Corporation, displays a wafer holding 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) before the opening of Intel Vision 2022 on May 10 in Dallas. (Credit: Walden Kirsch/Intel Corporation)
Intel Sapphire Rapids-SP 'Vanilla Xeon' CPUs:
For Sapphire Rapids-SP, Intel is using a quad-tile chiplet design which will come in HBM and non-HBM flavors. While each tile is its own unit, the chip acts as one singular SoC, and each thread has full access to all resources on all tiles, consistently providing low latency & high cross-section bandwidth across the entire SoC.
We have already taken an in-depth look at the P-Core over here, but some of the key changes coming to the data center platform include AMX, AiA, FP16, and CLDEMOTE capabilities. The Accelerator Engines will increase the effectiveness of each core by offloading common tasks to dedicated accelerator engines, increasing performance & reducing the time taken to complete those tasks.
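For readers who want to verify whether a given Linux machine actually exposes these new instructions, here is a minimal sketch that simply reads the CPU feature flags. It is an illustration rather than any official Intel tooling; the flag names (amx_tile, amx_bf16, amx_int8, avx512_fp16, cldemote, enqcmd) follow the Linux kernel's naming and only appear on new-enough kernels and hardware:

```python
# Rough sketch (Linux-only): check /proc/cpuinfo for the new instruction-set
# extensions mentioned above. Flag names follow the Linux kernel's conventions
# and only show up on supporting hardware and kernel versions.
FLAGS_OF_INTEREST = {
    "amx_tile":    "AMX tile registers",
    "amx_bf16":    "AMX BF16 matrix math",
    "amx_int8":    "AMX INT8 matrix math",
    "avx512_fp16": "AVX-512 FP16",
    "cldemote":    "CLDEMOTE cache-line demote hint",
    "enqcmd":      "ENQCMD (Accelerator interfacing Architecture)",
}

def detect_flags(path: str = "/proc/cpuinfo") -> dict:
    """Return {flag_name: present?} for the flags listed above."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                present = set(line.split(":", 1)[1].split())
                return {name: name in present for name in FLAGS_OF_INTEREST}
    return {name: False for name in FLAGS_OF_INTEREST}

if __name__ == "__main__":
    found = detect_flags()
    for flag, description in FLAGS_OF_INTEREST.items():
        print(f"{description:45s} {'yes' if found[flag] else 'no'}")
```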
In terms of I/O advancements, Sapphire Rapids-SP Xeon CPUs will introduce CXL 1.1 for accelerator and memory expansion in the data center segment. There's also improved multi-socket scaling via Intel UPI, delivering up to 4 x24 UPI links at 16 GT/s and a new 8S-4UPI performance-optimized topology. The new tiled architecture also boosts the cache beyond 100 MB along with Optane Persistent Memory 300 series support.
Intel Sapphire Rapids-SP 'HBM Xeon' CPUs:
Intel has also detailed its Sapphire Rapids-SP Xeon CPUs with HBM memory. From what Intel has shown, these Xeon CPUs will house up to four HBM packages, all offering significantly higher DRAM bandwidth versus a baseline Sapphire Rapids-SP Xeon CPU with 8-channel DDR5 memory. This is going to allow Intel to offer a chip with both increased capacity and bandwidth for customers that demand it. The HBM SKUs can be used in two modes: an HBM Flat mode, where the HBM is exposed as directly addressable memory alongside DDR5, & an HBM caching mode, where the HBM acts as a cache in front of the DDR5 memory.
The standard Sapphire Rapids-SP Xeon chip will feature 10 EMIB interconnects and the entire package will measure a mighty 4446mm2. Moving over to the HBM variant, the number of EMIB interconnects rises to 14, which are needed to connect the HBM2E memory to the cores.
The four HBM2E memory packages will feature 8-Hi stacks, so Intel is going for at least 16 GB of HBM2E memory per stack for a total of 64 GB across the Sapphire Rapids-SP package. Talking about the package, the HBM variant will measure an insane 5700mm2, or 28% larger than the standard variant. Compared to the recently leaked EPYC Genoa numbers, the HBM2E package for Sapphire Rapids-SP would end up around 5% larger, while AMD's Genoa package is in turn around 22% larger than the standard Sapphire Rapids-SP package.
Intel Sapphire Rapids-SP Xeon (Standard Package) - 4446mm2
Intel Sapphire Rapids-SP Xeon (HBM2E Package) - 5700mm2
AMD EPYC Genoa (12 CCD Package) - 5428mm2
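Those area and capacity figures check out with a bit of quick arithmetic; a minimal sketch using only the numbers quoted above:

```python
# Sanity-check the package-size and HBM-capacity claims using the figures above.
spr_standard = 4446   # mm^2 - standard Sapphire Rapids-SP package
spr_hbm      = 5700   # mm^2 - HBM2E Sapphire Rapids-SP package
epyc_genoa   = 5428   # mm^2 - leaked EPYC Genoa (12 CCD) package

print(f"HBM SPR vs standard SPR: {spr_hbm / spr_standard - 1:+.0%}")    # ~ +28%
print(f"HBM SPR vs Genoa:        {spr_hbm / epyc_genoa - 1:+.0%}")      # ~ +5%
print(f"Genoa vs standard SPR:   {epyc_genoa / spr_standard - 1:+.0%}") # ~ +22%

# HBM2E capacity: four 8-Hi stacks at 16 GB per stack
stacks, gb_per_stack = 4, 16
print(f"Total HBM2E capacity: {stacks * gb_per_stack} GB")              # 64 GB
```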
Intel Sapphire Rapids-SP Xeon CPU Platform
The Sapphire Rapids lineup will make use of 8-channel DDR5 memory with speeds of up to 4800 Mbps & support PCIe Gen 5.0 on the Eagle Stream platform (C740 chipset).
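As a point of reference for the bandwidth comparison made earlier against the HBM variants, here is a back-of-the-envelope sketch of the theoretical peak that an 8-channel DDR5-4800 configuration works out to, assuming the usual 64-bit (8-byte) data path per channel and ignoring ECC and real-world efficiency:

```python
# Theoretical peak memory bandwidth for 8-channel DDR5-4800 (per socket).
channels        = 8
transfers_per_s = 4800e6   # DDR5-4800 -> 4800 MT/s per channel
bytes_per_xfer  = 8        # 64-bit data path per channel

peak_gbps = channels * transfers_per_s * bytes_per_xfer / 1e9
print(f"Peak DDR5 bandwidth per socket: ~{peak_gbps:.0f} GB/s")  # ~307 GB/s
```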
The Eagle Stream platform will also introduce the LGA 4677 socket, which replaces the LGA 4189 socket used by Intel's Cedar Island & Whitley platforms, home to the Cooper Lake-SP and Ice Lake-SP processors, respectively. The Intel Sapphire Rapids-SP Xeon CPUs will also come with the CXL 1.1 interconnect, which will mark a huge milestone for the blue team in the server segment.
The final 4th Gen Sapphire Rapids-SP Xeon CPU with its multi-chiplet design housing Compute & HBM2e tiles. (Image Credits: CNET)
Coming to the configurations, the top part is stated to feature 60 cores with a TDP of 350W. What is interesting about this configuration is that it is listed as a low-bin split variant, which means that it will be using a tiled or MCM design. The Sapphire Rapids-SP Xeon CPU will be composed of a 4-tile layout with each tile featuring up to 15 cores.
Following are the expected configurations:
Sapphire Rapids-SP 24 Core / 48 Thread / 45.0 MB / 225W
Sapphire Rapids-SP 28 Core / 56 Thread / 52.5 MB / 250W
Sapphire Rapids-SP 40 Core / 80 Thread / 75.0 MB / 300W
Sapphire Rapids-SP 44 Core / 88 Thread / 82.5 MB / 270W
Sapphire Rapids-SP 48 Core / 96 Thread / 90.0 MB / 350W
Sapphire Rapids-SP 56 Core / 112 Thread / 105 MB / 350W
Sapphire Rapids-SP 60 Core / 120 Thread / 110 MB / 350W
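Dividing the leaked cache figures by the core counts shows the per-core L3 slice these configurations imply; a quick sketch:

```python
# Per-core L3 slice implied by the leaked configurations above.
configs = [  # (cores, L3 cache in MB, TDP in W)
    (24, 45.0, 225), (28, 52.5, 250), (40, 75.0, 300), (44, 82.5, 270),
    (48, 90.0, 350), (56, 105.0, 350), (60, 110.0, 350),
]
for cores, l3_mb, tdp_w in configs:
    print(f"{cores:2d} cores / {l3_mb:5.1f} MB / {tdp_w}W -> {l3_mb / cores:.3f} MB L3 per core")
```

Every configuration except the 60-core entry works out to exactly 1.875 MB of L3 per core; the retail 60-core Xeon Platinum 8490H in the SKU list further below carries 112.5 MB, which restores that same ratio.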
Now based on the specifications provided by YuuKi_AnS, the Intel Sapphire Rapids-SP Xeon CPUs will come in four tiers:
Bronze Tier: 150-185W TDP
Silver Tier: 205-250W TDP
Gold Tier: 270-300W TDP
Platinum Tier: 300-350W+ TDP
The TDPs listed here are PL1 ratings, so the PL2 rating, as seen earlier, is going to be much higher in the 400W+ range, and the BIOS limit is expected to hover at around 700W+. Most of the CPU SKUs listed by the leaker are still in an ES1/ES2 state, which means that they are far from the final retail chips, but the core configurations are likely to remain the same. Intel will offer various SKUs with the same core configuration but different bins, affecting their clocks and TDPs. For example, there are four 44 core parts with 82.5 MB cache listed, but clock speeds should vary across each SKU. There's also one Sapphire Rapids-SP HBM 'Gold' CPU in its A0 revision which has 48 cores, 96 threads, and 90 MB of cache with a TDP of 350W. There's also the 60 core part that leaked a while back but it isn't listed yet. Following is the entire SKU list that has been leaked:
Intel Sapphire Rapids-SP Xeon CPU SKUs List (Preliminary):
CPU Name | Cores/Threads | L3 Cache | CPU Base Clock | CPU (Single-Core) Boost | CPU (Max) Boost | TDP |
---|---|---|---|---|---|---|
Xeon Platinum 8490H | 60/120 | 112.5 MB | 1.9 GHz | 2.9 GHz | 3.5 GHz | 350W |
Xeon Platinum 8480+ | 56/112 | 105 MB | 2.0 GHz | 3.0 GHz | 3.8 GHz | 350W |
Xeon Platinum 8471N | 52/104 | 97.5 MB | 1.8 GHz | 2.8 GHz | 3.6 GHz | 300W |
Xeon Platinum 8470Q | 52/104 | 105 MB | 2.0 GHz | 3.0 GHz | 3.8 GHz | 350W |
Xeon Platinum 8470N | 52/104 | 97.5 MB | 1.7 GHz | 2.7 GHz | 3.6 GHz | 300W |
Xeon Platinum 8470 | 52/104 | 97.5 MB | 2.0 GHz | 3.0 GHz | 3.8 GHz | 350W |
Xeon Platinum 8468V | 48/96 | 97.5 MB | 2.4 GHz | 2.9 GHz | 3.8 GHz | 330W |
Xeon Platinum 8468H | 48/96 | 105 MB | 2.1 GHz | 3.0 GHz | 3.8 GHz | 330W |
Xeon Platinum 8468+ | 48/96 | 90.0 MB | 2.1 GHz | 3.1 GHz | 3.8 GHz | 350W |
Xeon Platinum 8461V | 48/96 | 97.5 MB | 2.2 GHz | 2.8 GHz | 3.7 GHz | 300W |
Xeon Platinum 8460Y | 40/80 | 75.0 MB | 2.0 GHz | 2.8 GHz | 3.7 GHz | 300W |
Xeon Platinum 8460H | 40/80 | 105 MB | 2.2 GHz | 3.1 GHz | 3.8 GHz | 330W |
Xeon Platinum 8458P | 44/88 | 82.5 MB | 2.7 GHz | 3.2 GHz | 3.8 GHz | 350W |
Xeon Platinum 8454H | 32/64 | 82.5 MB | 2.1 GHz | 2.7 GHz | 3.4 GHz | 270W |
Xeon Platinum 8452Y | 36/72 | 67.5 MB | 2.0 GHz | 2.8 GHz | 3.2 GHz | 300W |
Xeon Platinum 8450H | 28/56 | 75.0 MB | 2.0 GHz | 2.6 GHz | 3.5 GHz | 250W |
Xeon Platinum 8444H | 16/32 | 45.0 MB | 2.0 GHz | 2.8 GHz | 4.0 GHz | 270W |
Xeon Gold 6454Y+ | 32/64 | 60.0 MB | 2.6 GHz | 3.8 GHz | TBD | 270W |
Xeon Gold 6454S | 32/64 | 60.0 MB | 2.2 GHz | 2.8 GHz | 3.4 GHz | 270W |
Xeon Gold 6448Y | 32/64 | 60.0 MB | 2.2 GHz | 3.3 GHz | TBD | 225W |
Xeon Gold 6448H | 32/64 | 60.0 MB | 2.2 GHz | 3.2 GHz | TBD | 225W |
Xeon Gold 6444Y | 16/32 | 30.0 MB | 3.5 GHz | 4.1 GHz | TBD | 270W |
Xeon Gold 6442Y | 24/48 | 45.0 MB | 2.6 GHz | 3.0 GHz | TBD | 225W |
Xeon Gold 6441V | 44/88 | 82.5 MB | 2.1 GHz | 2.6 GHz | 3.5 GHz | 270W |
Xeon Gold 6438Y+ | 32/64 | 60.0 MB | 1.9 GHz | 3.0 GHz | TBD | 205W |
Xeon Gold 6438N | 32/64 | 60.0 MB | 2.0 GHz | 3.0 GHz | TBD | 205W |
Xeon Gold 6438M | 32/64 | 60.0 MB | 2.3 GHz | 3.1 GHz | TBD | 205W |
Xeon Gold 6434H | 8/16 | 15.0 MB | 4.0 GHz | 4.1 GHz | TBD | 205W |
Xeon Gold 6434 | 8/16 | 15.0 MB | 3.9 GHz | 4.2 GHz | TBD | 205W |
Xeon Gold 6430 | 32/64 | 60.0 MB | 1.9 GHz | 3.0 GHz | 3.4 GHz | 270W |
Xeon Gold 6428N | 32/64 | 60.0 MB | 1.8 GHz | 2.7 GHz | TBD | 185W |
Xeon Gold 6426Y | 16/32 | 30.0 MB | 2.6 GHz | 3.5 GHz | TBD | 185W |
Xeon Gold 6421N | 32/64 | 60.0 MB | 1.8 GHz | 2.8 GHz | TBD | 185W |
Xeon Gold 6418H | 24/48 | 45.0 MB | 2.0 GHz | 3.0 GHz | TBD | 185W |
Xeon Gold 6416H | 18/36 | 33.75 MB | 2.2 GHz | 3.0 GHz | TBD | 165W |
Xeon Gold 6414U | 32/64 | 60.0 MB | 2.0 GHz | 2.6 GHz | 3.4 GHz | 250W |
Xeon Gold 5420+ | 28/56 | 52.5 MB | 1.9 GHz | 2.1 GHz | TBD | 205W |
Xeon Gold 5418Y | 24/48 | 45.0 MB | 2.1 GHz | 2.9 GHz | TBD | 185W |
Xeon Gold 5418N | 24/48 | 45.0 MB | 2.0 GHz | 2.8 GHz | TBD | 165W |
Xeon Gold 5416S | 16/32 | 30.0 MB | 2.1 GHz | 2.9 GHz | TBD | 150W |
Xeon Gold 5415+ | 8/16 | 15.0 MB | 2.9 GHz | 3.7 GHz | TBD | 150W |
Xeon Gold 5411N | 24/48 | 45.0 MB | 2.0 GHz | 2.8 GHz | TBD | 165W |
Xeon Silver 4416+ | 20/40 | 37.5 MB | 2.1 GHz | 3.0 GHz | TBD | 165W |
Xeon Silver 4410Y | 12/24 | 22.5 MB | 2.0 GHz | 3.0 GHz | TBD | 145W |
Xeon Silver 4410T | 10/20 | 18.75 MB | 2.9 GHz | 3.0 GHz | TBD | 150W |
Xeon Bronze 3408U | 8/16 | 15.0 MB | 1.8 GHz | 1.9 GHz | TBD | 150W |
Once again, most of these configurations aren't final spec since they are still early samples. The parts marked in red (A/B/C stepping) in the original leak are said to be unusable and can only be run with a special BIOS which still has a lot of bugs. This list does provide us with an idea of what to expect in terms of SKUs and tiers, but we will have to wait for the official announcement later this year for accurate specs for each SKU.
It looks like AMD will still hold the upper hand in the number of cores & threads offered per CPU, with its Genoa chips pushing up to 96 cores, whereas Intel's Xeon chips would max out at 60 cores unless Intel plans SKUs with a higher number of tiles. Intel, however, will have a wider and more expandable platform that can support up to 8 CPUs at once, so unless Genoa offers more than 2P (dual-socket) configurations, Intel will have the lead in the number of cores per server, with an 8S system packing up to 480 cores and 960 threads.
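The socket math behind that claim is straightforward; a minimal sketch using the per-socket figures from this article:

```python
# Cores/threads per server at the maximum socket counts quoted above.
spr_cores_per_socket   = 60   # top Sapphire Rapids-SP SKU
genoa_cores_per_socket = 96   # rumored top EPYC Genoa SKU

for name, sockets, cores in (("Sapphire Rapids-SP (8S)", 8, spr_cores_per_socket),
                             ("EPYC Genoa (2P)",         2, genoa_cores_per_socket)):
    total_cores = sockets * cores
    print(f"{name}: {total_cores} cores / {total_cores * 2} threads")
```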
Recently, Intel announced during its Vision event that the company is shipping its initial Sapphire Rapids-SP Xeon SKUs to customers and is on track for a Q4 2022 launch.
Intel Xeon CPU Families (Preliminary):
Family Branding | Diamond Rapids | Clearwater Forest | Granite Rapids | Sierra Forest | Emerald Rapids | Sapphire Rapids | Ice Lake-SP | Cooper Lake-SP | Cascade Lake-SP/AP | Skylake-SP |
---|---|---|---|---|---|---|---|---|---|---|
Process Node | Intel 20A? | Intel 18A | Intel 3 | Intel 3 | Intel 7 | Intel 7 | 10nm+ | 14nm++ | 14nm++ | 14nm+ |
Platform Name | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream | Intel Eagle Stream | Intel Eagle Stream | Intel Whitley | Intel Cedar Island | Intel Purley | Intel Purley |
Core Architecture | Lion Cove? | Crestmont+ | Redwood Cove | Sierra Glen | Raptor Cove | Golden Cove | Sunny Cove | Cascade Lake | Cascade Lake | Skylake |
MCP (Multi-Chip Package) SKUs | Yes | TBD | Yes | Yes | Yes | Yes | No | No | Yes | No |
Socket | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 / 7529 | LGA 4677 | LGA 4677 | LGA 4189 | LGA 4189 | LGA 3647 | LGA 3647 |
Max Core Count | Up To 144? | Up To 288 | Up To 136? | Up To 288 | Up To 64? | Up To 60 | Up To 40 | Up To 28 | Up To 28 | Up To 28 |
Max Thread Count | Up To 288? | Up To 288 | Up To 272? | Up To 288 | Up To 128 | Up To 120 | Up To 80 | Up To 56 | Up To 56 | Up To 56 |
Max L3 Cache | TBD | TBD | TBD | 108 MB L3 | 320 MB L3 | 112.5 MB L3 | 60 MB L3 | 38.5 MB L3 | 38.5 MB L3 | 38.5 MB L3 |
Memory Support | Up To 12-Channel DDR6-7200? | TBD | Up To 12-Channel DDR5-6400 | Up To 8-Channel DDR5-6400? | Up To 8-Channel DDR5-5600 | Up To 8-Channel DDR5-4800 | Up To 8-Channel DDR4-3200 | Up To 6-Channel DDR4-3200 | Up To 6-Channel DDR4-2933 | Up To 6-Channel DDR4-2666 |
PCIe Gen Support | PCIe 6.0 (128 Lanes)? | TBD | PCIe 5.0 (136 Lanes) | PCIe 5.0 (TBD Lanes) | PCIe 5.0 (80 Lanes) | PCIe 5.0 (80 lanes) | PCIe 4.0 (64 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) |
TDP Range (PL1) | Up To 500W? | TBD | Up To 500W | Up To 350W | Up To 350W | Up To 350W | 105-270W | 150W-250W | 165W-205W | 140W-205W |
3D Xpoint Optane DIMM | Donahue Pass? | TBD | Donahue Pass | TBD | Crow Pass | Crow Pass | Barlow Pass | Barlow Pass | Apache Pass | N/A |
Competition | AMD EPYC Venice | AMD EPYC Zen 5C | AMD EPYC Turin | AMD EPYC Bergamo | AMD EPYC Genoa ~5nm | AMD EPYC Genoa ~5nm | AMD EPYC Milan 7nm+ | AMD EPYC Rome 7nm | AMD EPYC Rome 7nm | AMD EPYC Naples 14nm |
Launch | 2025? | 2025 | 2024 | 2024 | 2023 | 2022 | 2021 | 2020 | 2018 | 2017 |