AMD's CEO has confirmed that the successor to the company's CDNA architecture-powered Instinct HPC GPU accelerators is on its way, launching later this year.
AMD Instinct MI200 With CDNA 2 MCM GPU Architecture Landing Later This Year, Will Power HPC Workloads
The confirmation came during JPMorgan's 49th Annual Global Technology, Media and Communications Conference. AMD's CEO, Lisa Su, stated that the company will be launching the next generation of its CDNA architecture later this year. The following is the relevant transcript from the conference (Source: Seeking Alpha).
Last year, we talked about our first-generation CDNA architecture. This year, as I said, we’re putting together our next-generation CDNA architecture. This is actually a key component that enabled us to win the largest supercomputer bids in the US around the Frontier Oak Ridge National Labs installment as well as the Lawrence Livermore National Labs installment with El Capitan and many others.
But it’s a coherent interconnect between CPUs and GPUs that allow us to fully optimize for HPC and for AI and ML applications. And we will be launching the next generation of that architecture, actually, later this year. We’re very excited about it. I think it’s progressed extremely well. It’s the next big step in sort of innovation around the data center architectures.
Dr. Lisa Su (AMD CEO)
Here's Everything We Know About AMD's CDNA 2 Architecture-Powered Instinct Accelerators
The AMD CDNA 2 architecture will power the next-generation AMD Instinct HPC accelerators. We know that one of those accelerators will be the MI200, which will feature the Aldebaran GPU. It's going to be a very powerful chip and possibly the first HPC GPU to feature an MCM design. The Instinct MI200 will compete against Intel's 7nm Ponte Vecchio and NVIDIA's refreshed Ampere parts. Intel and NVIDIA are also following the MCM route for their next-generation HPC accelerators, but Ponte Vecchio doesn't look to be available until 2022, and the same can be said for NVIDIA's next-generation HPC accelerator, as NVIDIA's own roadmap confirmed.
In a previous Linux patch, it was revealed that the AMD Instinct MI200 'Aldebaran' GPU will feature HBM2E memory support. NVIDIA was the first to hop on board the HBM2E standard, and for AMD it will offer a nice boost over the standard HBM2 configuration used on the Arcturus-based MI100 GPU accelerator. HBM2E allows up to 16 GB of memory capacity per stack, so with a typical four-stack configuration, we can expect up to 64 GB of HBM2E memory at blisteringly fast speeds for Aldebaran.
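To put that 64 GB figure in context, the capacity and bandwidth follow directly from the stack count. Here is a minimal back-of-the-envelope sketch, assuming a four-stack configuration at a 3.2 Gbps pin speed (both are illustrative assumptions; AMD has not confirmed Aldebaran's memory layout):

```python
# Back-of-the-envelope HBM2E math for Aldebaran. The stack count and
# pin speed are assumptions for illustration, not confirmed specs.

STACKS = 4                # assumed four-stack configuration
GB_PER_STACK = 16         # HBM2E's maximum capacity per stack
BITS_PER_STACK = 1024     # every HBM stack uses a 1024-bit interface
PIN_SPEED_GBPS = 3.2      # assumed per-pin data rate

capacity_gb = STACKS * GB_PER_STACK
bus_width = STACKS * BITS_PER_STACK
bandwidth_gbs = bus_width * PIN_SPEED_GBPS / 8   # bits -> bytes

print(f"Capacity:  {capacity_gb} GB")            # 64 GB
print(f"Bus width: {bus_width}-bit")             # 4096-bit
print(f"Bandwidth: {bandwidth_gbs:,.0f} GB/s")   # ~1,638 GB/s, i.e. ~1.6 TB/s
```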
The latest Linux kernel patch revealed that the GPU carries 16 KB of L1 cache per CU, which works out to 2 MB of total L1 cache considering that the GPU will be packing 128 Compute Units. The GPU also carries 8 MB of shared L2 cache but moves to 14 CUs per Shader Engine, compared to 16 CUs per SE in the previous Instinct lineup. Regardless, it is stated that each CU on the Aldebaran GPU will deliver significantly higher compute output.
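The 2 MB total is simple per-CU arithmetic; a quick sketch using the figures reported in the kernel patch:

```python
# Cache totals from the Linux kernel patch figures for Aldebaran.

L1_PER_CU_KB = 16     # 16 KB of L1 cache per Compute Unit
TOTAL_CUS = 128       # Compute Unit count reported for the GPU
L2_SHARED_MB = 8      # shared L2 cache

total_l1_mb = L1_PER_CU_KB * TOTAL_CUS / 1024    # KB -> MB
print(f"Total L1:  {total_l1_mb:.0f} MB")        # 16 KB x 128 CUs = 2 MB
print(f"Shared L2: {L2_SHARED_MB} MB")
```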
Other features listed include SDMA (System Direct Memory Access) support, which will allow data transfers over PCIe and XGMI (Infinity Fabric) links. As far as Infinity Cache is concerned, it looks like that won't be happening on HPC GPUs. Do note that AMD's CDNA 2 GPUs will be fabricated on a brand-new process node and are confirmed to feature the 3rd Generation AMD Infinity Architecture, which extends to Exascale by allowing up to 8-way coherent GPU connectivity.
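For a sense of what 8-way coherent connectivity implies, here is a small sketch of the link-count math, assuming a fully connected hive in which every GPU is wired directly to every peer (the actual Infinity Fabric topology has not been disclosed):

```python
# Link counts for an 8-way coherent GPU hive, assuming a fully
# connected topology; the real Infinity Fabric layout is not public.
from math import comb

GPUS = 8
links_per_gpu = GPUS - 1      # one direct link to each peer
total_links = comb(GPUS, 2)   # unordered GPU pairs

print(f"{links_per_gpu} links per GPU, {total_links} links in total")  # 7, 28
```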
AMD Radeon Instinct Accelerators
Accelerator Name | AMD Instinct MI400 | AMD Instinct MI300X | AMD Instinct MI300A | AMD Instinct MI250X | AMD Instinct MI250 | AMD Instinct MI210 | AMD Instinct MI100 | AMD Radeon Instinct MI60 | AMD Radeon Instinct MI50 | AMD Radeon Instinct MI25 | AMD Radeon Instinct MI8 | AMD Radeon Instinct MI6 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
CPU Architecture | Zen 5 (Exascale APU) | N/A | Zen 4 (Exascale APU) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
GPU Architecture | CDNA 4 | Aqua Vanjaram (CDNA 3) | Aqua Vanjaram (CDNA 3) | Aldebaran (CDNA 2) | Aldebaran (CDNA 2) | Aldebaran (CDNA 2) | Arcturus (CDNA 1) | Vega 20 | Vega 20 | Vega 10 | Fiji XT | Polaris 10 |
GPU Process Node | 4nm | 5nm+6nm | 5nm+6nm | 6nm | 6nm | 6nm | 7nm FinFET | 7nm FinFET | 7nm FinFET | 14nm FinFET | 28nm | 14nm FinFET |
GPU Chiplets | TBD | 8 (MCM) | 8 (MCM) | 2 (MCM) 1 (Per Die) | 2 (MCM) 1 (Per Die) | 2 (MCM) 1 (Per Die) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) |
GPU Cores | TBD | 19,456 | 14,592 | 14,080 | 13,312 | 6,656 | 7,680 | 4,096 | 3,840 | 4,096 | 4,096 | 2,304 |
GPU Clock Speed | TBD | 2100 MHz | 2100 MHz | 1700 MHz | 1700 MHz | 1700 MHz | 1500 MHz | 1800 MHz | 1725 MHz | 1500 MHz | 1000 MHz | 1237 MHz |
INT8 Compute | TBD | 2614 TOPS | 1961 TOPS | 383 TOPS | 362 TOPS | 181 TOPS | 92.3 TOPS | N/A | N/A | N/A | N/A | N/A |
FP16 Compute | TBD | 1.3 PFLOPs | 980.6 TFLOPs | 383 TFLOPs | 362 TFLOPs | 181 TFLOPs | 185 TFLOPs | 29.5 TFLOPs | 26.5 TFLOPs | 24.6 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
FP32 Compute | TBD | 163.4 TFLOPs | 122.6 TFLOPs | 95.7 TFLOPs | 90.5 TFLOPs | 45.3 TFLOPs | 23.1 TFLOPs | 14.7 TFLOPs | 13.3 TFLOPs | 12.3 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
FP64 Compute | TBD | 81.7 TFLOPs | 61.3 TFLOPs | 47.9 TFLOPs | 45.3 TFLOPs | 22.6 TFLOPs | 11.5 TFLOPs | 7.4 TFLOPs | 6.6 TFLOPs | 768 GFLOPs | 512 GFLOPs | 384 GFLOPs |
VRAM | TBD | 192 GB HBM3 | 128 GB HBM3 | 128 GB HBM2e | 128 GB HBM2e | 64 GB HBM2e | 32 GB HBM2 | 32 GB HBM2 | 16 GB HBM2 | 16 GB HBM2 | 4 GB HBM1 | 16 GB GDDR5 |
Infinity Cache | TBD | 256 MB | 256 MB | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
Memory Clock | TBD | 5.2 Gbps | 5.2 Gbps | 3.2 Gbps | 3.2 Gbps | 3.2 Gbps | 2.4 Gbps | 2.0 Gbps | 2.0 Gbps | 1.89 Gbps | 1.0 Gbps | 7.0 Gbps |
Memory Bus | TBD | 8192-bit | 8192-bit | 8192-bit | 8192-bit | 4096-bit | 4096-bit | 4096-bit | 4096-bit | 2048-bit | 4096-bit | 256-bit |
Memory Bandwidth | TBD | 5.3 TB/s | 5.3 TB/s | 3.2 TB/s | 3.2 TB/s | 1.6 TB/s | 1.23 TB/s | 1 TB/s | 1 TB/s | 484 GB/s | 512 GB/s | 224 GB/s |
Form Factor | TBD | OAM | APU SH5 Socket | OAM | OAM | Dual Slot Card | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Half Length | Single Slot, Full Length |
Cooling | TBD | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling |
TDP (Max) | TBD | 750W | 760W | 560W | 500W | 300W | 300W | 300W | 300W | 300W | 175W | 150W |
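The compute figures in the table can be sanity-checked with the usual peak-throughput formula: cores × 2 (an FMA counts as two operations) × clock. Below is a minimal sketch using the MI250X column; the full-rate FP64 and the packed FP32/FP16 multipliers are inferred from the table's own numbers rather than from official documentation:

```python
# Peak-throughput cross-check against the MI250X column above.
# FLOPS = cores x 2 (an FMA counts as two ops) x clock.
# The rate multipliers are inferred from the table, not official.

cores = 14_080      # MI250X stream processors
clock_ghz = 1.7     # clock speed from the table

fp64 = cores * 2 * clock_ghz / 1000   # full-rate FP64, in TFLOPs
fp32 = fp64 * 2                       # packed FP32 doubles throughput
fp16 = fp64 * 8                       # FP16 matrix throughput

print(f"FP64: {fp64:.1f} TFLOPs")     # ~47.9, matches the table
print(f"FP32: {fp32:.1f} TFLOPs")     # ~95.7, matches the table
print(f"FP16: {fp16:.0f} TFLOPs")     # ~383, matches the table
```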