AMD Instinct MI300 GPU To Utilize Quad MCM ‘CDNA 3’ GPUs: Feature 3D Stacking With Up To 8 Compute Dies, HBM3, PCIe Gen 5.0 & 600W TDP
Feb 13, 2026 12:44 AM

AMD Instinct MI300 GPUs, which will be powered by the next-generation CDNA 3 architecture, have been detailed by Moore's Law is Dead. The new GPUs will power upcoming data centers and are rumored to be AMD's first to incorporate a 3D-stacked design.

AMD Instinct MI300 GPUs Rumored To Go All-In On A 3D-Stacking Design: Up To Four GPU Chiplets With 8 Compute Dies, HBM3 & PCIe Gen 5.0 at 600W

Last year, @Kepler_L2 revealed that the AMD Instinct MI300 would feature four Graphics Compute Dies (GCDs). This was later confirmed in a driver patch where the chip appeared as the 'GFX940' part. That essentially doubles the MI250X, which features two GCDs, with the difference that each MI300 GCD will itself carry two compute dies. So the top Instinct MI300 variant should pack up to 8 compute dies. In fact, the Instinct MI300 family will not be a single GPU but will comprise several different configurations.

AMD Instinct MI300 'CDNA 3' GPU details have been revealed by Moore's Law is Dead.


The top AMD Instinct MI300 GPU will feature a massive interposer measuring around 2,750 mm². The interposer has a very interesting configuration that packs four 6nm tiles containing the I/O controllers and IP blocks, each measuring around 320-360 mm². These tiles may also include some form of cache, though that is not confirmed yet. On top of these I/O stacks, AMD will use its brand-new 3D-stacking technology to mount two compute dies per tile.

These brand-new AMD CDNA 3 architecture-based compute dies will be fabricated on a 5nm node and feature a die size of around 110 mm² per tile. There is no word yet on how many core or accelerator blocks each compute die will hold, but if the SP count per GCD stays the same as on the MI250X, the top configuration would reach up to 28,160 cores. Once again, this is mere speculation, since a lot can change with CDNA 3. Since the memory controllers sit on the bottom I/O die, each is connected to two stacks of HBM3 through more than 12 metal layers. Each die is interconnected using a total of 20,000 connections, double what Apple uses in the M1 Ultra's UltraFusion chip design.
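The speculative core count above is simple scaling: the MI250X's confirmed 14,080 stream processors are split across its two GCDs, and the rumor extrapolates the same per-GCD count to the MI300's four GCDs. A minimal sketch of that arithmetic (the MI300 figures are rumors, not confirmed specs):

```python
# Speculative MI300 core-count scaling from confirmed MI250X figures.
MI250X_CORES = 14_080   # 220 CUs x 64 stream processors, across two GCDs
MI250X_GCDS = 2

cores_per_gcd = MI250X_CORES // MI250X_GCDS   # 7,040 per GCD

# Rumor: the top MI300 doubles the GCD count to four (8 compute dies),
# so keeping the MI250X per-GCD core count constant yields:
MI300_GCDS = 4
mi300_cores = cores_per_gcd * MI300_GCDS
print(mi300_cores)  # 28160
```

If CDNA 3 changes the CU or SP-per-CU layout, the per-GCD constant above no longer holds, which is exactly the caveat the article makes.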


HBM Memory Specifications Comparison

| DRAM | HBM1 | HBM2 | HBM2e | HBM3 | HBM3 Gen2 | HBMNext (HBM4) |
|---|---|---|---|---|---|---|
| I/O (Bus Interface) | 1024 | 1024 | 1024 | 1024 | 1024-2048 | 1024-2048 |
| Prefetch (I/O) | 2 | 2 | 2 | 2 | 2 | 2 |
| Maximum Bandwidth | 128 GB/s | 256 GB/s | 460.8 GB/s | 819.2 GB/s | 1.2 TB/s | 1.5 - 2.0 TB/s |
| DRAM ICs Per Stack | 4 | 8 | 8 | 12 | 8-12 | 8-12 |
| Maximum Capacity | 4 GB | 8 GB | 16 GB | 24 GB | 24 - 36 GB | 36-64 GB |
| tRC | 48ns | 45ns | 45ns | TBA | TBA | TBA |
| tCCD | 2ns (=1tCK) | 2ns (=1tCK) | 2ns (=1tCK) | TBA | TBA | TBA |
| VPP | External VPP | External VPP | External VPP | External VPP | External VPP | TBA |
| VDD | 1.2V | 1.2V | 1.2V | TBA | TBA | TBA |
| Command Input | Dual Command | Dual Command | Dual Command | Dual Command | Dual Command | Dual Command |
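The bandwidth row in the table follows directly from bus width and per-pin data rate: peak bandwidth in GB/s equals bus width in bits times the pin rate in Gb/s, divided by 8. A quick check using the commonly quoted per-pin rates for each generation (the rates themselves are an assumption, not from the table):

```python
# Peak per-stack HBM bandwidth: bus_width_bits * pin_rate_gbps / 8 -> GB/s.
def hbm_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Return peak bandwidth in GB/s for one HBM stack."""
    return bus_width_bits * pin_rate_gbps / 8

# Commonly quoted per-pin data rates in Gb/s (assumed, not in the table).
pin_rates = {"HBM1": 1.0, "HBM2": 2.0, "HBM2e": 3.6, "HBM3": 6.4}

for gen, rate in pin_rates.items():
    print(gen, hbm_bandwidth_gbps(1024, rate), "GB/s")
# Reproduces the table's 128 / 256 / 460.8 / 819.2 GB/s figures
```

The HBM3 Gen2 and HBM4 entries span ranges because both the bus width (1024-2048 bits) and the pin rate are still moving targets.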
