AMD Instinct
Release date | June 20, 2017 |
---|---|
Models | MI Series |
Predecessor | AMD FirePro S |
AMD Instinct is AMD's brand of professional GPUs.[1][2] It replaced AMD's FirePro S brand in 2016. Compared to the Radeon brand of mainstream consumer/gamer products, the Instinct product line is intended to accelerate deep learning, artificial neural network, and high-performance computing/GPGPU applications.
The Instinct product line competes directly with Nvidia's Ampere-based data center GPUs and with Intel's Xeon Phi and upcoming Xe lines of machine learning and GPGPU cards.
Before the introduction of the MI100 in November 2020, the family was branded AMD Radeon Instinct; with the MI100, AMD dropped Radeon from the name.
As of 2022, supercomputers built on AMD CPUs and AMD Instinct GPUs lead the Green500 list of the most energy-efficient supercomputers, with more than a 50% lead over any other system, and occupy the first four spots; the second of these, Frontier, is also the fastest supercomputer in the world on the TOP500 list.
Products
The three initial Radeon Instinct products were announced on December 12, 2016, and released on June 20, 2017, with each based on a different architecture.[3][4]
MI6
The MI6 is a passively cooled, Polaris 10 based card with 16 GB of GDDR5 memory and with a <150 W TDP.[1][2] At 5.7 TFLOPS (FP16 and FP32), the MI6 is expected to be used primarily for inference, rather than neural network training. The MI6 has a peak double precision (FP64) compute performance of 358 GFLOPS.[5]
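The headline figure can be sanity-checked with the usual peak-throughput formula, FLOPS = shader count × 2 (one fused multiply-add per clock) × clock speed; using the MI6's 2304 stream processors and 1233 MHz boost clock from the chipset table below:

\[
2304 \times 2 \times 1.233\ \text{GHz} \approx 5.68\ \text{TFLOPS} \approx 5.7\ \text{TFLOPS}
\]

Polaris executes FP16 at the same rate as FP32, hence the identical FP16 and FP32 figures.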
MI8
The MI8 is a Fiji-based card, analogous to the R9 Nano, with an expected TDP under 175 W.[1] The MI8 has 4 GB of High Bandwidth Memory. At 8.2 TFLOPS (FP16 and FP32), the MI8 is marketed primarily toward inference. It has a peak double-precision (FP64) compute performance of 512 GFLOPS.[6]
MI25
The MI25 is a Vega-based card using HBM2 memory. Its FP32 performance is expected to be 12.3 TFLOPS. In contrast to the MI6 and MI8, the MI25 can increase throughput at lower precision, and is accordingly expected to reach 24.6 TFLOPS with FP16 numbers. The MI25 is rated at under 300 W TDP with passive cooling, and provides a peak of 768 GFLOPS of double-precision (FP64) performance at 1/16 of the FP32 rate.[7]
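Both of the other figures follow directly from the FP32 number: Vega's packed math doubles FP16 throughput, and the FP64 units run at 1/16 of the FP32 rate:

\[
2 \times 12.3\ \text{TFLOPS} \approx 24.6\ \text{TFLOPS (FP16)}, \qquad \frac{12.3\ \text{TFLOPS}}{16} \approx 768\ \text{GFLOPS (FP64)}
\]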
MI300 Series
The MI300A and MI300X are data center accelerators that use the CDNA 3 architecture, which is optimized for high-performance computing (HPC) and generative artificial intelligence (AI) workloads. The CDNA 3 architecture features a scalable multi-chip module (MCM) design that leverages TSMC’s advanced packaging technologies, such as CoWoS (chip-on-wafer-on-substrate) and InFO (integrated fan-out), to combine multiple chiplets on a single interposer. The chiplets are interconnected by AMD’s Infinity Fabric, which enables high-speed and low-latency data transfer between the chiplets and the host system.
The MI300A is an accelerated processing unit (APU) that integrates 24 Zen 4 CPU cores with CDNA 3 GPU chiplets, for a total of 228 CUs in the GPU section and 128 GB of HBM3 memory. The Zen 4 CPU cores are built on the 5 nm process node and support the x86-64 instruction set, including the AVX-512 and BFloat16 extensions; they can run general-purpose applications and provide host-side computation for the GPU section. The MI300A has a peak performance of 61.3 TFLOPS of FP64 (122.6 TFLOPS FP64 matrix) and 980.6 TFLOPS of FP16 (1961.2 TFLOPS with sparsity), as well as 5.3 TB/s of memory bandwidth. The MI300A supports PCIe 5.0 and CXL 2.0 interfaces, which allow it to communicate with other devices and accelerators in a heterogeneous system.
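A quick consistency check on the quoted figures, assuming the usual conventions (structured sparsity skips half of the operands and therefore doubles the dense rate, and the FP64 matrix rate is double the FP64 vector rate):

\[
980.6 \times 2 = 1961.2\ \text{TFLOPS (FP16 with sparsity)}, \qquad 61.3 \times 2 = 122.6\ \text{TFLOPS (FP64 matrix)}
\]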
The MI300X is a dedicated generative AI accelerator that replaces the CPU chiplets with additional GPU compute and HBM memory, for a total of 304 CUs and 192 GB of HBM3 memory. It is designed to accelerate generative AI applications such as natural language processing, computer vision, and deep learning. The MI300X has a peak performance of 653.7 TFLOPS of TF32 (1307.4 TFLOPS with sparsity) and 1307.4 TFLOPS of FP16 (2614.9 TFLOPS with sparsity), as well as 5.3 TB/s of memory bandwidth. The MI300X also supports PCIe 5.0 and CXL 2.0 interfaces, as well as AMD's ROCm software stack, which provides a unified programming model and tools for developing and deploying generative AI applications on AMD hardware.[8][9][10]
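As a minimal, hedged illustration of the ROCm/HIP programming model referred to above (not AMD sample code), the sketch below enumerates the accelerators visible to the HIP runtime and prints a few of their properties. It assumes a working ROCm installation and compiles with hipcc; the fields shown are generic and apply to any supported GPU, not only the MI300X.

```cpp
// Query the GPUs visible to the HIP runtime and print basic properties.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        std::printf("No HIP-capable devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        hipGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  Compute units : %d\n", prop.multiProcessorCount);
        std::printf("  Global memory : %.1f GiB\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        std::printf("  GCN arch name : %s\n", prop.gcnArchName);
    }
    return 0;
}
```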
Accelerator | Architecture | Lithography | Compute Units | Memory | Memory Type | PCIe Support | Form Factor | FP16 Performance | BF16 Performance | FP32 Performance | FP32 Matrix Performance | FP64 Performance | FP64 Matrix Performance | INT8 Performance | INT4 Performance | Peak TBP |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
MI6 | GCN 4 | 14 nm | 36 | 16 GB | GDDR5 | 3.0 | PCIe | 5.7 TFLOPS | N/A | 5.7 TFLOPS | N/A | 358 GFLOPS | N/A | N/A | N/A | 150 W |
MI8 | GCN 3 | 28 nm | 64 | 4 GB | HBM | 3.0 | PCIe | 8.2 TFLOPS | N/A | 8.2 TFLOPS | N/A | 512 GFLOPS | N/A | N/A | N/A | 175 W |
MI25 | GCN 5 | 14 nm | 64 | 16 GB | HBM2 | 3.0 | PCIe | 24.6 TFLOPS | N/A | 12.3 TFLOPS | N/A | 768 GFLOPS | N/A | N/A | N/A | 300 W |
MI50 | GCN 5 | 7 nm | 60 | 16 GB | HBM2 | 4.0 | PCIe | 26.5 TFLOPS | N/A | 13.3 TFLOPS | N/A | 6.6 TFLOPS | N/A | 53 TOPS | | 300 W |
MI60 | GCN 5 | 7 nm | 64 | 32 GB | HBM2 | 4.0 | PCIe | 29.5 TFLOPS | N/A | 14.7 TFLOPS | N/A | 7.4 TFLOPS | N/A | 59 TOPS | | 300 W |
MI100 | CDNA | 7 nm | 120 | 32 GB | HBM2 | 4.0 | PCIe | 184.6 TFLOPS | 92.3 TFLOPS | 23.1 TFLOPS | 46.1 TFLOPS | 11.5 TFLOPS | | 184.6 TOPS | | 300 W |
MI210 | CDNA 2 | 6 nm | 104 | 64 GB | HBM2e | 4.0 | PCIe | 181 TFLOPS | 181 TFLOPS | 22.6 TFLOPS | 45.3 TFLOPS | 22.6 TFLOPS | 45.3 TFLOPS | 181 TOPS | | 300 W |
MI250 | CDNA 2 | 6 nm | 208 | 128 GB | HBM2e | 4.0 | OAM | 362.1 TFLOPS | 362.1 TFLOPS | 45.3 TFLOPS | 90.5 TFLOPS | 45.3 TFLOPS | 90.5 TFLOPS | 362.1 TOPS | | 560 W |
MI250X | CDNA 2 | 6 nm | 220 | 128 GB | HBM2e | 4.0 | OAM | 383 TFLOPS | 383 TFLOPS | 47.9 TFLOPS | 95.7 TFLOPS | 47.9 TFLOPS | 95.7 TFLOPS | 383 TOPS | | 560 W |
MI300A | CDNA 3 | 6 & 5 nm | 228 | 128 GB | HBM3 | 5.0 | APU (SH5 socket) | 980.6 TFLOPS (1961.2 TFLOPS with sparsity) | 980.6 TFLOPS (1961.2 TFLOPS with sparsity) | 122.6 TFLOPS | 122.6 TFLOPS | 61.3 TFLOPS | 122.6 TFLOPS | 1961.2 TOPS (3922.3 TOPS with sparsity) | N/A | 550 W (760 W liquid-cooled) |
MI300X | CDNA 3 | 6 & 5 nm | 304 | 192 GB | HBM3 | 5.0 | OAM | 1307.4 TFLOPS (2614.9 TFLOPS with sparsity) | 1307.4 TFLOPS (2614.9 TFLOPS with sparsity) | 163.4 TFLOPS | 163.4 TFLOPS | 81.7 TFLOPS | 163.4 TFLOPS | 2614.9 TOPS (5229.8 TOPS with sparsity) | N/A | 750 W |
Software
ROCm
As of 2022, the following software components are grouped under the ROCm (Radeon Open Compute) meta-project.
MxGPU
The MI6, MI8, and MI25 products all support AMD's MxGPU virtualization technology, enabling sharing of GPU resources across multiple users.[1][11]
MIOpen
MIOpen is AMD's deep learning library for GPU acceleration of deep learning workloads.[1] Much of it extends GPUOpen's Boltzmann Initiative software.[11] It is intended to compete with the deep learning portions of Nvidia's CUDA library. It supports the deep learning frameworks Theano, Caffe, TensorFlow, MXNet, Microsoft Cognitive Toolkit, Torch, and Chainer. Programming is supported in OpenCL and Python, and CUDA code can be compiled through AMD's Heterogeneous-compute Interface for Portability (HIP) and the Heterogeneous Compute Compiler (HCC).
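As a hedged sketch of the HIP portability path described above (illustrative only, not AMD's own example), the following CUDA-style kernel is written against the HIP runtime API; hipcc compiles it for AMD GPUs, and the same source can also be built for Nvidia hardware through HIP's CUDA backend.

```cpp
// Minimal HIP vector-add: CUDA-style kernel plus HIP runtime calls.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements.
    dim3 block(256), grid((n + 255) / 256);
    hipLaunchKernelGGL(vector_add, grid, block, 0, 0, da, db, dc, n);
    hipDeviceSynchronize();

    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %f (expected 3.0)\n", c[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

Frameworks such as TensorFlow typically reach MIOpen indirectly through the ROCm stack rather than calling HIP directly.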
Chipset table
Model (codename) | Release Date | Architecture & Fab | Transistors & Die Size | Core config[lower-alpha 5] | Core clock[lower-alpha 1] (MHz) | Texture fillrate[lower-alpha 1][lower-alpha 2] (GT/s) | Pixel fillrate[lower-alpha 1][lower-alpha 3] (GP/s) | Half precision[lower-alpha 1][lower-alpha 4] (GFLOPS) | Single precision[lower-alpha 1][lower-alpha 4] (GFLOPS) | Double precision[lower-alpha 1][lower-alpha 4] (GFLOPS) | Memory bus type & width | Memory size (GiB) | Memory clock (MT/s) | Memory bandwidth (GB/s) | TBP | Bus interface |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Radeon Instinct MI6 (Polaris 10)[1][12][13][14][15] | 2016 | GCN 4th gen, 14 nm | 5.7×10⁹, 232 mm² | 2304:144:32 (36 CU) | 1120 (1233) | 177.6 | 39.46 | 5800 | 5800 | 358 | GDDR5, 256-bit | 16 | 7000 | 224 | 150 W | PCIe 3.0 x16 |
Radeon Instinct MI8 (Fiji XT)[1][12][13][16][17] | 2016 | GCN 3rd gen, 28 nm | 8.9×10⁹, 596 mm² | 4096:256:64 (64 CU) | 1000 | 256.0 | 64.0 | 8200 | 8200 | 512 | HBM, 4096-bit | 4 | 1000 | 512 | 175 W | PCIe 3.0 x16 |
Radeon Instinct MI25 (Vega 10 XT)[1][12][13][18][19][20] | 2016 | GCN 5th gen, 14 nm | 12.5×10⁹, 510 mm² | 4096:256:64 (64 CU) | 1400 (1500) | 384 | 96.0 | 24600 | 12300 | 768 | HBM2, 2048-bit | 16 | 1704 | 436.2 | 300 W | PCIe 3.0 x16 |
Radeon Instinct MI25 MxGPU (prototype, 2017)[21] | Unreleased | GCN 5th gen, 14 nm | 2× 12.5×10⁹, 510 mm² | 2× 4096:256:64 (64 CU) | 1400 (1500) | 2× 384 | 2× 96.0 | 2× 24600 | 2× 12300 | 2× 768 | HBM2, 2048-bit | 2× 16 | 1704 | 2× 436.2 | 300 W | PCIe 3.0 x16 |
Radeon Instinct MI50 (Vega 20 GL)[22][23][24][25] | 2018 | GCN 5th gen, 7 nm | 13.2×10⁹, 331 mm² | 3840:240:– (60 CU) | 1450 (1746) | 419.04 | – | 26800 | 13400 | 6700 | HBM2, 4096-bit | 16 | 2000 | 1024 | 300 W | PCIe 4.0 x16 |
Radeon Instinct MI60 (Vega 20 GL)[22][26][27] | 2018 | GCN 5th gen, 7 nm | 13.2×10⁹, 331 mm² | 4096:256:– (64 CU) | 1500 (1800) | 460.8 | – | 29450 | 14725 | 7362.5 | HBM2, 4096-bit | 32 | 2000 | 1024 | 300 W | PCIe 4.0 x16 |
- ↑ 1.0 1.1 1.2 Boost values (if available) are shown in parentheses after the base value.
- ↑ Texture fillrate is calculated as the number of texture mapping units multiplied by the base (or boost) core clock speed.
- ↑ Pixel fillrate is calculated as the number of render output units multiplied by the base (or boost) core clock speed.
- ↑ Precision performance is calculated from the base (or boost) core clock speed based on a FMA operation.
- ↑ Unified Shaders : Texture Mapping Units : Render Output Units and Compute Units (CU)
See also
- ROCm - AMD's open compute software stack
- AMD FirePro - AMD's predecessor to Radeon Instinct
- AMD Radeon Pro - AMD's workstation graphics and GPGPU solution
- Nvidia Quadro - Nvidia's competing workstation graphics solution
- Nvidia Tesla - Nvidia's competing GPGPU solution
- Xeon Phi - Intel's competing massively-parallel multicore processor line
- List of AMD graphics processing units
References
- ↑ 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 Smith, Ryan (December 12, 2016). "AMD Announces Radeon Instinct: GPU Accelerators for Deep Learning, Coming in 2017". Anandtech. http://www.anandtech.com/show/10905/amd-announces-radeon-instinct-deep-learning-2017.
- ↑ 2.0 2.1 Shrout, Ryan (December 12, 2016). "Radeon Instinct Machine Learning GPUs include Vega, Preview Performance". PC Per. https://www.pcper.com/reviews/Graphics-Cards/Radeon-Instinct-Machine-Learning-GPUs-include-Vega-Preview-Performance.
- ↑ WhyCry (December 12, 2016). "AMD announces first VEGA accelerator: RADEON INSTINCT MI25 for deep-learning". https://videocardz.com/64677/amd-announces-first-vega-accelerator-radeon-instinct-mi25-for-deep-learning.
- ↑ Mujtaba, Hassan (June 21, 2017). "AMD Radeon Instinct MI25 Accelerator With 16 GB HBM2 Specifications Detailed – Launches Today Along With Instinct MI8 and Instinct MI6". https://wccftech.com/amd-radeon-instinct-mi25-mi8-mi6-graphics-accelerators/.
- ↑ "Radeon Instinct MI6". AMD. http://instinct.radeon.com/product/mi/radeon-instinct-mi6/.[yes|permanent dead link|dead link}}]
- ↑ "Radeon Instinct MI8". AMD. http://instinct.radeon.com/product/mi/radeon-instinct-mi8/.[yes|permanent dead link|dead link}}]
- ↑ "Radeon Instinct MI25". AMD. http://instinct.radeon.com/product/mi/radeon-instinct-mi25/.[yes|permanent dead link|dead link}}]
- ↑ "AMD CDNA 3 Architecture". AMD. https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf.
- ↑ "AMD INSTINCT MI300A APU". AMD. https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/data-sheets/amd-instinct-mi300a-data-sheet.pdf.
- ↑ "AMD INSTINCT MI300X APU". AMD. https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/data-sheets/amd-instinct-mi300x-data-sheet.pdf.
- ↑ 11.0 11.1 Kampman, Jeff (December 12, 2016). "AMD opens up machine learning with Radeon Instinct". TechReport. https://techreport.com/review/31093/amd-opens-up-machine-learning-with-radeon-instinct.
- ↑ 12.0 12.1 12.2 Shrout, Ryan (12 December 2016). "Radeon Instinct Machine Learning GPUs include Vega, Preview Performance". PC Per. https://www.pcper.com/reviews/Graphics-Cards/Radeon-Instinct-Machine-Learning-GPUs-include-Vega-Preview-Performance. Retrieved 12 December 2016.
- ↑ 13.0 13.1 13.2 Kampman, Jeff (12 December 2016). "AMD opens up machine learning with Radeon Instinct". TechReport. https://techreport.com/review/31093/amd-opens-up-machine-learning-with-radeon-instinct. Retrieved 12 December 2016.
- ↑ "Radeon Instinct MI6". AMD. http://instinct.radeon.com/product/mi/radeon-instinct-mi6/. Retrieved 22 June 2017.
- ↑ "AMD Radeon Instinct MI6 Specs". https://www.techpowerup.com/gpu-specs/radeon-instinct-mi6.c2927.
- ↑ "Radeon Instinct MI8". AMD. http://instinct.radeon.com/product/mi/radeon-Instinkt-mi8/. Retrieved 22 June 2017.
- ↑ "AMD Radeon Instinct MI8 Specs". https://www.techpowerup.com/gpu-specs/radeon-instinct-mi8.c2928.
- ↑ Smith, Ryan (5 January 2017). "The AMD Vega Architecture Teaser: Higher IPC, Tiling, & More, coming in H1'2017". Anandtech.com. http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser. Retrieved 10 January 2017.
- ↑ "Radeon Instinct MI25". AMD. http://instinct.radeon.com/product/mi/radeon-instinct-mi25/. Retrieved 22 June 2017.
- ↑ "AMD Radeon Instinct MI25 Specs". https://www.techpowerup.com/gpu-specs/radeon-instinct-mi25.c2983.
- ↑ "AMD Radeon Instinct MI25 MxGPU Specs". https://www.techpowerup.com/gpu-specs/radeon-instinct-mi25-mxgpu.c3269.
- ↑ 22.0 22.1 "Next Horizon – David Wang Presentation". AMD. https://www.amd.com/system/files/documents/next_horizon_david_wang_presentation.pdf.
- ↑ "Radeon Instinct MI50". AMD. https://www.amd.com/en/products/professional-graphics/instinct-mi50.
- ↑ "Radeon Instinct MI50 Datasheet". AMD. https://www.amd.com/system/files/documents/radeon-instinct-mi50-datasheet.pdf.
- ↑ "Hands on with the AMD Radeon VII". Jarred Walton. https://www.pcgamer.com/hands-on-with-the-amd-radeon-vii/.
- ↑ "Radeon Instinct MI60". AMD. https://www.amd.com/en/products/professional-graphics/instinct-mi60.
- ↑ "Radeon Instinct MI60 Datasheet". AMD. https://www.amd.com/system/files/documents/radeon-instinct-mi60-datasheet.pdf.
Original source: https://en.wikipedia.org/wiki/AMD_Instinct