SUNNYVALE, CA — AMD is bringing the Radeon Instinct family of server accelerators — the MI6, MI8 and MI25 — to heterogeneous computing and HPC systems. Building on the capabilities of AMD FirePro S-Series Server GPUs, Radeon Instinct raises the bar on achievable performance and efficiency, and provides the flexibility needed to design datacenters capable of meeting the challenges of today’s data-centric deep learning and HPC workloads.
Radeon Instinct’s support for the open ROCm software platform provides the foundation for world-class datacenter system designs, with performance-optimized Linux drivers, compilers, tools and libraries. Combined with AMD’s secure, hardware-virtualized MxGPU technology, this enables customers to change how they design their systems, achieving higher efficiencies and more optimized datacenters.
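To give a concrete sense of what the ROCm stack compiles and runs, here is a minimal sketch of a HIP vector-add program, assuming the ROCm/HIP toolchain with the hipcc compiler is installed; the file and kernel names are illustrative only and are not part of this announcement.

    // vadd.cpp - minimal HIP example; build with: hipcc vadd.cpp -o vadd
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Each GPU thread adds one pair of elements.
    __global__ void vadd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);
        float *da, *db, *dc;
        // Error checking omitted for brevity.
        hipMalloc((void**)&da, n * sizeof(float));
        hipMalloc((void**)&db, n * sizeof(float));
        hipMalloc((void**)&dc, n * sizeof(float));
        hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);
        // 256 threads per block, enough blocks to cover all n elements.
        hipLaunchKernelGGL(vadd, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);
        hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  // expect 3.0
        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }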
Based on AMD’s next-generation “Vega” architecture, the Radeon Instinct MI25 server accelerator is designed for large-scale machine intelligence and deep learning datacenter applications. The passively cooled GPU server card delivers up to 24.6 TFLOPS of peak half-precision or 12.3 TFLOPS of peak single-precision compute performance, with 16GB of ultra-high-bandwidth HBM2 GPU memory.
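As a rough sanity check on those figures (a sketch that assumes the “Vega”-based part’s commonly cited 4,096 stream processors and a peak engine clock near 1.5 GHz, neither of which is stated in this announcement):

    peak FP32 ≈ 4,096 stream processors × 2 FLOPs per FMA × ~1.5 GHz ≈ 12.3 TFLOPS
    peak FP16 ≈ 2 × peak FP32 ≈ 24.6 TFLOPS (via packed half-precision math)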
The Radeon Instinct MI6 server accelerator is based on the “Polaris” architecture and is a versatile edge training and inference accelerator for machine intelligence and deep learning applications. The Radeon Instinct MI6 delivers 5.7 TFLOPS of peak half- or single-precision compute performance with 16GB of GDDR5 GPU memory, and is a cost-effective solution for general-purpose HPC-class systems.
The Radeon Instinct MI8 server accelerator, based on the “Fiji” architecture, is an efficient server accelerator ideal for cost-sensitive datacenter deployments of machine intelligence and deep learning inference applications. The Radeon Instinct MI8 combines 8.2 TFLOPS of peak half- or single-precision compute performance with exceptional memory performance enabled by high-bandwidth HBM1 memory.