SALT LAKE CITY — SGI is now offering Nvidia Tesla K20 and K20X GPU accelerators across its entire server product line. Completely integrated and tested in SGI's manufacturing facility, the
solutions include SGI Management Center software and options such as SGI Performance Suite and SGI InfiniteStorage to make customers productive quickly.
The Tesla K20 family of GPU accelerators is among the highest-performance, most efficient accelerator lines ever built. Based on the Nvidia Kepler compute architecture, the product family includes the Tesla K20X accelerator, the new flagship of the Tesla product line.
The Tesla K20X GPU accelerator can speed up applications by up to 10x when paired with leading CPUs. It features the new GK110 GPU with 2,688 cores, 3.95 teraflops of single-precision and 1.31 teraflops of double-precision peak processing capability, 6GB of on-board memory, and a memory bandwidth of 250GB/s. The Tesla K20 accelerator delivers 3.52 teraflops of single-precision and 1.17 teraflops of double-precision peak performance.
Both accelerators are powered by Nvidia CUDA, the parallel computing platform and programming model, and take advantage of innovative technologies like Dynamic Parallelism and Hyper-Q to boost performance and power efficiency.
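As an illustrative sketch only (not from the announcement), Dynamic Parallelism lets a kernel running on a GK110-class GPU such as the K20 or K20X launch child kernels itself, avoiding a round trip to the CPU. The kernel and variable names below are hypothetical:

```cuda
#include <cstdio>

// Child kernel: each thread scales one array element.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Parent kernel: uses Dynamic Parallelism (compute capability 3.5+,
// i.e. GK110-based parts like the Tesla K20/K20X) to launch the child
// kernel directly from device code.
__global__ void parent(float *data, int n) {
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    }
}

int main() {
    const int n = 1024;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));  // placeholder initialization
    parent<<<1, 32>>>(d, n);              // GPU-side launch happens inside
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```

Dynamic Parallelism requires compiling with relocatable device code for a 3.5+ target, e.g. `nvcc -arch=sm_35 -rdc=true`.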
SGI's GPU accelerator solutions are offered on the following platforms:
- SGI UV 2000, the "Big Brain Computer," with up to 4,096 cores and 64 TB of coherent main memory for in-memory GPU computing in the world's largest single image system.
- SGI UV 20, the power-packed small sibling of the SGI UV 2000, ideal for development or remote-office solutions, with more than 2.5 teraflops of compute, 1.5 TB of memory, four PCIe Gen 3 slots, and two internal I/O modules, all in a 2U package with four Intel Xeon E5-4600 processors and two Nvidia Tesla K20 accelerators.
- SGI Rackable twin-socket Intel Xeon servers, tailored to exact specifications for high density, high GPU-to-CPU ratios, high I/O, or high memory.
- SGI ICE X, the latest edition of SGI's award-winning high-performance computing scale-out blade server, with GPUs added via service nodes.