
Page 1

Schedule Number: GS-35F-0431N

Call our technical sales team at 508.746.7341 · www.microway.com

Software Integration Experts
• NVIDIA CUDA® Toolkit installed and configured
• Intel, PGI with OpenACC and GNU Compilers
• Linux (CentOS, RHEL, Ubuntu, more) or Windows Server
• Microway Cluster Management Software (MCMS™)
• Legendary Service and Technical Support
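A quick way to confirm that a pre-installed CUDA Toolkit and driver are working is to enumerate the GPUs from a short CUDA program. The sketch below is illustrative only (file and program names are arbitrary) and assumes nvcc is on the PATH, as it would be on a Microway-integrated system.

```cpp
// check_gpus.cu -- minimal sketch: verify the CUDA Toolkit and driver can see the installed GPUs.
// Build (assuming nvcc is on the PATH): nvcc check_gpus.cu -o check_gpus
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, %.1f GB, compute capability %d.%d\n",
                    i, prop.name, prop.totalGlobalMem / 1.0e9, prop.major, prop.minor);
    }
    return 0;
}
```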

NVIDIA® DGX-1™ with Tesla V100
AI Supercomputing for Your Application
• 1 Tensor PFLOPS for Deep Learning & AI
• 40,960 NVIDIA CUDA Cores
• Enhanced NVLink Interconnect (300 GB/sec/GPU)
• 8 Tesla V100 GPU Accelerators
• Deep Learning & AI Appliance: ready-to-run frameworks, easy-to-use Docker interface

Octoputer™ Server Single Root Complex

Highest Capacity
Scale up to 140 TFLOPS with 8 or 10 Tesla GPUs on a single PCI-Express tree. Ideal for FFTs, well-optimized GPU apps, and those leveraging GPUDirect Peer-to-Peer. Optional InfiniBand for HPC clustering; high-capacity RAID; ultra-fast NVMe flash storage or Quadro® professional graphics.
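As a rough illustration of the GPUDirect Peer-to-Peer traffic this single-root topology favors, the hedged sketch below copies a buffer directly from GPU 0 to GPU 1 with the CUDA runtime. Device numbering is assumed and error checking is omitted for brevity.

```cpp
// p2p_copy.cu -- minimal sketch: direct GPU-to-GPU copy between two devices on a shared PCI-Express tree.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 64 << 20;   // 64 MB test buffer
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) {
        std::printf("Peer access between GPU 0 and GPU 1 is not available.\n");
        return 1;
    }

    void *buf0 = nullptr, *buf1 = nullptr;
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   // let GPU 0 address GPU 1
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);   // and vice versa
    cudaMalloc(&buf1, bytes);

    // With peer access enabled, the copy travels GPU-to-GPU without staging through host memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();
    std::printf("Copied %zu bytes from GPU 0 to GPU 1 peer-to-peer.\n", bytes);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```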

NumberSmasher® 1U Tesla V100 PCI-E Server

Cost-Effective Density
Customize to your workload with this powerful, versatile configuration. Supporting up to 4 PCIe GPUs, plus high-speed fabric and storage, these 1U servers offer a compact footprint. With several CPU, GPU, and storage configurations available, our experts optimize these servers for your critical applications.

NumberSmasher® 1U with 4 Tesla V100 GPUs

GPU-to-GPU NVLink
Compute faster with four fully-connected, NVLink-enabled NVIDIA Tesla V100 GPUs. With a balanced configuration of two Intel Xeon Scalable Processors and four Tesla V100 GPUs, this server offers the highest-density platform for HPC & AI clusters. Also available with high-speed fabrics and NVMe storage.
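To see how the four GPUs are wired together, the CUDA runtime can report, for each pair of devices, whether peer access is possible and a relative ranking of the link between them (NVLink and PCIe paths report different ranks). A minimal sketch, assuming default device numbering:

```cpp
// gpu_topology.cu -- minimal sketch: report peer access and relative link ranking between GPU pairs.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int src = 0; src < n; ++src) {
        for (int dst = 0; dst < n; ++dst) {
            if (src == dst) continue;
            int access = 0, rank = 0;
            cudaDeviceCanAccessPeer(&access, src, dst);
            cudaDeviceGetP2PAttribute(&rank, cudaDevP2PAttrPerformanceRank, src, dst);
            std::printf("GPU %d -> GPU %d: peer access %s, link performance rank %d\n",
                        src, dst, access ? "yes" : "no", rank);
        }
    }
    return 0;
}
```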

IBM® Power AC922

CPU:GPU NVLink, Simplest Programming
Accelerate like CORAL with 3 ground-breaking advancements unavailable elsewhere: CPU:GPU coherence (for the world's simplest GPU programming), CPU:GPU NVLink (for 5X the bandwidth), and POWER9 CPUs designed from the ground up for HPC & AI performance.
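The "simplest programming" claim refers to the coherent address space shared by the POWER9 CPUs and the GPUs: code can work on a single allocation from both sides without explicit copies. A minimal sketch using CUDA Unified Memory (which also runs on x86 servers, though without the AC922's hardware coherence over NVLink):

```cpp
// coherent_saxpy.cu -- minimal sketch: one pointer visible to both CPU and GPU, no explicit cudaMemcpy.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    // Unified Memory: a single allocation the CPU initializes and the GPU updates.
    // On the Power AC922, NVLink plus address translation extends this coherence
    // to ordinary system memory as well.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    std::printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```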

BUILT FROM THE GROUND UP FOR THE INTERSECTION OF AI & HPC

NVIDIA® Tesla® V100 and P100 Solutions

NVIDIA® Tesla® V100 GPU Accelerator
Most Advanced Datacenter GPU Ever Built

• 7.8 TFLOPS FP64 | 15.7 TFLOPS FP32 | 125 Tensor TFLOPS
• Enhanced NVLink™ interconnect: 300 GB/sec data pipe per GPU
• 32 GB HBM2 Memory, 900 GB/s Memory Bandwidth
• Leading HPC, DL Training, or DL Inference Performance
• Full CPU:GPU Memory Coherence in IBM Power AC922
• Socketed SXM2 (NVLink) and PCIe 3.0 Form Factors
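For context, these peak figures follow from the V100's 80 SMs at a boost clock of roughly 1.53 GHz: 5,120 FP32 cores × 2 FMA operations × 1.53 GHz ≈ 15.7 TFLOPS, half that rate for the 2,560 FP64 units, and 640 Tensor Cores × 128 FLOPS per clock × 1.53 GHz ≈ 125 Tensor TFLOPS. The Tensor Cores are exposed through the CUDA WMMA API; the hedged sketch below multiplies a single 16x16 half-precision tile with FP32 accumulation (file name is arbitrary; compile for sm_70).

```cpp
// tensor_core_tile.cu -- minimal sketch: one warp multiplies a 16x16 FP16 tile on V100 Tensor Cores.
// Build: nvcc -arch=sm_70 tensor_core_tile.cu -o tensor_core_tile
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tile_mma(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, a, 16);              // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);     // 16x16x16 FP16 multiply, FP32 accumulate
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *a; half *b; float *c;
    cudaMallocManaged(&a, 256 * sizeof(half));
    cudaMallocManaged(&b, 256 * sizeof(half));
    cudaMallocManaged(&c, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }

    tile_mma<<<1, 32>>>(a, b, c);   // a single warp drives the Tensor Core instruction
    cudaDeviceSynchronize();
    std::printf("c[0] = %.1f (expected 16.0)\n", c[0]);
    return 0;
}
```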


Page 2


Tesla GPU Cluster Options

Processor: Intel® Xeon® Scalable Processors with 4-28 cores; POWER9 with NVLink 2.0 processors, 16-24 cores
Processor Sockets: 1P, 2P, 4P
Compute Nodes: NumberSmasher 1U, 2U, 3U, 4U GPU Servers; IBM Power Systems® HPC Servers
GPU Options: NVIDIA® Tesla® V100, P100 and NVIDIA Quadro®
Visualization Node Option: NumberSmasher 1U, 2U, 3U, 4U GPU Servers
Memory per Node: 2P: up to 3 TB DDR4; 4P: up to 6 TB DDR4
Disk/Media Bays: 1U: 2x 2.5” Hard Drives; 2U, 4U: 8x 3.5” Hard Drives; OctoPuter: 48x 2.5” Hard Drives; diskless compute node configurations available
Cluster Interconnect: Mellanox® ConnectX®-5 EDR InfiniBand; Intel Omni-Path, 100GigE, 40GigE, 10GigE or Gigabit Ethernet
Management Network: IPMI v2.0 and Redfish
Storage Options: Gigabit Ethernet, InfiniBand Storage: NAS, Nexsan, IBM Spectrum Scale, LUSTRE, BeeGFS, or Panasas®
Operating Systems: Linux (CentOS, Red Hat, Ubuntu, more); Windows® Server
Compilers: CUDA, PGI with OpenACC, Intel and GNU
Cluster Software: MVAPICH2, OFED, and OpenMPI; SLURM, Torque and Moab, IBM Spectrum LSF
Monitoring & Management Software: Microway Cluster Management Software (MCMS); MPI Link-Checker™ and InfiniScope™; Bright Cluster Manager Standard or Advanced Edition; IBM Spectrum Cluster Manager
Cabinets and Infrastructure: APC® NetShelter™ Cabinets, APC PDUs and UPS Protection
Services: Optional onsite installation; Factory Pre-Installation Service (power & network pre-wiring, pre-installation of rails)
Hardware Warranty: 2 years with advanced replacement parts or return-to-factory; optional extended warranty term

Tesla GPU Clusters

Microway’s robust, NVIDIA GPU-based clusters offer high-speed interconnects, NVIDIA Quadro and Tesla GPUs, MCMS Remote Cluster Management and Monitoring Tools or the OpenHPC Stack, plus the NVIDIA CUDA® SDK and optional PGI® Compilers.
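As a concrete example of how applications consume that stack, the hedged sketch below pairs each MPI rank with a local GPU and exchanges a small buffer. It assumes an MPI implementation such as OpenMPI or MVAPICH2 and stages data through host memory, so it does not depend on a CUDA-aware MPI build; file name and build line are illustrative.

```cpp
// mpi_gpu_hello.cu -- minimal sketch: bind each MPI rank to a local GPU and exchange a buffer.
// Example build (assuming an MPI compiler wrapper): nvcc -ccbin mpicxx mpi_gpu_hello.cu -o mpi_gpu_hello
#include <cstdio>
#include <vector>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Simple round-robin rank-to-GPU binding within a node.
    int gpus = 0;
    cudaGetDeviceCount(&gpus);
    if (gpus > 0) cudaSetDevice(rank % gpus);

    // Each rank fills a device buffer; rank 1 then sends its data to rank 0.
    // (A CUDA-aware MPI build could pass the device pointer to MPI_Send directly;
    // this sketch stages through host memory, which works with any MPI.)
    const int n = 1024;
    float *d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(float));
    std::vector<float> h_buf(n, static_cast<float>(rank));
    cudaMemcpy(d_buf, h_buf.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    if (size > 1) {
        if (rank == 1) {
            cudaMemcpy(h_buf.data(), d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
            MPI_Send(h_buf.data(), n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(h_buf.data(), n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank 0 received %.0f from rank 1\n", h_buf[0]);
        }
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```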

Microway provides these fully integrated Linux clusters at very competitive prices. Users worldwide pushing the limits of technology in life sciences, universities, and commercial and government research count on our expertise and attention to detail.

Delivering Innovative HPC Solutions Since 1982

Microway Scalable Tesla GPU Clusters