Updated: December 21, 2012 - nvidia.co.uk

Page 1: Updated: December 21, 2012

Page 2: Molecular Dynamics (MD) Applications

Application | Features Supported | GPU Perf | Release Status | Notes/Benchmarks
AMBER | PMEMD explicit solvent & GB implicit solvent | > 100 ns/day (JAC NVE on 2x K20s) | Released; multi-GPU, multi-node | AMBER 12, GPU Support Revision 12.1; http://ambermd.org/gpus/benchmarks.htm#Benchmarks
CHARMM | Implicit (5x) and explicit (2x) solvent via OpenMM | 2x C2070 equals 32-35x X5667 CPUs | Released; single & multi-GPU in a single node | Release C37b1; http://www.charmm.org/news/c37b1.html#postjump
DL_POLY | Two-body forces, link-cell pairs, Ewald SPME forces, Shake VV | 4x | Released; V 4.03; multi-GPU, multi-node supported | Source only, results published; http://www.stfc.ac.uk/CSE/randd/ccg/software/DL_POLY/25526.aspx
GROMACS | Implicit (5x) and explicit (2x) solvent via OpenMM | 165 ns/day (DHFR on 4x C2075s) | Released (4.5, single GPU); pre-beta (4.6, multi-GPU) | Release 4.6 is the first with multi-GPU support; Q4 2012 release
LAMMPS | Lennard-Jones, Gay-Berne, Tersoff & many more potentials | 3.5-18x on Titan | Released; multi-GPU, multi-node | http://lammps.sandia.gov/bench.html#desktop and http://lammps.sandia.gov/bench.html#titan
NAMD | Full electrostatics with PME and most simulation features | 4.0 ns/day (F1-ATPase on 1x K20X) | Released; 100M-atom capable; multi-GPU, multi-node | NAMD 2.9

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 3: New/Additional MD Applications Ramping

Application | Features Supported | GPU Perf | Release Status | Notes
Abalone | Simulations | 4-29x (on 1060 GPU) | Released, Version 1.8.51 | Single GPU; Agile Molecule, Inc.
Ascalaph | Computation of non-valent interactions | 4-29x (on 1060 GPU) | Released, Version 1.1.4 | Single GPU; Agile Molecule, Inc.
ACEMD | Written for use only on GPUs | 150 ns/day (DHFR on 1x K20) | Released | Single and multi-GPU; production bio-molecular dynamics (MD) software specially optimized to run on GPUs
Folding@Home | Powerful distributed-computing molecular dynamics system; implicit solvent and folding | Depends upon number of GPUs | Released | GPUs and CPUs; http://folding.stanford.edu
GPUGrid.net | High-performance all-atom biomolecular simulations; explicit solvent and binding | Depends upon number of GPUs | Released | NVIDIA GPUs only; GPUs get 4x the points of CPUs; http://www.gpugrid.net/
HALMD | Simple fluids and binary mixtures (pair potentials, high-precision NVE and NVT, dynamic correlations) | Up to 66x on 2090 vs. 1 CPU core | Released, Version 0.2.0 | Single GPU; http://halmd.org/benchmarks.html#supercooled-binary-mixture-kob-andersen
HOOMD-Blue | Written for use only on GPUs | Kepler 2x faster than Fermi | Released, Version 0.11.1 | Single and multi-GPU on 1 node; http://codeblue.umich.edu/hoomd-blue/
OpenMM | Implicit and explicit solvent, custom forces | Implicit: 127-213 ns/day; explicit: 18-55 ns/day (DHFR) | Released, Version 4.1.1 | Multi-GPU; library and application for molecular dynamics on high-performance hardware

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 4: Quantum Chemistry Applications

Application | Features Supported | GPU Perf | Release Status | Notes
Abinit | Local Hamiltonian, non-local Hamiltonian, LOBPCG algorithm, diagonalization/orthogonalization | 1.3-2.7x | Released since Version 6.12 | Multi-GPU support; www.abinit.org
ACES III | Integrating GPU scheduling into the SIAL programming language and SIP runtime environment | 10x on kernels | Under development | Multi-GPU support; http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/deumens_ESaccel_2012.pdf
ADF | Fock matrix, Hessians | TBD | Pilot project completed, under development | Multi-GPU support; www.scm.com
BigDFT | DFT; Daubechies wavelets; part of Abinit | 5-25x (1 CPU core to GPU kernel) | Released June 2009, current release 1.6 | Multi-GPU support; http://inac.cea.fr/L_Sim/BigDFT/news.html, http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/BigDFT-Formalism.pdf and http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/BigDFT-HPC-tues.pdf
Casino | TBD | TBD | Under development, Spring 2013 release | Multi-GPU support; http://www.tcm.phy.cam.ac.uk/~mdt26/casino.html
CP2K | DBCSR (sparse matrix multiply library) | 2-7x | Under development | Multi-GPU support; http://www.olcf.ornl.gov/wp-content/training/ascc_2012/friday/ACSS_2012_VandeVondele_s.pdf
GAMESS-US | Libqc with Rys quadrature algorithm; Hartree-Fock; MP2 and CCSD in Q4 2012 | 1.3-1.6x; 2.3-2.9x HF | Released | Multi-GPU support; next release Q4 2012; http://www.msg.ameslab.gov/gamess/index.html

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 5: Quantum Chemistry Applications (continued)

Application | Features Supported | GPU Perf | Release Status | Notes
GAMESS-UK | (ss|ss)-type integrals within calculations using Hartree-Fock ab initio methods and density functional theory; supports organics & inorganics | 8x | Release in 2012 | Multi-GPU support; http://www.ncbi.nlm.nih.gov/pubmed/21541963
Gaussian | Joint PGI, NVIDIA & Gaussian collaboration | TBD | Under development | Multi-GPU support; announced Aug. 29, 2011; http://www.gaussian.com/g_press/nvidia_press.htm
GPAW | Electrostatic Poisson equation, orthonormalizing of vectors, residual minimization method (RMM-DIIS) | 8x | Released | Multi-GPU support; https://wiki.fysik.dtu.dk/gpaw/devel/projects/gpu.html; Samuli Hakala (CSC Finland) & Chris O'Grady (SLAC)
Jaguar | Investigating GPU acceleration | TBD | Under development | Multi-GPU support; Schrodinger, Inc.; http://www.schrodinger.com/kb/278
MOLCAS | CUBLAS support | 1.1x | Released, Version 7.8 | Single GPU; additional GPU support coming in Version 8; www.molcas.org
MOLPRO | Density-fitted MP2 (DF-MP2), density-fitted local correlation methods (DF-RHF, DF-KS), DFT | 1.7-2.3x projected | Under development | Multiple GPUs; www.molpro.net; Hans-Joachim Werner

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 6: Quantum Chemistry Applications (continued)

Application | Features Supported | GPU Perf | Release Status | Notes
MOPAC2009 | Pseudodiagonalization, full diagonalization, and density matrix assembling | 3.8-14x | Under development | Single GPU; academic port; http://openmopac.net
NWChem | Triples part of Reg-CCSD(T); CCSD & EOMCCSD task schedulers | 3-10x projected | Release targeting March 2013 | Multiple GPUs; development GPGPU benchmarks: www.nwchem-sw.org and http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/Krishnamoorthy-ESCMA12.pdf
Octopus | DFT and TDDFT | TBD | Released | http://www.tddft.org/programs/octopus/
PEtot | Density functional theory (DFT) plane-wave pseudopotential calculations | 6-10x | Released | Multi-GPU; first-principles materials code that computes the behavior of the electron structures of materials
Q-CHEM | RI-MP2 | 8-14x | Released, Version 4.0 | http://www.q-chem.com/doc_for_web/qchem_manual_4.0.pdf

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 7: Quantum Chemistry Applications (continued)

Application | Features Supported | GPU Perf | Release Status | Notes
QMCPACK | Main features | 3-4x | Released | Multiple GPUs; NCSA, University of Illinois at Urbana-Champaign; http://cms.mcc.uiuc.edu/qmcpack/index.php/GPU_version_of_QMCPACK
Quantum Espresso/PWscf | PWscf package: linear algebra (matrix multiply), explicit computational kernels, 3D FFTs | 2.5-3.5x | Released, Version 5.0 | Multiple GPUs; created by the Irish Centre for High-End Computing; http://www.quantum-espresso.org/index.php and http://www.quantum-espresso.org/
TeraChem | "Full GPU-based solution" | 44-650x vs. GAMESS CPU version | Released, Version 1.5 | Multi-GPU/single node; completely redesigned to exploit GPU parallelism; YouTube: http://youtu.be/EJODzk6RFxE?hd=1 and http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/Luehr-ESCMA.pdf
VASP | Hybrid Hartree-Fock DFT functionals including exact exchange | 2x (2 GPUs comparable to 128 CPU cores) | Available on request | Multiple GPUs; by Carnegie Mellon University; http://arxiv.org/pdf/1111.0716.pdf
WL-LSMS | Generalized Wang-Landau method | 3x with 32 GPUs vs. 32 (16-core) CPUs | Under development | Multi-GPU support; NICS Electronic Structure Determination Workshop 2012: http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/Eisenbach_OakRidge_February.pdf

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 8: Viz, "Docking" and Related Applications Growing

Application | Features Supported | GPU Perf | Release Status | Notes
Amira 5® | 3D visualization of volumetric data and surfaces | 70x | Released, Version 5.3.3 | Single GPU; visualization from Visage Imaging; next release, 5.4, will use the GPU for general-purpose processing in some functions; http://www.visageimaging.com/overview.html
BINDSURF | Allows fast processing of large ligand databases | 100x | Available upon request to authors | Single GPU; high-throughput parallel blind virtual screening; http://www.biomedcentral.com/1471-2105/13/S14/S13
BUDE | Empirical free-energy forcefield | 6.5-13.4x | Released | Single GPU; University of Bristol; http://www.bris.ac.uk/biochemistry/cpfg/bude/bude.htm
Core Hopping | GPU-accelerated application | 3.75-5000x | Released, Suite 2011 | Single and multi-GPU; Schrodinger, Inc.; http://www.schrodinger.com/products/14/32/
FastROCS | Real-time shape similarity searching/comparison | 800-3000x | Released | Single and multi-GPU; OpenEye Scientific Software; http://www.eyesopen.com/fastrocs
PyMol | Lines: 460% increase; cartoons: 1246% increase; surfaces: 1746% increase; spheres: 753% increase; ribbons: 426% increase | 1700x | Released, Version 1.5 | Single GPU; http://pymol.org/
VMD | High-quality rendering; large structures (100 million atoms); analysis and visualization tasks; multiple-GPU support for display of molecular orbitals | 100-125x or greater on kernels | Released, Version 1.9 | Visualization from University of Illinois at Urbana-Champaign; http://www.ks.uiuc.edu/Research/vmd/

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 9: Bioinformatics Applications

Application | Features Supported | GPU Speedup | Release Status | Website
BarraCUDA | Alignment of short sequencing reads | 6-10x | Version 0.6.2, 3/2012; multi-GPU, multi-node | http://seqbarracuda.sourceforge.net/
CUDASW++ | Parallel search of Smith-Waterman database | 10-50x | Version 2.0.8, Q1/2012; multi-GPU, multi-node | http://sourceforge.net/projects/cudasw/
CUSHAW | Parallel, accurate long-read aligner for large genomes | 10x | Version 1.0.40, 6/2012; multiple GPUs | http://cushaw.sourceforge.net/
GPU-BLAST | Protein alignment according to BLASTP | 3-4x | Version 2.2.26, 3/2012; single GPU | http://eudoxus.cheme.cmu.edu/gpublast/gpublast.html
GPU-HMMER | Parallel local and global search of Hidden Markov Models | 60-100x | Version 2.3.2, Q1/2012; multi-GPU, multi-node | http://www.mpihmmer.org/installguideGPUHMMER.htm
mCUDA-MEME | Scalable motif discovery algorithm based on MEME | 4-10x | Version 3.0.12; multi-GPU, multi-node | https://sites.google.com/site/yongchaosoftware/mcuda-meme
SeqNFind | Hardware and software for reference assembly, BLAST, SW, HMM, de novo assembly | 400x | Released; multi-GPU, multi-node | http://www.seqnfind.com/
UGENE | Fast short-read alignment | 6-8x | Version 1.11, 5/2012; multi-GPU, multi-node | http://ugene.unipro.ru/
WideLM | Parallel linear regression on multiple similarly-shaped models | 150x | Version 0.1-1, 3/2012; multi-GPU, multi-node | http://insilicos.com/products/widelm

GPU perf compared against the same or similar code running on a single-CPU machine. Performance measured internally or independently.

Page 11: MD Average Speedups

The blue node contains Dual E5-2687W CPUs (8 cores per CPU). The green nodes contain Dual E5-2687W CPUs (8 cores per CPU) and 1 or 2 NVIDIA K10, K20, or K20X GPUs.

Average speedup calculated from 4 AMBER, 3 NAMD, 3 LAMMPS, and 1 GROMACS test cases. Error bars show the maximum and minimum speedup for each hardware configuration.

[Chart: performance relative to CPU only (y-axis 0-10) for CPU, CPU + K10, CPU + K20, CPU + K20X, CPU + 2x K10, CPU + 2x K20, CPU + 2x K20X]


Page 14: Built from the Ground Up for GPUs - Computational Chemistry

What: Study disease & discover drugs; predict drug and protein interactions.
Why: Speed of simulations is critical. Faster simulation enables study of longer timeframes, larger systems, and more simulations.
How: GPUs increase throughput & accelerate simulations.

AMBER 11 application: 4.6x performance increase with 2 GPUs at only a 54% added cost*
• AMBER 11 Cellulose NPT on 2x E5670 CPUs + 2x Tesla C2090s (per node) vs. 2x E5670 CPUs (per node)
• Cost of a CPU node assumed to be $9,333. Cost of adding two (2) C2090s to a single node assumed to be $5,333.

GPU-ready applications: Abalone, ACEMD, AMBER, DL_POLY, GAMESS, GROMACS, LAMMPS, NAMD, NWChem, Q-CHEM, Quantum Espresso, TeraChem
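The cost claim above is plain arithmetic on the slide's assumed prices; a quick sketch (the dollar figures and the 4.6x speedup are the slide's stated assumptions; note the raw cost ratio computes to about 57%, close to the slide's quoted 54%):

```python
# Sanity-check of the AMBER 11 price/performance arithmetic above.
# Dollar figures and the 4.6x speedup are the slide's stated assumptions.
cpu_node_cost = 9333      # dual E5670 CPU node, USD
gpu_addon_cost = 5333     # two Tesla C2090s, USD
speedup = 4.6             # 2-GPU node vs. CPU-only node (Cellulose NPT)

added_cost = gpu_addon_cost / cpu_node_cost
throughput_per_dollar_gain = speedup / (1 + added_cost)

print(f"added cost fraction: {added_cost:.0%}")
print(f"throughput per dollar vs. CPU-only: {throughput_per_dollar_gain:.1f}x")
```

On these numbers, the GPU-equipped node delivers roughly 2.9x more throughput per dollar than the CPU-only node.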

Page 15: Updated: December 21, 2012 - nvidia.co.uk

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012 15

AMBER 12

GPU Support Revision 12.1

12/21/2012

Page 16: Protein Folding Simulation with AMBER Accelerated by GPUs

4-core CPU: 80 ns/day
4-core CPU + Tesla M2070 GPU: 367 ns/day (4.7x faster)

Data courtesy of AMBER.org

Page 17: Kepler - Our Fastest Family of GPUs Yet

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Running AMBER 12 GPU Support Revision 12.1. The blue node contains Dual E5-2687W CPUs (8 cores per CPU). The green nodes contain Dual E5-2687W CPUs (8 cores per CPU) and a single NVIDIA M2090, K10, K20, or K20X GPU.

Factor IX throughput (nanoseconds/day): 1 CPU node: 3.42; + M2090: 11.85 (3.5x); + K10: 18.90 (5.6x); + K20: 22.44 (6.6x); + K20X: 25.39 (7.4x).

GPU speedup/throughput increased from 3.5x (with M2090) to 7.4x (with K20X) when compared to a CPU-only node.
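The speedup labels on this chart are simply each node's throughput divided by the CPU-only node's 3.42 ns/day; recomputing them is a one-liner (values transcribed from the chart; the ratios agree with the printed labels to within rounding):

```python
# Recompute the Factor IX speedup labels from the chart's ns/day values.
cpu_only = 3.42  # ns/day, dual E5-2687W node, AMBER 12
gpu_nodes = {"M2090": 11.85, "K10": 18.90, "K20": 22.44, "K20X": 25.39}

for gpu, ns_day in gpu_nodes.items():
    print(f"1 CPU node + {gpu}: {ns_day / cpu_only:.1f}x vs. CPU only")
```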

Page 18: K10 Accelerates Simulations of All Sizes

Running AMBER 12 GPU Support Revision 12.1. The blue node contains Dual E5-2687W CPUs (8 cores per CPU). The green nodes contain Dual E5-2687W CPUs (8 cores per CPU) and 1x NVIDIA K10 GPU.

Speedup vs. CPU only: TRPcage (GB): 2.00; JAC NVE (PME): 5.50; Factor IX NVE (PME): 5.53; Cellulose NVE (PME): 5.04; Myoglobin (GB): 19.98; Nucleosome (GB): 24.00.

Gain 24x performance (Nucleosome) by adding just 1 GPU when compared to dual-CPU performance.

Page 19: K20 Accelerates Simulations of All Sizes

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Running AMBER 12 GPU Support Revision 12.1, SPFP with CUDA 4.2.9, ECC off. The blue node contains 2x Intel E5-2687W CPUs (8 cores per CPU). Each green node contains 2x Intel E5-2687W CPUs (8 cores per CPU) plus 1x NVIDIA K20 GPU.

Speedup vs. CPU only (1.00): TRPcage (GB): 2.66; JAC NVE (PME): 6.50; Factor IX NVE (PME): 6.56; Cellulose NVE (PME): 7.28; Myoglobin (GB): 25.56; Nucleosome (GB): 28.00.

Gain 28x throughput/performance (Nucleosome) by adding just one K20 GPU when compared to dual-CPU performance.

Page 20: K20X Accelerates Simulations of All Sizes

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Running AMBER 12 GPU Support Revision 12.1. The blue node contains Dual E5-2687W CPUs (8 cores per CPU). The green nodes contain Dual E5-2687W CPUs (8 cores per CPU) and 1x NVIDIA K20X GPU.

Speedup vs. CPU only: TRPcage (GB): 2.79; JAC NVE (PME): 7.15; Factor IX NVE (PME): 7.43; Cellulose NVE (PME): 8.30; Myoglobin (GB): 28.59; Nucleosome (GB): 31.30.

Gain 31x performance (Nucleosome) by adding just one K20X GPU when compared to dual-CPU performance.

Page 21: K10 Strong Scaling over Nodes

Running AMBER 12 with CUDA 4.2, ECC off. The blue nodes contain 2x Intel X5670 CPUs (6 cores per CPU). The green nodes contain 2x Intel X5670 CPUs (6 cores per CPU) plus 2x NVIDIA K10 GPUs.

Cellulose, 408K atoms (NPT), nanoseconds/day over 1, 2, and 4 nodes: the GPU nodes outperform the CPU-only nodes by 5.1x, 3.6x, and 2.4x respectively.

GPUs significantly outperform CPUs while scaling over multiple nodes.

Page 22: Kepler - Universally Faster

The Kepler GPUs accelerated all simulations, up to 8x.

Running AMBER 12 GPU Support Revision 12.1. The CPU-only node contains Dual E5-2687W CPUs (8 cores per CPU). The Kepler nodes contain Dual E5-2687W CPUs (8 cores per CPU) and 1x NVIDIA K10, K20, or K20X GPU.

[Chart: speedup compared to CPU only (y-axis 0-9) for CPU Only, CPU + K10, CPU + K20, CPU + K20X on JAC, Factor IX, and Cellulose]

Page 23: K10 Extreme Performance

Gain 7.8x performance by adding just 2 GPUs when compared to dual-CPU performance.

Running AMBER 12 GPU Support Revision 12.1. The blue node contains Dual E5-2687W CPUs (8 cores per CPU). The green node contains Dual E5-2687W CPUs (8 cores per CPU) and 2x NVIDIA K10 GPUs.

DHFR (JAC, 23K atoms, NVE): 12.47 ns/day (CPU only) vs. 97.99 ns/day (with 2x K10).

Page 24: K20 Extreme Performance

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Gain > 7.5x throughput/performance by adding just 2 K20 GPUs when compared to dual-CPU performance.

Running AMBER 12 GPU Support Revision 12.1, SPFP with CUDA 4.2.9, ECC off. The blue node contains 2x Intel E5-2687W CPUs (8 cores per CPU). Each green node contains 2x Intel E5-2687W CPUs (8 cores per CPU) plus 2x NVIDIA K20 GPUs.

DHFR (JAC, 23K atoms, NVE): 12.47 ns/day (CPU only) vs. 95.59 ns/day (with 2x K20).

Page 25: Replace 8 Nodes with 1 K20 GPU

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Cut down simulation costs to ¼ and gain higher performance.

Running AMBER 12 GPU Support Revision 12.1, SPFP with CUDA 4.2.9, ECC off. The eight (8) blue nodes each contain 2x Intel E5-2687W CPUs (8 cores, $2,000 per CPU). The green node contains 2x Intel E5-2687W CPUs (8 cores per CPU) plus 1x NVIDIA K20 GPU ($2,500 per GPU).

DHFR: 8 CPU nodes: 65.00 ns/day at $32,000; 1 node + K20: 81.09 ns/day at $6,500.
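The dollar figures above follow directly from the per-part prices the slide assumes; a short sketch comparing throughput per dollar (all prices are the slide's assumptions):

```python
# Price/performance comparison built from the slide's DHFR numbers.
# All prices are the slide's stated assumptions.
configs = {
    "8 CPU nodes":  {"ns_per_day": 65.00, "cost": 8 * 2 * 2000},    # 2x $2,000 CPUs per node
    "1 node + K20": {"ns_per_day": 81.09, "cost": 2 * 2000 + 2500},  # 2 CPUs + $2,500 K20
}

for name, cfg in configs.items():
    per_kilodollar = cfg["ns_per_day"] / (cfg["cost"] / 1000)
    print(f"{name}: ${cfg['cost']:,} -> {per_kilodollar:.1f} ns/day per $1,000")
```

The single K20 node is not only faster outright (81.09 vs. 65.00 ns/day) but delivers roughly six times the throughput per dollar.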

Page 26: Replace 7 Nodes with 1 K10 GPU

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Cut down simulation costs to ¼ and increase performance by 70%.

Running AMBER 12 GPU Support Revision 12.1, SPFP with CUDA 4.2.9, ECC off. The eight (8) blue nodes each contain 2x Intel E5-2687W CPUs (8 cores, $2,000 per CPU). The green node contains 2x Intel E5-2687W CPUs (8 cores per CPU) plus 1x NVIDIA K10 GPU ($3,000 per GPU).

[Charts for DHFR: cost, CPU only vs. GPU enabled (y-axis to $35,000); performance on JAC NVE, CPU only vs. GPU enabled (nanoseconds/day, y-axis to 80)]

Page 27: Extra CPUs Decrease Performance

When used with GPUs, dual CPU sockets perform worse than single CPU sockets.

Running AMBER 12 GPU Support Revision 12.1. The orange bars contain one E5-2687W CPU (8 cores per CPU). The blue bars contain Dual E5-2687W CPUs (8 cores per CPU).

[Chart: Cellulose NVE, nanoseconds/day (y-axis 0-8), CPU Only vs. CPU with dual K20s, for 1x E5-2687W and 2x E5-2687W]

Page 28: Kepler - Greener Science

The GPU-accelerated systems use 65-75% less energy. Energy expended = power x time; lower is better.

Running AMBER 12 GPU Support Revision 12.1. The blue node contains Dual E5-2687W CPUs (150W each, 8 cores per CPU). The green nodes contain Dual E5-2687W CPUs (8 cores per CPU) and 1x NVIDIA K10, K20, or K20X GPU (235W each).

[Chart: energy expended (kJ, y-axis 0-2500) in simulating 1 ns of DHFR JAC, for CPU Only, CPU + K10, CPU + K20, CPU + K20X]
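The energy metric here is simply node power multiplied by the wall-clock time needed to simulate 1 ns. A sketch using the slide's power figures; the ns/day throughputs are illustrative, borrowed from this deck's DHFR (JAC NVE) results (12.47 ns/day CPU-only, and a 6.5x single-K20 speedup from the earlier K20 chart):

```python
# Energy to simulate 1 ns = node power (W) x wall-clock seconds per simulated ns.
# Power figures come from the slide; the ns/day throughputs are illustrative,
# taken from this deck's DHFR (JAC NVE) benchmarks.
SECONDS_PER_DAY = 86400

def energy_kj_per_ns(node_watts, ns_per_day):
    """Kilojoules expended per simulated nanosecond."""
    return node_watts * (SECONDS_PER_DAY / ns_per_day) / 1000

cpu_watts = 2 * 150          # dual 150W E5-2687W CPUs
gpu_watts = cpu_watts + 235  # plus one 235W Kepler GPU

e_cpu = energy_kj_per_ns(cpu_watts, 12.47)        # CPU-only throughput
e_gpu = energy_kj_per_ns(gpu_watts, 6.5 * 12.47)  # ~6.5x with one K20

print(f"CPU only: {e_cpu:.0f} kJ/ns; CPU + K20: {e_gpu:.0f} kJ/ns")
print(f"energy saved: {1 - e_gpu / e_cpu:.0%}")
```

With these figures the GPU node expends roughly a quarter of the energy per simulated nanosecond, consistent with the 65-75% savings the slide reports.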

Page 29: Recommended GPU Node Configuration for AMBER Computational Chemistry

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Workstation or single-node configuration:
# of CPU sockets: 2
Cores per CPU socket: 4+ (1 CPU core drives 1 GPU)
CPU speed (GHz): 2.66+
System memory per node (GB): 16
GPUs: Kepler K10, K20, K20X; Fermi M2090, M2075, C2075
# of GPUs per CPU socket: 1-2 (4 GPUs on 1 socket is good for 4 fast serial GPU runs)
GPU memory preference (GB): 6
GPU-to-CPU connection: PCIe 2.0 x16 or higher
Server storage: 2 TB
Network configuration: InfiniBand QDR or better

Scale to multiple nodes with the same single-node configuration.

Page 30: Updated: December 21, 2012 - nvidia.co.uk

AMBER Benchmark Report, Revision 2.0, dated Nov. 5, 2012 30

Benefits of GPU AMBER Accelerated Computing

Faster than CPU only systems in all tests

Most major compute intensive aspects of classical MD ported

Large performance boost with marginal price increase

Energy usage cut by more than half

GPUs scale well within a node and over multiple nodes

K20 GPU is our fastest and lowest power high performance GPU yet

Try GPU accelerated AMBER for free – www.nvidia.com/GPUTestDrive

Page 31: NAMD 2.9

Page 32: Kepler - Our Fastest Family of GPUs Yet

NAMD Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Running NAMD version 2.9. The blue node contains Dual E5-2687W CPUs (8 cores per CPU). The green nodes contain Dual E5-2687W CPUs (8 cores per CPU) and a single NVIDIA M2090, K10, K20, or K20X GPU.

Apolipoprotein A1 (ApoA1) throughput (nanoseconds/day): 1 CPU node: 1.37; + M2090: 2.63 (1.9x); + K10: 3.45 (2.5x); + K20: 3.57 (2.6x); + K20X: 4.00 (2.9x).

GPU speedup/throughput increased from 1.9x (with M2090) to 2.9x (with K20X) when compared to a CPU-only node.

Page 33: Updated: December 21, 2012 - nvidia.co.uk

NAMD Benchmark Report, Revision 2.0, dated Nov. 5, 2012 33

Accelerates Simulations of All Sizes Running NAMD 2.9 with CUDA 4.0 ECC Off

The blue node contains 2x Intel E5-2687W CPUs

(8 Cores per CPU)

Each green node contains 2x Intel E5-2687W

CPUs (8 Cores per CPU) plus 1x NVIDIA K20 GPUs

Gain 2.5x throughput/performance by adding just 1 GPU

when compared to dual CPU performance

2.6

2.4

2.7

0

0.5

1

1.5

2

2.5

3

CPU All Molecules ApoA1 F1-ATPase STMV

Sp

eed

up

Co

mp

are

d t

o C

PU

On

ly

Apolipoprotein A1

Page 34: Kepler - Universally Faster

The Kepler GPUs accelerate all simulations, up to 5x; average acceleration is printed in the bars.

Running NAMD version 2.9. The CPU-only node contains Dual E5-2687W CPUs (8 cores per CPU). The Kepler nodes contain Dual E5-2687W CPUs (8 cores per CPU) and 1 or 2 NVIDIA K10, K20, or K20X GPUs.

[Chart: speedup compared to CPU only (y-axis 0-6) for 1x K10, 1x K20, 1x K20X, 2x K10, 2x K20, 2x K20X on ApoA1, F1-ATPase, and STMV; average accelerations printed in bars range from 2.4x (single GPU) to 5.1x (dual GPUs), with 2.6x, 2.9x, 4.3x, and 4.7x in between]

Page 35: Outstanding Strong Scaling with Multi-STMV

Running NAMD version 2.9. Each blue XE6 CPU node contains 1x AMD 1600 Opteron (16 cores per CPU). Each green XK6 CPU+GPU node contains 1x AMD 1600 Opteron (16 cores per CPU) and an additional 1x NVIDIA X2090 GPU.

100 STMV (a concatenation of 100 Satellite Tobacco Mosaic Virus systems) on hundreds of nodes: from 32 to 768 nodes, the GPU-accelerated XK6 sustains 2.7x-3.8x the throughput (nanoseconds/day) of the CPU-only XE6.

Accelerate your science by 2.7-3.8x when compared to CPU-based supercomputers.

Page 36: Replace 3 Nodes with 1 M2090 GPU

Speedup of 1.2x for 50% of the cost.

Running NAMD version 2.9. Each blue node contains 2x Intel Xeon X5550 CPUs (4 cores, $1,000 per CPU). The green node contains 2x Intel Xeon X5550 CPUs (4 cores, $1,000 per CPU) and 1x NVIDIA M2090 GPU ($2,000 each).

F1-ATPase: 4 CPU nodes: 0.63 ns/day at $8,000; 1 CPU node + 1x M2090: 0.74 ns/day at $4,000.

Page 37: Updated: December 21, 2012 - nvidia.co.uk

NAMD Benchmark Report, Revision 2.0, dated Nov. 5, 2012 37

K20 - Greener: Twice The Science Per Watt

Running NAMD version 2.9

Each blue node contains Dual E5-2687W

CPUs (95W, 4 Cores per CPU).

Each green node contains 2x Intel Xeon X5550

CPUs (95W, 4 Cores per CPU) and 2x NVIDIA

K20 GPUs (225W per GPU)

Energy Expended

= Power x Time

Lower is better

Cut down energy usage by ½ with GPUs

0

200000

400000

600000

800000

1000000

1200000

1 Node 1 Node + 2x K20

En

erg

y E

xp

en

ded

(kJ)

Energy Used in Simulating 1 Nanosecond of ApoA1

Page 38: Kepler - Greener: Twice the Science/Joule

Running NAMD version 2.9. The blue node contains Dual E5-2687W CPUs (150W each, 8 cores per CPU). The green nodes contain Dual E5-2687W CPUs (8 cores per CPU) and 2x NVIDIA K10, K20, or K20X GPUs (235W each).

Energy expended = power x time; lower is better. Cut energy usage in half with GPUs.

[Chart: energy expended (kJ, y-axis to 250,000) in simulating 1 ns of STMV (Satellite Tobacco Mosaic Virus), for CPU Only, CPU + 2 K10s, CPU + 2 K20s, CPU + 2 K20Xs]

Page 39: Recommended GPU Node Configuration for NAMD Computational Chemistry

NAMD Benchmark Report, Revision 2.0, dated Nov. 5, 2012

Workstation or single-node configuration:
# of CPU sockets: 2
Cores per CPU socket: 6+
CPU speed (GHz): 2.66+
System memory per socket (GB): 32
GPUs: Kepler K10, K20, K20X; Fermi M2090, M2075, C2075
# of GPUs per CPU socket: 1-2
GPU memory preference (GB): 6
GPU-to-CPU connection: PCIe 2.0 or higher
Server storage: 500 GB or higher
Network configuration: Gemini, InfiniBand

Scale to multiple nodes with the same single-node configuration.

Page 40: Updated: December 21, 2012 - nvidia.co.uk

NAMD Benchmark Report, Revision 2.0, dated Nov. 5, 2012 40

Summary/Conclusions Benefits of GPU Accelerated Computing

Faster than CPU only systems in all tests

Large performance boost with small marginal price increase

Energy usage cut in half

GPUs scale very well within a node and over multiple nodes

Tesla K20 GPU is our fastest and lowest power high performance GPU to date

Page 41: LAMMPS, Dec. 2011 or later

Page 42: More Science for Your Money

Blue node uses 2x E5-2687W CPUs (8 cores and 150W per CPU). Green nodes have 2x E5-2687W CPUs and 1 or 2 NVIDIA K10, K20, or K20X GPUs (235W).

LAMMPS Embedded Atom Model, speedup vs. CPU only: CPU + 1x K10: 1.7; + 1x K20: 2.47; + 1x K20X: 2.92; + 2x K10: 3.3; + 2x K20: 4.5; + 2x K20X: 5.5.

Experience performance increases of up to 5.5x with Kepler GPU nodes.

Page 43:

K20X, the Fastest GPU Yet Blue node uses 2x E5-2687W (8 Cores

and 150W per CPU).

Green nodes have 2x E5-2687W and 2

NVIDIA M2090s or K20X GPUs (235W).

Experience performance increases of up to 6.2x with Kepler GPU nodes. One K20X performs as well as two M2090s

[Chart: Speedup relative to CPU alone; CPU Only vs. CPU + 2x M2090, CPU + K20X, and CPU + 2x K20X]

Page 44:

Get a CPU Rebate to Fund Part of Your GPU Budget

Increase performance 18x when compared to CPU-only nodes

[Chart: Acceleration in loop-time computation by additional GPUs, normalized to CPU only; 1 Node + 1x M2090: 5.31, + 2x M2090: 9.88, + 3x M2090: 12.9, + 4x M2090: 18.2]

Cheaper CPUs used with GPUs AND still faster overall performance when

compared to more expensive CPUs!

Running NAMD version 2.9

The blue node contains Dual X5670 CPUs

(6 Cores per CPU).

The green nodes contain Dual X5570 CPUs

(4 Cores per CPU) and 1-4 NVIDIA M2090

GPUs.

Page 45:

Excellent Strong Scaling on Large Clusters

[Chart: Loop time (seconds) vs. nodes (300-900); LAMMPS Gay-Berne, 134M atoms; GPU-accelerated XK6 vs. CPU-only XE6; speedups of 3.55x, 3.45x, and 3.48x]

Each blue Cray XE6 node has 2x AMD Opteron CPUs (16 cores per CPU). Each green Cray XK6 node has 1x AMD Opteron 1600 CPU (16 cores per CPU) and 1x NVIDIA X2090.

From 300-900 nodes, the NVIDIA GPU-powered XK6 maintained roughly 3.5x performance compared to XE6 CPU nodes.

Page 46:

GPUs Sustain 5x Performance for Weak Scaling

[Chart: Loop time (seconds) vs. nodes (1-729); weak scaling with 32K atoms per node; GPU speedups of 4.8x, 5.8x, and 6.7x]

Performance of 4.8x-6.7x with GPU-accelerated nodes when compared to CPUs alone.

Each blue Cray XE6 node has 2x AMD Opteron CPUs (16 cores per CPU). Each green Cray XK6 node has 1x AMD Opteron 1600 CPU (16 cores per CPU) and 1x NVIDIA X2090.

Page 47:

Faster, Greener — Worth It!

Energy Expended = Power x Time

Lower is better

GPU-accelerated computing uses

53% less energy than CPU only

Power calculated by combining the component’s TDPs

Blue node uses 2x E5-2687W (8 Cores and 150W per CPU) and CUDA 4.2.9.

Green nodes have 2x E5-2687W and 1 or 2 NVIDIA K20X GPUs (235W) running CUDA 5.0.36.

[Chart: Energy expended (kJ) in one loop of EAM; 1 Node vs. 1 Node + 1x K20X vs. 1 Node + 2x K20X]
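The deck's "Energy Expended = Power x Time" comparisons can be reproduced with a few lines. This is a hedged sketch, not from the slides: the TDP values come from the caption above, but the runtimes are hypothetical placeholders chosen only to illustrate why a higher-power GPU node can still use less total energy when it finishes faster.

```python
# Sketch of the deck's energy arithmetic; runtimes are hypothetical.

CPU_TDP_W = 150.0   # E5-2687W, per the slide caption
GPU_TDP_W = 235.0   # K20X, per the slide caption

def node_power_watts(n_cpus: int, n_gpus: int) -> float:
    """Approximate node power by summing component TDPs (the slide's method)."""
    return n_cpus * CPU_TDP_W + n_gpus * GPU_TDP_W

def energy_kj(power_watts: float, runtime_s: float) -> float:
    """Energy Expended = Power x Time; watts x seconds = joules, then kJ."""
    return power_watts * runtime_s / 1000.0

# A dual-CPU node vs. the same node with 2x K20X: if the GPU node finishes
# a loop ~5x faster, its total energy is lower despite the higher draw.
cpu_only = energy_kj(node_power_watts(2, 0), runtime_s=500.0)  # 300 W * 500 s
gpu_node = energy_kj(node_power_watts(2, 2), runtime_s=100.0)  # 770 W * 100 s
print(cpu_only, gpu_node)  # 150.0 kJ vs. 77.0 kJ
```

The runtime ratio, not the raw wattage, is what drives the deck's "energy cut in half" claims.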

Page 48:

Molecular Dynamics with LAMMPS on a Hybrid Cray Supercomputer

W. Michael Brown, National Center for Computational Sciences

Oak Ridge National Laboratory

NVIDIA Technology Theater, Supercomputing 2012 November 14, 2012

Page 49:

Early Kepler Benchmarks on Titan

[Charts: Loop time (s) vs. nodes (1-16384), XK7+GPU vs. XK6 vs. XK6+GPU; panels: Atomic Fluid, Bulk Copper]

Page 50:

Early Kepler Benchmarks on Titan

[Charts: Loop time (s) vs. nodes (1-16384), XK7+GPU vs. XK6 vs. XK6+GPU; panels: Protein, Liquid Crystal]

Page 51:

Early Titan XK6/XK7 Benchmarks

Speedup with acceleration on XK6/XK7 nodes (1 node = 32K particles; 900 nodes = 29M particles):

                  Atomic Fluid    Atomic Fluid    Bulk      Protein   Liquid
                  (cutoff 2.5σ)   (cutoff 5.0σ)   Copper              Crystal
XK6 (1 node)      1.92            4.33            2.12      2.60      5.82
XK7 (1 node)      2.90            8.38            3.66      3.36      15.70
XK6 (900 nodes)   1.68            3.96            2.15      1.56      5.60
XK7 (900 nodes)   2.75            7.48            2.86      1.95      10.14

Page 52:

Recommended GPU Node Configuration for LAMMPS Computational Chemistry

Workstation or Single Node Configuration

# of CPU sockets 2

Cores per CPU socket 6+

CPU speed (GHz) 2.66+

System memory per socket (GB) 32

GPUs Kepler K10, K20, K20X

Fermi M2090, M2075, C2075

# of GPUs per CPU socket 1-2

GPU memory preference (GB) 6

GPU to CPU connection PCIe 2.0 or higher

Server storage 500 GB or higher

Network configuration Gemini, InfiniBand

Scale to multiple nodes with the same single-node configuration

Page 53:

GROMACS 4.6 Pre-Beta and 4.6 Beta

Page 54:

Great Scaling in Small Systems

Running GROMACS 4.6 pre-beta with CUDA 4.1

Each blue node contains 1x Intel X5550 CPU

(95W TDP, 4 Cores per CPU)

Each green node contains 1x Intel X5550 CPU

(95W TDP, 4 Cores per CPU) and 1x NVIDIA

M2090 (225W TDP per GPU)

Get up to 3.7x performance compared to CPU-only nodes

[Chart: Nanoseconds/day vs. number of nodes (1-3), CPU Only vs. With GPU; 8.36, 13.01, and 21.68 ns/day with GPUs, for speedups of 3.2x, 3.6x, and 3.7x]

Benchmark system: RNase in water with 16,816 atoms in a truncated dodecahedron box.

Page 55:

Additional Strong Scaling on Larger System

[Chart: Nanoseconds/day vs. number of nodes (8-128), 128K water molecules; CPU Only vs. With GPU; speedups of 2x, 2.8x, and 3.1x]

Running GROMACS 4.6 pre-beta with CUDA 4.1. Each blue node contains 1x Intel X5670 (95W TDP, 6 cores per CPU). Each green node contains 1x Intel X5670 (95W TDP, 6 cores per CPU) and 1x NVIDIA M2070 (225W TDP per GPU).

Up to 128 nodes, NVIDIA GPU-accelerated nodes deliver 2-3x performance when compared to CPU-only nodes.

Page 56:

Replace 3 Nodes with 2 GPUs

Running GROMACS 4.6 pre-beta with CUDA 4.1

The blue node contains 2x Intel X5550 CPUs

(95W TDP, 4 Cores, $1000 per CPU)

The green node contains 2x Intel X5550 CPUs

(95W TDP, 4 Cores, $1000 per CPU) and 2x

NVIDIA M2090s as the GPU (225W TDP, $2000

per GPU)

Save thousands of dollars and perform 25% faster

[Chart: ADH in water (134K atoms); 4 CPU nodes: 6.7 ns/day at $8,000 vs. 1 CPU node + 2x M2090 GPUs: 8.36 ns/day at $6,500]

Page 57:

Greener Science

Running GROMACS 4.6 with CUDA 4.1. The blue nodes contain 2x Intel X5550 CPUs (95W TDP, 4 cores per CPU). The green node contains 2x Intel X5550 CPUs (95W TDP, 4 cores per CPU) and 2x NVIDIA M2090 GPUs (225W TDP per GPU).

In simulating each nanosecond, the GPU-accelerated system uses 33% less energy.

Energy Expended = Power x Time. Lower is better.

[Chart: Energy expended (kilojoules consumed), ADH in water (134K atoms); 4 Nodes (760 watts) vs. 1 Node + 2x M2090 (640 watts)]

Page 58:

The Power of Kepler

Running GROMACS version 4.6 beta

The grey nodes contain 1 or 2 E5-2687W CPUs

(150W each, 8 Cores per CPU) and 1 or 2

NVIDIA M2090s.

The green nodes contain 1 or 2 E5-2687W

CPUs (8 Cores per CPU) and 1 or 2 NVIDIA

K20X GPUs (235W each).

Upgrading an M2090 to a K20X increases performance 10-45%

Ribonuclease

[Chart: Performance on RNase solvated protein (24K atoms), M2090 vs. K20X; configurations: 1 CPU + 1 GPU, 1 CPU + 2 GPUs, 2 CPUs + 1 GPU, 2 CPUs + 2 GPUs]

Page 59:

K20X – Fast

[Chart: Nanoseconds/day on RNase solvated protein (24K atoms); CPU Only vs. With 1 K20X, for 1 CPU and 2 CPUs]

Running GROMACS version 4.6 beta

The blue nodes contain 1 or 2 E5-2687W CPUs

(150W each, 8 Cores per CPU).

The green nodes contain 1 or 2 E5-2687W

CPUs (8 Cores per CPU) and 1 or 2 NVIDIA

K20X GPUs (235W each).

Adding a K20X increases performance by up to 3x

Ribonuclease

Page 60:

K20X, the Fastest Yet

Running GROMACS version 4.6-beta2 and

CUDA 5.0.35

The blue node contains 2 E5-2687W CPUs

(150W each, 8 Cores per CPU).

The green nodes contain 2 E5-2687W CPUs (8

Cores per CPU) and 1 or 2 NVIDIA K20X GPUs

(235W each).

Using K20X nodes increases performance by 2.5x

Water

[Chart: Nanoseconds/day, 192K water molecules; CPU vs. CPU + K20X vs. CPU + 2x K20X]

Page 61:

Recommended GPU Node Configuration for GROMACS Computational Chemistry

Workstation or Single Node Configuration

# of CPU sockets 2

Cores per CPU socket 6+

CPU speed (GHz) 2.66+

System memory per socket (GB) 32

GPUs Kepler K10, K20, K20X

Fermi M2090, M2075, C2075

# of GPUs per CPU socket: 1x. Kepler-based GPUs (K20X, K20, or K10) need a fast Sandy Bridge, the very fastest Westmeres, or high-end AMD Opterons.

GPU memory preference (GB) 6

GPU to CPU connection PCIe 2.0 or higher

Server storage 500 GB or higher

Network configuration Gemini, InfiniBand

Scale to multiple nodes with the same single-node configuration

Page 62:

CHARMM Release C37b1

Page 63:

GPUs Outperform CPUs

Running CHARMM release C37b1

The blue nodes contain 44 X5667 CPUs

(95W, 4 Cores per CPU, $1000).

The green nodes contain 2 X5667 CPUs and 1

or 2 NVIDIA C2070 GPUs (238W, $1000 each).

1 GPU = 15 CPUs

$1000 << $15,000

[Chart: Nanoseconds/day, Daresbury crambin (19.6K atoms); 44x X5667 ($44,000) vs. 2x X5667 + 1x C2070 ($3,000) vs. 2x X5667 + 2x C2070 ($4,000)]

Page 64:

More Bang for your Buck

Running CHARMM release C37b1

The blue nodes contain 44 X5667 CPUs

(95W, 4 Cores per CPU, $1000).

The green nodes contain 2 X5667 CPUs and 1

or 2 NVIDIA C2070 GPUs (238W, $1000 each).

Using GPUs delivers 10.6x the performance for the same cost

[Chart: Scaled performance/price, Daresbury crambin (19.6K atoms); 44x X5667 vs. 2x X5667 + 1x C2070 vs. 2x X5667 + 2x C2070]

Page 65:

Greener Science with NVIDIA

Running CHARMM release C37b1

The blue nodes contain 64 X5667 CPUs

(95W, 4 Cores per CPU, $1000).

The green nodes contain 2 X5667 CPUs and 1

or 2 NVIDIA C2070 GPUs (238W, $1000 each).

Using GPUs will decrease energy use by 75%

[Chart: Energy expended (kJ) in simulating 1 ns of Daresbury G1nBP (61.2K atoms); 64x X5667 vs. 2x X5667 + 1x C2070 vs. 2x X5667 + 2x C2070. Lower is better. Energy Expended = Power x Time]

Page 66:

www.acellera.com

470 ns/day on 1 GPU for L-Iduronic acid (1362 atoms)

116 ns/day on 1 GPU for DHFR (23K atoms)

M. Harvey, G. Giupponi and G. De Fabritiis, ACEMD: Accelerated molecular dynamics simulations in the microseconds timescale, J. Chem. Theory and Comput. 5, 1632 (2009)

Page 67:

www.acellera.com

NVT, NPT, PME, TCL, PLUMED, CAMSHIFT1

1 M. J. Harvey and G. De Fabritiis, An implementation of the smooth particle-mesh Ewald (PME) method on GPU hardware, J. Chem. Theory Comput., 5, 2371–2377 (2009)
2 For a list of selected references see http://www.acellera.com/acemd/publications

Page 69:

Application, Features Supported, GPU Perf, Release Status, Notes

Abinit: Local Hamiltonian, non-local Hamiltonian, LOBPCG algorithm, diagonalization/orthogonalization. 1.3-2.7x. Released since version 6.12; multi-GPU support. www.abinit.org

ACES III: Integrating scheduling GPU into SIAL programming language and SIP runtime environment. 10x on kernels. Under development; multi-GPU support. http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/deumens_ESaccel_2012.pdf

ADF: Fock matrix, Hessians. TBD. Pilot project completed, under development; multi-GPU support. www.scm.com

BigDFT: DFT; Daubechies wavelets; part of Abinit. 5-25x (1 CPU core to GPU kernel). Released June 2009, current release 1.6; multi-GPU support. http://inac.cea.fr/L_Sim/BigDFT/news.html, http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/BigDFT-Formalism.pdf and http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/BigDFT-HPC-tues.pdf

Casino: TBD. TBD. Under development, Spring 2013 release; multi-GPU support. http://www.tcm.phy.cam.ac.uk/~mdt26/casino.html

CP2K: DBCSR (sparse matrix multiply library). 2-7x. Under development; multi-GPU support. http://www.olcf.ornl.gov/wp-content/training/ascc_2012/friday/ACSS_2012_VandeVondele_s.pdf

GAMESS-US: Libqc with Rys quadrature algorithm, Hartree-Fock; MP2 and CCSD in Q4 2012. 1.3-1.6x, 2.3-2.9x HF. Released; multi-GPU support; next release Q4 2012. http://www.msg.ameslab.gov/gamess/index.html

Quantum Chemistry Applications

GPU perf compared against a multi-core x86 CPU socket. GPU perf benchmarked on GPU-supported features and may be a kernel-to-kernel perf comparison.

Page 70:

Application, Features Supported, GPU Perf, Release Status, Notes

GAMESS-UK: (ss|ss)-type integrals within calculations using Hartree-Fock ab initio methods and density functional theory; supports organics and inorganics. 8x. Released Summer 2012; multi-GPU support. http://www.ncbi.nlm.nih.gov/pubmed/21541963

Gaussian: Joint PGI, NVIDIA & Gaussian collaboration. TBD. Under development; multi-GPU support. Announced Aug. 29, 2011. http://www.gaussian.com/g_press/nvidia_press.htm

GPAW: Electrostatic Poisson equation, orthonormalizing of vectors, residual minimization method (RMM-DIIS). 8x. Released; multi-GPU support. https://wiki.fysik.dtu.dk/gpaw/devel/projects/gpu.html, Samuli Hakala (CSC Finland) & Chris O'Grady (SLAC)

Jaguar: Investigating GPU acceleration. TBD. Under development; multi-GPU support. Schrodinger, Inc. http://www.schrodinger.com/kb/278

LSMS: Generalized Wang-Landau method. 3x with 32 GPUs vs. 32 (16-core) CPUs. Under development; multi-GPU support. NICS Electronic Structure Determination Workshop 2012: http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/Eisenbach_OakRidge_February.pdf

MOLCAS: CUBLAS support. 1.1x. Released, version 7.8; single GPU, additional GPU support coming in version 8. www.molcas.org

MOLPRO: Density-fitted MP2 (DF-MP2), density-fitted local correlation methods (DF-RHF, DF-KS), DFT. 1.7-2.3x projected. Under development; multiple GPUs. www.molpro.net, Hans-Joachim Werner

Quantum Chemistry Applications

Page 71:

Application, Features Supported, GPU Perf, Release Status, Notes

MOPAC2009: Pseudodiagonalization, full diagonalization, and density matrix assembling. 3.8-14x. Under development; single GPU. Academic port. http://openmopac.net

NWChem: Triples part of Reg-CCSD(T); CCSD and EOMCCSD task schedulers. 3-10x projected. Release targeting end of 2012; multiple GPUs. Development GPGPU benchmarks: www.nwchem-sw.org and http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/Krishnamoorthy-ESCMA12.pdf

Octopus: DFT and TDDFT. TBD. Released. http://www.tddft.org/programs/octopus/

PEtot: Density functional theory (DFT) plane-wave pseudopotential calculations. 6-10x. Released; multi-GPU. First-principles materials code that computes the behavior of the electron structures of materials.

Q-CHEM: RI-MP2. 8x-14x. Released, version 4.0. http://www.q-chem.com/doc_for_web/qchem_manual_4.0.pdf

Quantum Chemistry Applications

Page 72:

Application, Features Supported, GPU Perf, Release Status, Notes

QMCPACK: Main features. 3-4x. Released; multiple GPUs. NCSA, University of Illinois at Urbana-Champaign. http://cms.mcc.uiuc.edu/qmcpack/index.php/GPU_version_of_QMCPACK

Quantum Espresso/PWscf: PWscf package: linear algebra (matrix multiply), explicit computational kernels, 3D FFTs. 2.5-3.5x. Released, version 5.0; multiple GPUs. Created by the Irish Centre for High-End Computing. http://www.quantum-espresso.org/index.php and http://www.quantum-espresso.org/

TeraChem: "Full GPU-based solution". 44-650x vs. GAMESS CPU version. Released, version 1.5; multi-GPU/single node. Completely redesigned to exploit GPU parallelism. YouTube: http://youtu.be/EJODzk6RFxE?hd=1 and http://www.olcf.ornl.gov/wp-content/training/electronic-structure-2012/Luehr-ESCMA.pdf

VASP: Hybrid Hartree-Fock DFT functionals including exact exchange. 2x; 2 GPUs comparable to 128 CPU cores. Available on request; multiple GPUs. By Carnegie Mellon University. http://arxiv.org/pdf/1111.0716.pdf

Quantum Chemistry Applications

Page 73:

BigDFT


CP2K

Page 78:

Kepler, it’s faster

[Chart: Performance relative to CPU Only; CPU Only, CPU + K10, CPU + K20, CPU + K20X, CPU + 2x K10, CPU + 2x K20, CPU + 2x K20X]

Running CP2K version 12413-trunk on CUDA

5.0.36

The blue node contains 2 E5-2687W CPUs

(150W, 8 Cores per CPU, $1500 each).

The green nodes contain 2 E5-2687W CPUs

and 1 or 2 NVIDIA K10, K20, or K20X GPUs

(235W each).

Using GPUs delivers up to 12.6x the performance per node

Page 79:

Strong Scaling

Speedups increase as more nodes are added, up to 3x at 768 nodes

Conducted on Cray XK6

Using matrix-matrix multiplication

NREP=6 and N=159,000 with 50% occupation

[Chart: Speedup relative to 256 non-GPU cores vs. number of cores used (256, 512, 768); XK6 with GPUs vs. XK6 without GPUs; speedups of 2.3x, 2.9x, and 3x]

Page 80:

Kepler, keeping the planet Green

Running CP2K version 12413-trunk on CUDA

5.0.36

The blue node contains 2 E5-2687W CPUs

(150W, 8 Cores per CPU, $1500 each).

The green nodes contain 2 E5-2687W CPUs

and 1 or 2 NVIDIA K20 GPUs (235W each).

Using K20s will lower energy use by over 75% for the same simulation

[Chart: Energy expended (kJ); CPU Only vs. CPU + K20 vs. CPU + 2x K20. Lower is better. Energy Expended = Power x Time]

Page 81:

GAUSSIAN

Page 82:

NVIDIA Confidential

Gaussian

Key quantum chemistry code; ACS Fall 2011 press release.

Joint collaboration between Gaussian, NVIDIA, and PGI for GPU acceleration: http://www.gaussian.com/g_press/nvidia_press.htm. No such release exists for Intel MIC or AMD GPUs.

Mike Frisch quote: “Calculations using Gaussian are limited primarily by the available computing

resources,” said Dr. Michael Frisch, president of Gaussian, Inc. “By coordinating the

development of hardware, compiler technology and application software among the

three companies, the new application will bring the speed and cost-effectiveness of

GPUs to the challenging problems and applications that Gaussian’s customers need to

address.”

Page 83:

GAMESS

Page 84:


We like to push the envelope as much as we can in the direction of highly scalable efficient codes. GPU technology seems like a good way to achieve this goal. Also, since we are associated with a DOE Laboratory, energy efficiency is important, and this is another reason to explore quantum chemistry on GPUs.

Prof. Mark Gordon Distinguished Professor, Department of Chemistry, Iowa State University and

Director, Applied Mathematical Sciences Program, AMES Laboratory

GAMESS Partnership Overview Mark Gordon and Andrey Asadchev, key developers of GAMESS,

in collaboration with NVIDIA. Mark Gordon is a recipient of a

NVIDIA Professor Partnership Award.

Quantum chemistry is one of the major consumers of CPU cycles at national supercomputer centers.

NVIDIA developer resources fully allocated to GAMESS code

Page 85:

GAMESS August 2011 GPU Performance

First GPU supported GAMESS release via "libqc", a library for fast quantum

chemistry on multiple NVIDIA GPUs in multiple nodes, with CUDA software

2e- AO integrals and their assembly into a closed shell Fock matrix

[Chart: Relative performance for two small molecules, ginkgolide (53 atoms) and vancomycin (176 atoms); 4x E5640 CPUs vs. 4x E5640 CPUs + 4x Tesla C2070s; GAMESS Aug. 2011 release]

Page 86:


Upcoming GAMESS Q4 2012 Release

Multi-node with multi-GPU supported.

Rys quadrature Hartree-Fock on 8 CPU cores: 8 CPU cores + M2070 yields a 2.3-2.9x speedup. See the 2012 publication.

Møller–Plesset perturbation theory (MP2): preliminary code completed; paper in development.

Coupled cluster SD(T): CCSD code completed, (T) in progress.

Page 87:


GAMESS - New Multithreaded Hybrid CPU/GPU Approach to H-F

[Chart: Hartree-Fock GPU speedups*; Taxol 6-31G: 2.3, 6-31G(d): 2.5, 6-31G(2d,2p): 2.5, 6-31++G(d,p): 2.3; Valinomycin 6-31G: 2.4, 6-31G(d): 2.3, 6-31G(2d,2p): 2.9]

Adding 1x C2070 GPU speeds up computations by 2.3x to 2.9x.

* A. Asadchev, M.S. Gordon, "New Multithreaded Hybrid CPU/GPU Approach to Hartree-Fock," Journal of Chemical Theory and Computation (2012)

Page 88:

GPAW

Page 89:

Used with permission from Samuli Hakala


Page 98:

Quantum Espresso/PWscf

Page 99:

Kepler, fast science

Running Quantum Espresso version 5.0-build7

on CUDA 5.0.36

The blue node contains 2 E5-2687W CPUs

(150W, 8 Cores per CPU, $1500 each).

The green nodes contain 2 E5-2687W CPUs

and 1 or 2 NVIDIA M2090 or K10 GPUs (225W

and 235W respectively).

Using K10s delivers up to 11.7x the performance per node over CPUs And 1.7x the performance when compared to M2090s

[Chart: Performance relative to CPU Only on AUsurf; CPU Only, CPU + M2090, CPU + K10, CPU + 2x M2090, CPU + 2x K10]

Page 100:

Extreme Performance/Price from 1 GPU

Adding a GPU can improve performance by 3.7x while only increasing price by 25%

Simulations run on FERMI @ ICHEC. A 6-core 2.66 GHz Intel X5650 was used for the CPU; an NVIDIA C2050 was used for the GPU.

[Chart: Price and performance scaled to CPU Only; CPU+GPU vs. CPU Only for Shilu-3 and Water-on-Calcite. Calcite structure pictured.]

Page 101:

Extreme Performance/Price from 1 GPU

Adding a GPU can improve performance by 3.5x while only increasing price by 25%

Simulations run on FERMI @ ICHEC. A 6-core 2.66 GHz Intel X5650 was used for the CPU; an NVIDIA C2050 was used for the GPU.

[Chart: Price and performance scaled to the CPU-only system; CPU+GPU vs. CPU Only for AUSURF112 (k-point) and AUSURF112 (gamma-point). Calculation done for a gold surface of 112 atoms.]

Page 102:

Replace 72 CPUs with 8 GPUs

[Chart: Elapsed time (minutes), LSMO-BFO (120 atoms), 8 k-points; 120 CPUs ($42,000): 223 minutes vs. 48 CPUs + 8 GPUs ($32,800): 219 minutes]

Simulations run on PLX @ CINECA. Intel 6-core 2.66 GHz X5550s were used for the CPUs; NVIDIA M2070s were used for the GPUs.

The GPU Accelerated setup performs faster and costs 24% less

Page 103:

QE/PWscf - Green Science

Simulations run on PLX @ CINECA.

Intel 6-Core 2.66 GHz X5550 were

used for the CPUs

NVIDIA M2070s were used for the

GPUs

Over a year, the lower power consumption would save $4300 on energy bills

[Chart: Power consumption (watts), LSMO-BFO (120 atoms), 8 k-points; 120 CPUs ($42,000) vs. 48 CPUs + 8 GPUs ($32,800). Lower is better.]
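The "$4300 on energy bills" figure above follows from straightforward power-times-time arithmetic. This is a hedged sketch, not the deck's own calculation: the 5 kW power gap and the $0.10/kWh electricity rate below are assumed placeholders chosen only to show the kind of numbers that produce a saving of that order.

```python
# Sketch: annual energy cost from continuous power draw.
# The wattage gap and $/kWh rate are assumptions, not slide data.

HOURS_PER_YEAR = 24 * 365  # 8760 h

def annual_energy_kwh(power_watts: float) -> float:
    """kWh consumed in a year of continuous operation."""
    return power_watts * HOURS_PER_YEAR / 1000.0

def annual_cost_usd(power_watts: float, usd_per_kwh: float = 0.10) -> float:
    """Yearly electricity cost at an assumed $/kWh rate."""
    return annual_energy_kwh(power_watts) * usd_per_kwh

# A configuration drawing 5 kW less than another saves
# 5 kW * 8760 h = 43,800 kWh/year; at $0.10/kWh that is $4,380,
# the same order as the slide's "$4300 on energy bills" claim.
print(round(annual_cost_usd(5000.0)))  # 4380
```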

Page 104:

NVIDIA GPUs Use Less Energy

[Chart: Energy consumption (kWh) on different tests, CPU+GPU vs. CPU Only; Shilu-3, AUSURF112, and Water-on-Calcite show reductions of 54-58%. Lower is better.]

Simulations run on FERMI @ ICHEC. A 6-core 2.66 GHz Intel X5650 was used for the CPU; an NVIDIA C2050 was used for the GPU.

In all tests, the GPU-accelerated system consumed less than half the energy of the CPU-only system.

Page 105:

QE/PWscf - Great Strong Scaling in Parallel

[Chart: Walltime (s) of one full SCF vs. nodes (2-14; 16-112 total CPU cores), CdSe-159; CPU vs. CPU+GPU; speedups of 2.1x-2.5x. Lower is better.]

Simulations run on STONEY @ ICHEC. Two quad-core 2.87 GHz Intel X5560s were used in each node; two NVIDIA M2090s were added per node for the CPU+GPU test.

Speedups up to 2.5x with GPU acceleration (CdSe-159 cadmium selenide nanodots).

Page 106:

QE/PWscf - More Powerful Strong Scaling

[Chart: Walltime (s) of a full SCF vs. nodes (4-44; 48-528 total CPU cores), GeSnTe134; CPU vs. CPU+GPU; speedups of 1.6x-2.4x. Lower is better.]

Simulations run on PLX @ CINECA. Two 6-core 2.4 GHz Intel E5645s were used in each node; two NVIDIA M2070s were added per node for the CPU+GPU test.

Accelerate your cluster by up to 2.4x with NVIDIA GPUs.

Page 107:

TeraChem

Page 108:

TeraChem Supercomputer Speeds on GPUs

[Chart: Time (seconds) for an SCF step on the giant fullerene C240 molecule; NWChem on 4096 quad-core CPUs ($19,000,000) in the Chinook supercomputer vs. TeraChem on 8 C2050s ($31,000) in one node]

Similar performance from just a handful of GPUs

Page 109:

TeraChem Bang for the Buck

[Chart: Performance/price relative to the supercomputer; 4096 quad-core CPUs ($19,000,000): 1 vs. 8 C2050s ($31,000): 493]

Dollars spent on GPUs do roughly 500x more science than those spent on CPUs.

TeraChem running on 8 C2050s in 1 node; NWChem running on 4096 quad-core CPUs in the Chinook supercomputer. Giant fullerene C240 molecule.
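A performance/price ratio like the one charted above can be computed by treating performance as the reciprocal of runtime for a fixed job and normalizing against a baseline. This is a hedged sketch of that bookkeeping; the runtimes in the example are hypothetical placeholders, not the slide's raw data.

```python
# Sketch: performance/price normalized to a baseline system.
# Runtimes below are hypothetical; prices echo the slide's labels.

def perf_per_dollar(runtime_s: float, price_usd: float) -> float:
    """Work per second per dollar for a fixed job (performance = 1/time)."""
    return (1.0 / runtime_s) / price_usd

def relative_perf_price(runtime_s: float, price_usd: float,
                        base_runtime_s: float, base_price_usd: float) -> float:
    """Performance/price relative to a baseline (baseline = 1.0)."""
    return perf_per_dollar(runtime_s, price_usd) / perf_per_dollar(
        base_runtime_s, base_price_usd)

# With equal (hypothetical) runtimes, the ratio reduces to the price ratio:
# $19,000,000 / $31,000 is roughly 613, the same order as the slide's 493.
print(relative_perf_price(100.0, 31_000.0, 100.0, 19_000_000.0))
```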

Page 110:

Kepler’s Even Better

Kepler performs 2x faster than Fermi

TeraChem running on C2050 and K20C

First graph: Olestra, BLYP/6-31G(d), 453 atoms. Second graph: B3LYP/6-31G(d).

[Charts: Time (seconds), C2050 vs. K20C]

Page 111:


Benefits of GPU Accelerated Computing

Faster than CPU only systems in all tests

Large performance boost with marginal price increase

Energy usage cut by more than half

GPUs scale well within a node and over multiple nodes

K20 GPU is our fastest and lowest power high performance GPU yet

Try GPU accelerated AMBER for free – www.nvidia.com/GPUTestDrive

Page 112:

GPU Test Drive: Experience GPU Acceleration

Remotely hosted GPU servers for computational chemistry researchers and biophysicists.

Preconfigured with molecular dynamics apps.

Free & easy: sign up, log in, and see results.

www.nvidia.com/gputestdrive