Parallelization of the FV3 dycore for GPU and MIC processors
Mark Govett, Jim Rosinski, Jacques Middlecoff, Yonggang Yu, Daniel Fiorino, Lynd Stringer


Page 1:

Parallelization of the FV3 dycore for GPU and MIC processors

Mark Govett, Jim Rosinski, Jacques Middlecoff, Yonggang Yu, Daniel Fiorino, Lynd Stringer

Page 2:

Outline

• Transition from NIM to FV3

• Performance Portability

– NIM performance & scaling

– FV3 parallelization

Page 3:

NIM Achievements

• Designed for fine-grain parallelism
  – Icosahedral uniform grid
  – Lookup table to access neighbor cells

• Performance portable with a single source code
  – OpenACC for GPU
  – OpenMP for CPU, MIC
  – F2C-ACC compiler helped improve OpenACC compilers

• Enabled fair comparison between CPU, GPU & MIC
  – Single source code
  – Bitwise-exact results between CPU, GPU & MIC
  – Optimized for all architectures
  – Same-generation hardware, standard chips

• Benchmark code for NOAA fine-grain procurement

Page 4:

Device Performance

Chart: NIM dynamics runtime (sec) at 240 KM resolution for CPU, GPU, and MIC across device generations (2010/11, 2012, 2013, 2014, 2016).

Page 5:

Cray CS-Storm Node
• Up to 8 GPUs / node
• QPI between CPU sockets
• Single InfiniBand link
• GPUdirect: P2P or RDMA

QDR: 40 Gb/sec, FDR: 56 Gb/sec, EDR: 100 Gb/sec

Page 6:

Weak Scaling, CS Storm

• NIM 30 KM resolution

• 40 Pascal P100s, 2 – 8 GPUs / node

• GPUdirect = false

Chart: runtime (sec) per node configuration, split into computation and communication (40 GPUs total):

GPUs / node:           2     4     5     8
Number of nodes:      20    10     8     5
Computation (sec):   7.5   7.5   7.6   7.5
Communication (sec): 4.9   4.9   6.1   7.9

Page 7:

Strong Scaling, CS Storm
• 5 – 40 CPU nodes
• 2 Pascal GPUs per node
• GPUdirect = false

Chart: Strong scaling, NIM 30 KM resolution, NVIDIA Pascal; runtime (sec) split into compute and communications:

Number of GPUs:         10     20     40     80
Compute (sec):        28.7   14.6    7.5    3.9
Communications (sec):  9.5    6.8    4.9    3.6

Page 8:

Spiral Grid Optimization: NIM
• Eliminate MPI message packing / unpacking by ordering grid points (a minimal sketch follows the figure below)
• Gave a 35% speedup in dynamics runtime using 16 MPI tasks / GPU (Middlecoff, 2015)

Figure: Data storage layout for MPI Task 5: interior points are stored first, followed by the halo points received from neighboring tasks (Tasks 3, 4, 6, 8), so MPI sends and receives operate on contiguous memory.
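Below is a minimal Fortran + MPI sketch of this layout idea; all names (state, nbrs, halo_start, halo_count) are hypothetical, not taken from NIM. Because each neighbor's halo points occupy one contiguous slice at the end of the task's array, MPI_Irecv can deliver data directly into place with no unpack step, and the spiral ordering keeps outgoing points contiguous so sends need no packing either.

! Sketch only: array and variable names are illustrative, not the NIM source.
subroutine exchange_halo(state, nbrs, halo_start, halo_count, comm)
  use mpi
  implicit none
  real,    intent(inout) :: state(*)       ! interior points first, then one halo slice per neighbor
  integer, intent(in)    :: nbrs(:)        ! ranks of neighboring MPI tasks
  integer, intent(in)    :: halo_start(:)  ! start index of each neighbor's halo slice in state
  integer, intent(in)    :: halo_count(:)  ! length of each neighbor's halo slice
  integer, intent(in)    :: comm
  integer :: n, ierr
  integer :: req(size(nbrs))

  ! Receive each neighbor's halo directly into its contiguous slice: no unpacking.
  do n = 1, size(nbrs)
     call MPI_Irecv(state(halo_start(n)), halo_count(n), MPI_REAL, &
                    nbrs(n), 0, comm, req(n), ierr)
  end do

  ! Matching sends go here; with the spiral ordering the points destined for each
  ! neighbor are also contiguous, so they can be sent straight from state.

  call MPI_Waitall(size(nbrs), req, MPI_STATUSES_IGNORE, ierr)
end subroutine exchange_halo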

Page 9:

From NIM to FV3

Cube Sphere (FV3)

• 6 faces, 1 MPI rank per face

• Edge and corner points

• Direct access: i-j-k ordering

Icosahedral (NIM)

• Uniform Grid

• No special grid points

• Indirect access: k–i ordering (contrasted in the sketch below)

Graphic courtesy of Peter Lauritzen (NCAR)
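As a rough illustration of the two access patterns (all names below are hypothetical, not taken from FV3 or NIM): a stencil on a cube-sphere face indexes its neighbors directly in i-j-k order, while the icosahedral grid reaches its 5 or 6 neighbors through a lookup table in k-i order.

! Sketch only; q, lap, qi, lapi, nbr_lookup, nnbr, nip, npz are illustrative names.

! FV3-style direct access on one cube-sphere face (i-j-k ordering).
subroutine laplacian_direct(q, lap, is, ie, js, je, npz)
  implicit none
  integer, intent(in)  :: is, ie, js, je, npz
  real,    intent(in)  :: q  (is-1:ie+1, js-1:je+1, npz)
  real,    intent(out) :: lap(is:ie, js:je, npz)
  integer :: i, j, k
  do k = 1, npz
    do j = js, je
      do i = is, ie
        lap(i,j,k) = q(i-1,j,k) + q(i+1,j,k) + q(i,j-1,k) + q(i,j+1,k) - 4.0*q(i,j,k)
      end do
    end do
  end do
end subroutine laplacian_direct

! NIM-style indirect access on the icosahedral grid (k-i ordering):
! nbr_lookup(m,ipn) returns the m-th neighbor of column ipn (5 or 6 neighbors).
subroutine laplacian_indirect(qi, lapi, nbr_lookup, nnbr, nip, npz)
  implicit none
  integer, intent(in)  :: nip, npz
  integer, intent(in)  :: nnbr(nip), nbr_lookup(6, nip)
  real,    intent(in)  :: qi  (npz, nip)
  real,    intent(out) :: lapi(npz, nip)
  integer :: ipn, k, m
  do ipn = 1, nip
    do k = 1, npz
      lapi(k,ipn) = -real(nnbr(ipn)) * qi(k,ipn)
    end do
    do m = 1, nnbr(ipn)
      do k = 1, npz
        lapi(k,ipn) = lapi(k,ipn) + qi(k, nbr_lookup(m,ipn))
      end do
    end do
  end do
end subroutine laplacian_indirect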

Page 10:

Fine-grain Parallelization of FV3
• Early work by NVIDIA demonstrated poor performance with the original code
• Goal is to adapt FV3 to run on GPU and MIC
  – Expose sufficient parallelism

– Minimize code changes

– Maintain single source code

• Achieve performance portability
  – OpenMP for CPU, MIC

– OpenACC for GPU

• Bitwise exact results between CPU, GPU, MIC

Page 11:

GPU Parallelization

• C_SW (shallow water)

– Push the “k” loop into routines to expose more parallelism for the GPU (see the sketch at the end of this slide)

• Little benefit for CPU or MIC

– Two test cases built

• I – J – K ordering

• K – I – J ordering

– Optimizations for CPU, GPU, MIC

• Evaluation of results

• Performance benefit versus impact to code
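A minimal sketch of this restructuring, with hypothetical routine and variable names (compute_fluxes_2d/3d, q, flux_x are not the FV3 originals): the k loop that used to wrap the call site moves inside the routine, the work arrays gain a k dimension, and the new k loop becomes the OpenACC gang loop while i-j collapse onto vector lanes.

! Sketch only; names are illustrative.
! Before: the k loop sits outside and the routine works on one 2-D level per call.
!   do k = 1, npz
!     call compute_fluxes_2d(q(:,:,k), flux_x(:,:,k), is, ie, js, je)
!   end do

! After: the k loop is pushed inside and exposed to OpenACC as the gang loop.
subroutine compute_fluxes_3d(q, flux_x, is, ie, js, je, npz)
  implicit none
  integer, intent(in)  :: is, ie, js, je, npz
  real,    intent(in)  :: q     (is-1:ie+1, js:je, npz)
  real,    intent(out) :: flux_x(is:ie+1,   js:je, npz)
  integer :: i, j, k
!$acc parallel loop gang present(q, flux_x)
  do k = 1, npz
!$acc loop collapse(2) vector
    do j = js, je
      do i = is, ie+1
        flux_x(i,j,k) = 0.5 * (q(i-1,j,k) + q(i,j,k))
      end do
    end do
  end do
end subroutine compute_fluxes_3d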

Page 12:

C_SW Performance
• 2013 IvyBridge, 2013 Kepler K40, 2016 KNL

• Execution time for a single call to C_SW

Chart: run time (ms) of a single C_SW call, broken into c_sw-partial, divergence_corner, and d2a2c_vect components, for CPU baseline, CPU i-j-k, GPU i-j-k, KNL i-j-k, CPU k-i-j, and GPU k-i-j.

C_SW call tree: divergence_corner, d2a2c_vect, c_sw_partial

Page 13:

C_SW Conclusions - Part 1

• K-I-J variant involves lots of changes with little performance benefit

– c_sw_partial, divergence k-i-j gave 1.5X benefit over i-j-k, warrants further investigation

• I-J-K variant resulted in a small improvement in CPU performance

– Few changes to the code

• Promote 2D arrays to 3D

• Add K loop

• Minor changes to OpenMP directives (see the sketch below)
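A compact sketch of what those three changes can look like on the CPU/MIC side (the routine, array names, and stencil below are illustrative, not the FV3 c_sw code): the 2-D temporary gains a k dimension and the OpenMP directive moves from the per-level j loop to the new outer k loop.

! Sketch only; vort, u, v and the loop bounds are illustrative names.
subroutine compute_vort_3d(u, v, vort, isd, ied, jsd, jed, npz)
  implicit none
  integer, intent(in)  :: isd, ied, jsd, jed, npz
  real,    intent(in)  :: u(isd:ied, jsd:jed+1, npz), v(isd:ied+1, jsd:jed, npz)
  real,    intent(out) :: vort(isd:ied, jsd:jed, npz)   ! was 2-D; promoted to 3-D
  integer :: i, j, k
  ! OpenMP threading now runs over the added k loop instead of the per-level j loop.
!$OMP parallel do private(i, j, k)
  do k = 1, npz
    do j = jsd, jed
      do i = isd, ied
        vort(i,j,k) = u(i,j,k) - u(i,j+1,k) + v(i+1,j,k) - v(i,j,k)
      end do
    end do
  end do
end subroutine compute_vort_3d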

Page 14:

FV3 dynamics

• dyn_core (100%)
  – c_sw (13%)
    • d2a2_vect
    • divergence_corner
  – update_dz_c (2%)
  – riem_solver_c (14%)
  – d_sw (38%)
    • FV_TP_2D (37%)
      – copy_corners (0.1%)
      – xppm0 (14%)
      – yppm0 (14%)
    • xtp_v
    • xtp_u
  – update_dz_d (10%)
    • FV_TP_2D (37%)
  – riem_solver3 (1%)
  – pg_d (5%)
    • nh_p_grad (5%)
  – tracer_2d (6%)
  – remapping (6%)

Notes
• Model configured as non-hydrostatic, non-nested, with 10 tracers

• Runtime percentages are for Haswell CPU

• Percentages represent aggregate values

• Remapping is done once every 10 timesteps

• Current efforts are shaded

Page 15:

module m1   ! Poor performance: not enough shared memory for the private arrays
  integer, parameter :: isd = -2, ied = 195, jsd = -2, jed = 195
  integer, parameter :: is  =  1, ie  = 192, js  =  1, je  = 192
  integer, parameter :: npz = 128
contains
  subroutine s1(delpc, delp, pt, w, rarea)
    ! global arrays
    real, intent(inout) :: delpc(isd:ied, jsd:jed, npz)
    real, intent(in)    :: delp (isd:ied, jsd:jed, npz), pt(isd:ied, jsd:jed, npz)
    real, intent(in)    :: w    (isd:ied, jsd:jed, npz), rarea(isd:ied, jsd:jed)
    ! local arrays, private to each gang
    real, dimension(is-1:ie+1, js-1:je+2) :: fy, fy1, fy2
    real, dimension(is-1:ie+2, js-1:je+1) :: fx, fx1, fx2
    integer :: i, j, k

!$acc kernels
!$acc loop private(fx, fx1, fx2, fy, fy1, fy2)
    do k = 1, npz                     ! gang loop
!$acc loop collapse(2)
      do j = js-1, je+2               ! vector loop
        do i = is-1, ie+1             ! vector loop
          fy1(i,j) = delp(i,j-1,k);  fy(i,j) = pt(i,j-1,k);  fy2(i,j) = w(i,j-1,k)
        enddo
      enddo

      ! additional calculations with fx, fx1, fx2, and handling of corner points
      ! dependencies on fx1, fy1, etc. require synchronization here
!$acc loop collapse(2)
      do j = js-1, je+1
        do i = is-1, ie+1
          delpc(i,j,k) = delp(i,j,k) + (fx1(i,j) - fx1(i+1,j) &
                                      + fy1(i,j) - fy1(i,j+1)) * rarea(i,j)
        enddo
      enddo
    enddo
!$acc end kernels
  end subroutine s1
end module m1

Page 16:

Tiling / Cache Blocking
• Increase utilization of GPU shared / cache memory
  – 48 / 16 KB per multiprocessor
• Increased complexity of code
  – Add chunk loops, indexing, etc. (see the loop fragment and sketch below)
• Gave a 3X performance boost for a simple test case
  – Testing in c_sw

! Before: plain loop nest
do j = 1, 192              ! worker loop
  do i = 1, 192            ! vector loop

! After: chunked (tiled) loop nest
do jc = 1, 192, jchunk     ! chunk loop
  do ic = 1, 192, ichunk   ! chunk loop
    do jx = 1, jchunk      ! tile loop
      do ix = 1, ichunk    ! tile loop
        i = ic + ix - 1;  j = jc + jx - 1
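For context, here is a self-contained sketch of the cache-blocking pattern applied to a simple smoothing loop; the names (smooth_tiled, q, qs, ichunk, jchunk) and the particular OpenACC gang/vector mapping are illustrative assumptions, not the c_sw implementation.

! Sketch only; names and directives are illustrative.
subroutine smooth_tiled(q, qs, nx, ny, ichunk, jchunk)
  implicit none
  integer, intent(in)  :: nx, ny, ichunk, jchunk
  real,    intent(in)  :: q (0:nx+1, 0:ny+1)
  real,    intent(out) :: qs(1:nx, 1:ny)
  integer :: i, j, ic, jc, ix, jx

!$acc parallel loop collapse(2) gang present(q, qs)
  do jc = 1, ny, jchunk               ! chunk loops: one tile per gang
    do ic = 1, nx, ichunk
!$acc loop collapse(2) vector private(i, j)
      do jx = 1, jchunk               ! tile loops: threads work within a tile
        do ix = 1, ichunk             ! small enough to fit in shared / cache memory
          i = ic + ix - 1
          j = jc + jx - 1
          if (i <= nx .and. j <= ny) then      ! guard for partial tiles
            qs(i,j) = 0.25 * (q(i-1,j) + q(i+1,j) + q(i,j-1) + q(i,j+1))
          end if
        end do
      end do
    end do
  end do
end subroutine smooth_tiled

OpenACC 2.0 also provides a loop tile clause (e.g. !$acc loop tile(32,4)) that can express the same blocking more compactly, though explicit chunk loops give finer control over indexing.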

Page 17:

Conclusion
• NIM work has ended
  – Using NIM for performance & scaling
    • Testing on KNL, Pascal chips
  – Apply knowledge toward FV3
    • Serial and parallel performance, portability
• FV3 parallelization is going fairly well
  – Goal is single source code and performance portability on CPU, GPU & MIC
  – Modifying code to improve performance
    • Push “k” loop into subroutines
    • OpenACC parallelization using the PGI compiler
    • Exploring optimizations, including tiling