
Page 1

Introduction to Parallel Programming for

Multicore/Manycore Clusters

Introduction

Kengo Nakajima
Information Technology Center

The University of Tokyo

http://nkl.cc.u-tokyo.ac.jp/18s/

Page 2

Descriptions of the Class
• Technical & Scientific Computing I (4820-1027)
  – 科学技術計算Ⅰ (Scientific Computing I)
  – Department of Mathematical Informatics
• Special Lecture on Computational Science I (4810-1215)
  – 計算科学アライアンス特別講義Ⅰ (Computational Science Alliance Special Lecture I)
  – Department of Computer Science
• Multithreaded Parallel Computing (3747-110) (from 2016)
  – スレッド並列コンピューティング (Multithreaded Parallel Computing)
  – Department of Electrical Engineering & Information Systems
• This class is certified as a category "F" lecture of "the Computational Alliance, the University of Tokyo"

Page 3

• 2009-2014
  – Introduction to FEM Programming
    • FEM: Finite-Element Method (有限要素法)
  – Summer (I): FEM Programming for Solid Mechanics
  – Winter (II): Parallel FEM using MPI
    • The 1st part (summer) is essential for the 2nd part (winter)
• Problems
  – Many new (international) students in Winter, who did not take the 1st part in Summer
  – They are generally more diligent than Japanese students
• 2015
  – Summer (I): Multicore programming by OpenMP
  – Winter (II): FEM + Parallel FEM by MPI/OpenMP for Heat Transfer
    • Parts I & II are independent (maybe...)
• 2017
  – Lectures are given in English
  – Reedbush-U Supercomputer System

Page 4

Motivation for Parallel Computing (and this class)
• Large-scale parallel computers enable fast computing in large-scale scientific simulations with detailed models. Computational science develops new frontiers of science and engineering.
• Why parallel computing?
  – faster & larger
  – "larger" is more important from the viewpoint of "new frontiers of science & engineering", but "faster" is also important
  – + more complicated
  – Ideal: Scalable
    • Solving an Nx-scale problem using Nx computational resources in the same computation time (weak scaling)
    • Solving a fixed-size problem using Nx computational resources in 1/N of the computation time (strong scaling)
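The two notions can be written compactly; the symbols T, P and n below are introduced here for clarity and are not part of the original slide. If T(P, n) is the time to solve a problem of size n on P cores, then

\[
S_{\text{strong}}(P) = \frac{T(1, n)}{T(P, n)} \quad (\text{ideally } P), \qquad
E_{\text{weak}}(P) = \frac{T(1, n)}{T(P, P\,n)} \quad (\text{ideally } 1).
\]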

Page 5

Scientific Computing = SMASH
• You have to learn many things.
• Collaboration (or co-design) will be important for the future career of each of you, as a scientist and/or an engineer.
  – You have to communicate with people with different backgrounds.
  – It is more difficult than communicating with foreign scientists from the same area.
• (Q): Computer Science, Computational Science, or Numerical Algorithms?

[Figure: SMASH stack - Science / Modeling / Algorithm / Software / Hardware]

Page 6

This Class ...
• Target: Parallel FVM (Finite-Volume Method) using OpenMP
• Science: 3D Poisson equation (written out below)
• Modeling: FVM
• Algorithm: iterative solvers etc.
• You have to know many components to learn FVM, although you have already learned each of these in undergraduate and high-school classes.
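For reference, the target equation named above, written out here (the slide only names it): the 3D Poisson equation for an unknown field phi with a source term f,

\[
\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2} = f ,
\]

which the finite-volume method turns into one algebraic balance equation per mesh cell, producing a large sparse linear system that the iterative solvers above are applied to.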

[Figure: SMASH stack - Science / Modeling / Algorithm / Software / Hardware]

Page 7

Road to Programming for "Parallel" Scientific Computing

1. Unix, Fortran, C etc.
2. Programming for Fundamental Numerical Analysis (e.g. Gauss-Seidel, RK etc.)
3. Programming for Real-World Scientific Computing (e.g. FEM, FDM)
4. Programming for Parallel Scientific Computing (e.g. Parallel FEM/FDM)

Big gap here !!

Page 8

The third step is important!
• How to parallelize applications?
  – How to extract parallelism?
  – If you understand methods, algorithms, and implementations of the original code, it's easy.
  – "Data structure" is important.
• How to understand the code?
  – Read the application code!
  – It seems primitive, but it is very effective.
  – In this class, "reading the source code" is encouraged.
  – 3: FVM, 4: Parallel FVM

1. Unix, Fortran, C etc.
2. Programming for Fundamental Numerical Analysis (e.g. Gauss-Seidel, RK etc.)
3. Programming for Real-World Scientific Computing (e.g. FEM, FDM)
4. Programming for Parallel Scientific Computing (e.g. Parallel FEM/FDM)

Page 9

Kengo Nakajima 中島研吾 (1/2)
• Current Position
  – Professor, Supercomputing Research Division, Information Technology Center, The University of Tokyo (情報基盤センター)
    • Department of Mathematical Informatics, Graduate School of Information Science & Engineering, The University of Tokyo (情報理工・数理情報学)
    • Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo (工・電気系工学)
  – Deputy Director, RIKEN CCS (Center for Computational Science) (Kobe) (20%)
• Research Interests
  – High-Performance Computing
  – Parallel Numerical Linear Algebra (Preconditioning)
  – Parallel Programming Models
  – Computational Mechanics, Computational Fluid Dynamics
  – Adaptive Mesh Refinement, Parallel Visualization

Page 10

Kengo Nakajima (2/2)
• Education
  – B.Eng. (Aeronautics, The University of Tokyo, 1985)
  – M.S. (Aerospace Engineering, University of Texas, 1993)
  – Ph.D. (Quantum Engineering & System Sciences, The University of Tokyo, 2003)
• Professional
  – Mitsubishi Research Institute, Inc. (1985-1999)
  – Research Organization for Information Science & Technology (1999-2004)
  – The University of Tokyo
    • Department of Earth & Planetary Science (2004-2008)
    • Information Technology Center (2008-)
  – JAMSTEC (2008-2011), part-time
  – RIKEN (2009-2016), part-time

Page 11

• Supercomputers and Computational Science
• Overview of the Class
• Future Issues

Page 12

Computer & CPU
• Central Processing Unit (中央処理装置): CPU
• CPUs used in PCs and supercomputers are based on the same architecture
• GHz: Clock Rate
  – Frequency: number of operations by the CPU per second
    • GHz -> 10^9 operations/sec
  – 4-8 simultaneous instructions per clock

Page 13

Multicore CPU

[Figure: single core (1 core/CPU), dual core (2 cores/CPU), quad core (4 cores/CPU)]

• GPU: Manycore
  – O(10^1)-O(10^2) cores
• More and more cores
  – Parallel computing
• Core (コア) = central part of a CPU
• Multicore CPUs with 4-8 cores are popular
  – Low power
• Reedbush-U: 18 cores x 2
  – Intel Xeon Broadwell-EP

Page 14

GPU/Manycores
• GPU: Graphic Processing Unit
  – GPGPU: General-Purpose GPU
  – O(10^2) cores
  – High memory bandwidth
  – Cheap
  – NO stand-alone operation
    • Host CPU needed
  – Programming: CUDA, OpenACC
• Intel Xeon Phi: Manycore CPU
  – 60+ cores
  – High memory bandwidth
  – Unix, Fortran, C compilers
  – Host CPU needed in the 1st generation
    • Stand-alone operation is possible now (Knights Landing, KNL)

Page 15

Parallel Supercomputers
Multicore CPUs are connected through a network

[Figure: multiple multicore CPUs (4 cores each) connected through a network]

Page 16

Supercomputers with Heterogeneous/Hybrid Nodes

[Figure: nodes combining a multicore CPU (4 cores) with a GPU/manycore accelerator (many small cores), connected through a network]

Page 17

Performance of Supercomputers
• Performance of CPU: Clock Rate
• FLOPS (Floating-Point Operations per Second)
  – Real numbers
• Recent multicore CPU
  – 4-8 FLOP operations per clock
  – (e.g.) Peak performance of a core at 3 GHz
    • 3×10^9 × 4 (or 8) = 12 (or 24) × 10^9 FLOPS = 12 (or 24) GFLOPS
• 10^6 FLOPS = 1 Mega FLOPS = 1 MFLOPS
• 10^9 FLOPS = 1 Giga FLOPS = 1 GFLOPS
• 10^12 FLOPS = 1 Tera FLOPS = 1 TFLOPS
• 10^15 FLOPS = 1 Peta FLOPS = 1 PFLOPS
• 10^18 FLOPS = 1 Exa FLOPS = 1 EFLOPS

Page 18

Peak Performance of Reedbush-U
Intel Xeon E5-2695 v4 (Broadwell-EP)
• 2.1 GHz
  – 16 DP (double-precision) FLOP operations per clock
• Peak performance (1 core)
  – 2.1 × 16 = 33.6 GFLOPS
• Peak performance
  – 1 socket, 18 cores: 604.8 GFLOPS
  – 2 sockets (1 node), 36 cores: 1,209.6 GFLOPS
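Restating the slide's arithmetic as one chain (no new numbers are introduced):

\[
2.1\,\text{GHz} \times 16\,\tfrac{\text{DP FLOP}}{\text{cycle}} = 33.6\ \text{GFLOPS/core}, \quad
33.6 \times 18 = 604.8\ \text{GFLOPS/socket}, \quad
604.8 \times 2 = 1209.6\ \text{GFLOPS/node}.
\]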

Page 19

TOP 500 List
http://www.top500.org/
• Ranking list of supercomputers in the world
• Performance (FLOPS rate) is measured by "Linpack", which solves large-scale linear equations
  – Since 1993
  – Updated twice a year (international conferences in June and November)
• Linpack
  – an iPhone version is available

Page 20

• PFLOPS: Peta (=10^15) FLoating-point OPerations per Sec.
• Exa-FLOPS (=10^18) will be attained after 2020

http://www.top500.org/

Page 21

Benchmarks
• TOP 500 (Linpack, HPL (High-Performance Linpack))
  – Direct linear solvers, FLOPS rate
  – Regular dense matrices, continuous memory access
  – Computing performance
• HPCG
  – Preconditioned iterative solvers, FLOPS rate
  – Irregular sparse matrices derived from FEM applications, with many zero components
    • Irregular/random memory access
    • Closer to "real" applications than HPL
  – Performance of memory and communications
• Green 500
  – FLOPS/W rate for HPL (TOP500)

Page 22

http://www.top500.org/

50th TOP500 List (November, 2017)
Rmax: Linpack performance (TFLOPS), Rpeak: peak performance (TFLOPS), Power: kW

 1. Sunway TaihuLight, National Supercomputing Center in Wuxi, China (Sunway MPP, Sunway SW26010 260C 1.45GHz, 2016, NRCPC): 10,649,600 cores, Rmax 93,015 (= 93.0 PF), Rpeak 125,436, Power 15,371
 2. Tianhe-2, National Supercomputing Center in Tianjin, China (Intel Xeon E5-2692, TH Express-2, Xeon Phi, 2013, NUDT): 3,120,000 cores, Rmax 33,863 (= 33.9 PF), Rpeak 54,902, Power 17,808
 3. Piz Daint, Swiss Natl. Supercomputer Center, Switzerland (Cray XC30/NVIDIA P100, 2013, Cray): 361,760 cores, Rmax 19,590, Rpeak 33,863, Power 2,272
 4. Gyoukou, JAMSTEC, Japan (ZettaScaler-2.2 HPC, Xeon D-1571, 2017, ExaScaler): 19,860,000 cores, Rmax 19,136, Rpeak 28,192, Power 1,350
 5. Titan, Oak Ridge National Laboratory, USA (Cray XK7/NVIDIA K20x, 2012, Cray): 560,640 cores, Rmax 17,590, Rpeak 27,113, Power 8,209
 6. Sequoia, Lawrence Livermore National Laboratory, USA (BlueGene/Q, 2011, IBM): 1,572,864 cores, Rmax 17,173, Rpeak 20,133, Power 7,890
 7. Trinity, DOE/NNSA/LANL/SNL (Cray XC40, Intel Xeon Phi 7250 68C 1.4GHz, Cray Aries, 2017, Cray): 979,968 cores, Rmax 14,137, Rpeak 43,903, Power 3,844
 8. Cori, DOE/SC/LBNL/NERSC, USA (Cray XC40, Intel Xeon Phi 7250 68C 1.4GHz, Cray Aries, 2016, Cray): 632,400 cores, Rmax 14,015, Rpeak 27,881, Power 3,939
 9. Oakforest-PACS, Joint Center for Advanced High Performance Computing, Japan (PRIMERGY CX600 M1, Intel Xeon Phi 7250 68C 1.4GHz, Intel Omni-Path, 2016, Fujitsu): 557,056 cores, Rmax 13,555, Rpeak 24,914, Power 2,719
10. K computer, RIKEN AICS, Japan (SPARC64 VIIIfx, 2011, Fujitsu): 705,024 cores, Rmax 10,510, Rpeak 11,280, Power 12,660

Page 23

http://www.hpcg-benchmark.org/

HPCG Ranking (November, 2017)
Computer: cores, HPL Rmax (PFLOPS), TOP500 rank, HPCG (PFLOPS), HPCG as % of peak

 1. K computer: 705,024 cores, HPL 10.510, TOP500 #10, HPCG 0.603, 5.3%
 2. Tianhe-2 (MilkyWay-2): 3,120,000 cores, HPL 33.863, TOP500 #2, HPCG 0.580, 1.1%
 3. Trinity: 979,072 cores, HPL 93.015, TOP500 #7, HPCG 0.546, 0.4%
 4. Piz Daint: 361,760 cores, HPL 19.590, TOP500 #3, HPCG 0.486, 1.9%
 5. Sunway TaihuLight: 10,649,600 cores, HPL 93.015, TOP500 #1, HPCG 0.481, 0.4%
 6. Oakforest-PACS: 557,056 cores, HPL 13.555, TOP500 #9, HPCG 0.386, 1.5%
 7. Cori: 632,400 cores, HPL 13.832, TOP500 #8, HPCG 0.355, 1.3%
 8. Sequoia: 1,572,864 cores, HPL 17.173, TOP500 #6, HPCG 0.330, 1.6%
 9. Titan: 560,640 cores, HPL 17.590, TOP500 #4, HPCG 0.322, 1.2%
10. TSUBAME 3: 136,080 cores, HPL 8.125, TOP500 #13, HPCG 0.189, 1.6%

Page 24

Green 500 Ranking (November, 2016)
Computer, Site (system/CPU): HPL Rmax (PFLOPS), TOP500 rank, Power (MW), GFLOPS/W

 1. DGX SATURNV, NVIDIA Corporation (NVIDIA DGX-1, Xeon E5-2698v4 20C 2.2GHz, Infiniband EDR, NVIDIA Tesla P100): 3.307, #28, 0.350, 9.462
 2. Piz Daint, Swiss National Supercomputing Centre (CSCS) (Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100): 9.779, #8, 1.312, 7.454
 3. Shoubu, RIKEN ACCS (ZettaScaler-1.6 etc.): 1.001, #116, 0.150, 6.674
 4. Sunway TaihuLight, National SC Center in Wuxi (Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway): 93.01, #1, 15.37, 6.051
 5. QPACE3, SFB/TR55 at Fujitsu Tech. Solutions GmbH (PRIMERGY CX1640 M1, Intel Xeon Phi 7210 64C 1.3GHz, Intel Omni-Path): 0.447, #375, 0.077, 5.806
 6. Oakforest-PACS, JCAHPC (PRIMERGY CX1640 M1, Intel Xeon Phi 7250 68C 1.4GHz, Intel Omni-Path): 13.555, #6, 2.719, 4.986
 7. Theta, DOE/SC/Argonne National Lab. (Cray XC40, Intel Xeon Phi 7230 64C 1.3GHz, Aries interconnect): 5.096, #18, 1.087, 4.688
 8. XStream, Stanford Research Computing Center (Cray CS-Storm, Intel Xeon E5-2680v2 10C 2.8GHz, Infiniband FDR, Nvidia K80): 0.781, #162, 0.190, 4.112
 9. Camphor 2, ACCMS, Kyoto University (Cray XC40, Intel Xeon Phi 7250 68C 1.4GHz, Aries interconnect): 3.057, #33, 0.748, 4.087
10. SciPhi XVI, Jefferson Natl. Accel. Facility (KOI Cluster, Intel Xeon Phi 7230 64C 1.3GHz, Intel Omni-Path): 0.426, #397, 0.111, 3.837

http://www.top500.org/

Page 25

Green 500 Ranking (June, 2017)
Computer, Site (system/CPU): HPL Rmax (TFLOPS), TOP500 rank, Power (kW), GFLOPS/W

 1. TSUBAME3.0, Tokyo Tech. (SGI ICE XA, IP139-SXM2, Xeon E5-2680v4, NVIDIA Tesla P100 SXM2, HPE): 1,998.0, #61, 142, 14.110
 2. kukai, Yahoo Japan (ZettaScaler-1.6, Xeon E5-2650Lv4, NVIDIA Tesla P100, ExaScaler): 460.7, #465, 33, 14.046
 3. AIST AI Cloud, AIST, Japan (NEC 4U-8GPU Server, Xeon E5-2630Lv4, NVIDIA Tesla P100 SXM2, NEC): 961.0, #148, 76, 12.681
 4. RAIDEN GPU subsystem, CAIP, RIKEN, Japan (NVIDIA DGX-1, Xeon E5-2698v4, NVIDIA Tesla P100, Fujitsu): 635.1, #305, 60, 10.603
 5. Wilkes-2, Univ. Cambridge, UK (Dell C4130, Xeon E5-2650v4, NVIDIA Tesla P100, Dell): 1,193.0, #100, 114, 10.428
 6. Piz Daint, Swiss Natl. SC. Center (CSCS) (Cray XC50, Xeon E5-2690v3, NVIDIA Tesla P100, Cray Inc.): 19,590.0, #3, 2,272, 10.398
 7. Gyoukou, JAMSTEC, Japan (ZettaScaler-2.0 HPC system, Xeon D-1571, PEZY-SC2, ExaScaler): 1,677.1, #69, 164, 10.226
 8. GOSAT-2 (RCF2), Inst. for Env. Studies, Japan (SGI Rackable C1104-GP1, Xeon E5-2650v4, NVIDIA Tesla P100, NSSOL/HPE): 770.4, #220, 79, 9.797
 9. Penguin Relion, Facebook, USA (Xeon E5-2698v4/E5-2650v4, NVIDIA Tesla P100, Acer Group): 3,307.0, #31, 350, 9.462
10. DGX Saturn V, NVIDIA, USA (Xeon E5-2698v4, NVIDIA Tesla P100, Nvidia): 3,307.0, #32, 350, 9.462
11. Reedbush-H, ITC, U.Tokyo, Japan (SGI Rackable C1102-GP8, Xeon E5-2695v4, NVIDIA Tesla P100 SXM2, HPE): 802.4, #203, 94, 8.575

http://www.top500.org/

Page 26

Green 500 Ranking (November, 2017)
Computer, Site (system/CPU): HPL Rmax (TFLOPS), TOP500 rank, Power (kW), GFLOPS/W

 1. Shoubu system B, RIKEN, Japan (ZettaScaler-2.2 HPC system, Xeon D-1571, PEZY-SC2, ExaScaler): 842.0, #259, 50, 17.009
 2. Suiren2, KEK, Japan (ZettaScaler-2.2 HPC system, Xeon D-1571, PEZY-SC2, ExaScaler): 788.2, #307, 47, 16.759
 3. Sakura, PEZY, Japan (ZettaScaler-2.2 HPC system, Xeon E5-2618Lv3, PEZY-SC2, ExaScaler): 824.7, #276, 50, 16.657
 4. DGX Saturn V Volta, NVIDIA, USA (Xeon E5-2698v4, NVIDIA Tesla V100, Nvidia): 1,070.0, #149, 97, 15.113
 5. Gyoukou, JAMSTEC, Japan (ZettaScaler-2.2 HPC system, Xeon D-1571, PEZY-SC2, ExaScaler): 19,135.8, #4, 1,350, 14.173
 6. TSUBAME3.0, Tokyo Tech. (SGI ICE XA, IP139-SXM2, Xeon E5-2680v4, NVIDIA Tesla P100 SXM2, HPE): 8,125.0, #13, 792, 13.704
 7. AIST AI Cloud, AIST, Japan (NEC 4U-8GPU Server, Xeon E5-2630Lv4, NVIDIA Tesla P100 SXM2, NEC): 961.0, #148, 76, 12.681
 8. RAIDEN GPU subsystem, CAIP, RIKEN, Japan (NVIDIA DGX-1, Xeon E5-2698v4, NVIDIA Tesla P100, Fujitsu): 635.1, #305, 60, 10.603
 9. Wilkes-2, Univ. Cambridge, UK (Dell C4130, Xeon E5-2650v4, NVIDIA Tesla P100, Dell): 1,193.0, #100, 114, 10.428
10. Piz Daint, Swiss Natl. SC. Center (CSCS) (Cray XC50, Xeon E5-2690v3, NVIDIA Tesla P100, Cray Inc.): 19,590.0, #3, 2,272, 10.398
11. Reedbush-L, ITC, U.Tokyo, Japan (SGI Rackable C1102-GP8, Xeon E5-2695v4, NVIDIA Tesla P100 SXM2, HPE): 805.6, #291, 79, 10.167

http://www.top500.org/

Page 27

Computational Science
The 3rd Pillar of Science
• Theoretical & Experimental Science
• Computational Science
  – The 3rd pillar of science
  – Simulations using supercomputers

[Figure: three pillars: Theoretical, Experimental, Computational]

Page 28

Methods for Scientific Computing
• Numerical solutions of PDEs (Partial Differential Equations)
• Grids, meshes, particles
  – Large-scale linear equations
  – Finer meshes provide more accurate solutions
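The link between the two sub-bullets, stated explicitly (a standard fact, added here for clarity): discretizing a PDE on a grid or mesh yields a sparse linear system

\[
A\,\mathbf{x} = \mathbf{b},
\]

whose number of unknowns grows as the mesh is refined, while the discretization error of typical second-order schemes shrinks roughly like O(h^2) with mesh spacing h; hence finer meshes give more accurate solutions at the cost of much larger systems.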

Page 29

3D Simulations for Earthquake Generation Cycle

San Andreas Fault, CA, USA
Stress Accumulation at Transcurrent Plate Boundaries

Page 30

Adaptive FEM: high resolution is needed at meshes with large deformation (large accumulation)

Page 31

Typhoon Simulations by FDM: Effect of Resolution
∆h=100km, ∆h=50km, ∆h=5km
[JAMSTEC]

Page 32

Simulation of Typhoon MANGKHUT in 2003 using the Earth Simulator
[JAMSTEC]

Page 33

Simulation of Geologic CO2 Storage
[Dr. Hajime Yamamoto, Taisei]

Page 34

Simulation of Geologic CO2 Storage
• International/Interdisciplinary Collaborations
  – Taisei (Science, Modeling)
  – Lawrence Berkeley National Laboratory, USA (Modeling)
  – Information Technology Center, The University of Tokyo (Algorithm, Software)
  – JAMSTEC (Earth Simulator Center) (Software, Hardware)
  – NEC (Software, Hardware)
• 2010 Japan Geotechnical Society (JGS) Award

Page 35

Simulation of Geologic CO2 Storage
• Science
  – Behavior of CO2 in the supercritical state at a deep reservoir
• PDEs
  – 3D Multiphase Flow (Liquid/Gas) + 3D Mass Transfer
• Method for Computation
  – TOUGH2 code, based on FVM and developed by Lawrence Berkeley National Laboratory, USA
    • More than 90% of the computation time is spent solving large-scale linear equations with more than 10^7 unknowns
• Numerical Algorithm
  – Fast algorithm for large-scale linear equations developed by the Information Technology Center, The University of Tokyo
• Supercomputers
  – Earth Simulator II (NEC SX-9, JAMSTEC, 130 TFLOPS)
  – Oakleaf-FX (Fujitsu PRIMEHPC FX10, U.Tokyo, 1.13 PFLOPS)

Page 36

COPYRIGHT©TAISEI CORPORATION ALL RIGHTS RESERVED

Diffusion-Dissolution-Convection Process

• Buoyant scCO2 overrides onto groundwater
• Dissolution of CO2 increases water density
• Denser fluid laid on lighter fluid
• Rayleigh-Taylor instability invokes convective mixing of groundwater

The mixing significantly enhances the CO2 dissolution into groundwater, resulting in more stable storage.

[Figure: supercritical CO2 injected through an injection well beneath a caprock (low-permeability seal), with convective mixing below]

Preliminary 2D simulation (Yamamoto et al., GHGT11) [Dr. Hajime Yamamoto, Taisei]

Page 37

COPYRIGHT©TAISEI CORPORATION ALL RIGHTS RESERVED

Density convections for 1,000 years: Flow Model
• The meter-scale fingers gradually developed into larger ones in the field-scale model
• A huge number of time steps (> 10^5) was required to complete the 1,000-year simulation
• Onset time (10-20 years) is comparable to theory (linear stability analysis, 15.5 years)

Only the far side of the vertical cross section passing through the injection well is depicted.

[Dr. Hajime Yamamoto, Taisei]

Page 38

COPYRIGHT©TAISEI CORPORATION ALL RIGHTS RESERVED

Simulation of Geologic CO2 Storage

Fujitsu FX10 (Oakleaf-FX), 30M DOF: 2x-3x improvement
30 million DoF (10 million grids × 3 DoF/grid node)
3D Multiphase Flow (Liquid/Gas) + 3D Mass Transfer

[Figure: calculation time (sec) vs. number of processors for TOUGH2-MP on FX10; average time for solving the matrix for one time step; 2-3 times speedup]

[Dr. Hajime Yamamoto, Taisei]

Page 39

Motivation for Parallel Computing (again)
• Large-scale parallel computers enable fast computing in large-scale scientific simulations with detailed models. Computational science develops new frontiers of science and engineering.
• Why parallel computing?
  – faster & larger
  – "larger" is more important from the viewpoint of "new frontiers of science & engineering", but "faster" is also important
  – + more complicated
  – Ideal: Scalable
    • Solving an Nx-scale problem using Nx computational resources in the same computation time (weak scaling)
    • Solving a fixed-size problem using Nx computational resources in 1/N of the computation time (strong scaling)

Page 40

• Supercomputers and Computational Science
• Overview of the Class
• Future Issues

Page 41

Our Current Target: Multicore Cluster
Multicore CPUs are connected through a network

[Figure: multiple multicore CPUs (4 cores each), each with its own local memory, connected through a network]

• OpenMP: multithreading, intra-node (intra-CPU), shared memory (see the sketch below)
• MPI: message passing, inter-node (inter-CPU), distributed memory

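As a concrete picture of the intra-node (OpenMP) level, a minimal self-contained sketch in Fortran; the program and array names are illustrative and not part of the lecture code. Compile with OpenMP enabled (e.g. an -fopenmp/-qopenmp type flag).

      program omp_axpy
      implicit none
      integer, parameter :: N = 100000
      integer :: i
      real(8) :: A(N), B(N), C(N)
      B= 1.0d0
      C= 2.0d0
!$omp parallel do
      do i= 1, N
        A(i)= B(i) + C(i)     ! independent iterations: safe to multithread
      enddo
!$omp end parallel do
      write(*,*) 'A(1)=', A(1), ' A(N)=', A(N)
      end program omp_axpy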

Page 42

Our Current Target: Multicore Cluster
Multicore CPUs are connected through a network

[Figure: multiple multicore CPUs (4 cores each), each with its own local memory, connected through a network]

• OpenMP: multithreading, intra-node (intra-CPU), shared memory
• MPI: message passing, inter-node (inter-CPU), distributed memory


Page 43

Our Current Target: Multicore Cluster
Multicore CPUs are connected through a network

[Figure: multiple multicore CPUs (4 cores each), each with its own local memory, connected through a network]

• OpenMP: multithreading, intra-node (intra-CPU), shared memory
• MPI (after October): message passing, inter-node (inter-CPU), distributed memory


Page 44

Flat MPI vs. Hybrid

Hybrid: hierarchical structure
Flat-MPI: each core is independent

[Figure: Hybrid - one process per node, with multiple OpenMP threads (cores) sharing that node's memory; Flat-MPI - every core runs its own MPI process with its own memory region]

• Flat-MPI: MPI only, intra/inter-node
• Hybrid: OpenMP intra-node + MPI inter-node (a minimal sketch follows)
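A minimal hybrid sketch of the second model (one MPI process per node or socket, OpenMP threads inside it); this is illustrative code, not the class program. Build with an MPI Fortran compiler wrapper (e.g. mpif90) and OpenMP enabled.

      program hybrid_hello
      use mpi
      use omp_lib
      implicit none
      integer :: ierr, provided, rank, nprocs, tid

!     request MPI_THREAD_FUNNELED: only the master thread calls MPI
      call MPI_Init_thread (MPI_THREAD_FUNNELED, provided, ierr)
      call MPI_Comm_rank (MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size (MPI_COMM_WORLD, nprocs, ierr)

!$omp parallel private(tid)
      tid= omp_get_thread_num()
      write(*,*) 'MPI process', rank, 'of', nprocs, ': thread', tid, 'of', omp_get_num_threads()
!$omp end parallel

      call MPI_Finalize (ierr)
      end program hybrid_hello

Run with, for example, a few MPI processes and several OpenMP threads per process; flat MPI would instead start one MPI process per core and omit the OpenMP region.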

Page 45

Example of OpenMP/MPI Hybrid
Sending Messages to Neighboring Processes
MPI: Message Passing, OpenMP: Threading with Directives

!C
!C-- SEND
      do neib= 1, NEIBPETOT
        II    = (LEVEL-1)*NEIBPETOT
        istart= STACK_EXPORT(II+neib-1)
        inum  = STACK_EXPORT(II+neib  ) - istart

!C      copy this neighbor's boundary data into the send buffer WS
!C      (the copy loop is multithreaded by OpenMP)
!$omp parallel do
        do k= istart+1, istart+inum
          WS(k-NE0)= X(NOD_EXPORT(k))
        enddo

!C      non-blocking MPI send of the buffer to neighboring process NEIBPE(neib)
        call MPI_Isend (WS(istart+1-NE0), inum, MPI_DOUBLE_PRECISION,   &
     &                  NEIBPE(neib), 0, MPI_COMM_WORLD,                &
     &                  req1(neib), ierr)
      enddo

Page 46

• In order to make full use of modern supercomputer systems with multicore/manycore architectures, hybrid parallel programming with message passing and multithreading is essential.
• MPI for message passing and OpenMP for multithreading are the most popular ways of parallel programming on multicore/manycore clusters.
• In this class, we "parallelize" a finite-volume method code with Krylov iterative solvers for Poisson's equation on the Reedbush-U supercomputer (Intel Broadwell-EP) at the University of Tokyo.
• Because of time limitations, we (mainly) focus on multithreading by OpenMP.
• The ICCG solver (Conjugate Gradient iterative solver with Incomplete Cholesky preconditioning) is a widely used method for solving linear equations.
• Because it includes "data dependency", where writing/reading data to/from memory could occur simultaneously, parallelization using OpenMP is not straightforward.
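To make the last bullet concrete, a minimal sketch (in Fortran, not the actual ICCG code) of the kind of loop meant by "data dependency": a forward sweep where each x(i) needs the x(i-1) that was just written, so simply putting !$omp parallel do on the loop gives wrong results.

      program dependency_demo
      implicit none
      integer, parameter :: N = 10
      integer :: i
      real(8) :: x(N), a(N), d(N), b(N)
      a= 0.5d0
      d= 2.0d0
      b= 1.0d0
      x(1)= 1.0d0
!     forward sweep: iteration i READS x(i-1) written by iteration i-1,
!     so the iterations are NOT independent and must not be naively threaded
      do i= 2, N
        x(i)= ( b(i) - a(i)*x(i-1) ) / d(i)
      enddo
      write(*,*) 'x(N)=', x(N)
      end program dependency_demo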

Page 47

• We need a certain kind of reordering in order to extract parallelism (see the sketch after the topic list below).
• Lectures and exercises on the following issues related to OpenMP will be conducted:
  – Finite-Volume Method (FVM)
  – Krylov Iterative Method
  – Preconditioning
  – Implementation of the Program
  – Introduction to OpenMP
  – Reordering/Coloring Method
  – Parallel FVM Code using OpenMP
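A minimal illustration of the reordering/coloring idea, using the simplest multicolor variant (two colors, red-black) on a 1D stencil; this is illustrative code, not the lecture's multicolor ICCG. Unknowns of one color only read unknowns of the other color, so each colored loop can safely carry !$omp parallel do, while the colors are processed one after another.

      program redblack_demo
      implicit none
      integer, parameter :: N = 16
      integer :: i, iter
      real(8) :: x(N), b(N)
      x= 0.0d0
      b= 1.0d0
      do iter= 1, 10
!$omp parallel do
        do i= 3, N-1, 2                  ! color 1: odd interior points
          x(i)= 0.5d0 * ( b(i) + x(i-1) + x(i+1) )
        enddo
!$omp parallel do
        do i= 2, N-1, 2                  ! color 2: even interior points
          x(i)= 0.5d0 * ( b(i) + x(i-1) + x(i+1) )
        enddo
      enddo
      write(*,*) 'x(N/2)=', x(N/2)
      end program redblack_demo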

Page 48

Date        ID     Title
Apr-09 (M)  CS-01  Introduction
Apr-16 (M)  CS-02  FVM (1/3)
Apr-23 (M)  CS-03  FVM (2/3)
Apr-30 (M)  (no class) (National Holiday)
May-07 (M)  CS-04  FVM (3/3)
May-14 (M)  CS-05  Login to Reedbush-U, OpenMP (1/3)
May-21 (M)  CS-06  OpenMP (2/3)
May-28 (M)  CS-07  OpenMP (3/3)
Jun-04 (M)  CS-08  Reordering (1/2)
Jun-11 (M)  CS-09  Reordering (2/2)
Jun-18 (M)  CS-10  Tuning
Jun-25 (M)  (canceled)
Jul-02 (M)  CS-11  Parallel Code by OpenMP (1/2)
Jul-09 (M)  CS-12  Parallel Code by OpenMP (2/2)
Jul-23 (M)  CS-13  Exercises

Page 49

“Prerequisites”

• Fundamental physics and mathematics
  – Linear algebra, analysis
• Experience with fundamental numerical algorithms
  – LU factorization/decomposition, Gauss-Seidel
• Experience in programming in C or Fortran
• Experience and knowledge of UNIX
• A user account for ECCS2016 must be obtained:
  – http://www.ecc.u-tokyo.ac.jp/doc/announce/newuser.html

Page 50

Strategy
• If you can develop programs by yourself, it is ideal... but difficult.
  – focused on "reading", not developing by yourself
  – programs are provided in C and Fortran
• Lectures are done by ...
• Lecture materials
  – available at NOON on Friday through the web
    • http://nkl.cc.u-tokyo.ac.jp/17s/
  – NO hardcopy is provided
• Starting at 08:30
  – You can enter the building after 08:00
• Take seats from the front row.
• Terminals must be shut down after class.

Page 51

Grades

• 1 or 2 Reports on programming

Page 52

If you have any questions, please feel free to contact me!
• Office: 3F Annex/Information Technology Center #36
  – 情報基盤センター別館3F 36号室 (Information Technology Center Annex, 3F, Room 36)
• ext.: 22719
• e-mail: nakajima(at)cc.u-tokyo.ac.jp
• NO specific office hours; appointment by e-mail
• http://nkl.cc.u-tokyo.ac.jp/18s/
• http://nkl.cc.u-tokyo.ac.jp/seminars/multicore/ (Japanese-language materials, in part)

Page 53

Keywords for OpenMP
• OpenMP
  – Directive-based, (seems to be) easy
  – Many books
• Data Dependency
  – Conflict of reading from / writing to memory
  – Appropriate reordering of data is needed for "consistent" parallel computing
  – NO detailed information in OpenMP books: very complicated

Page 54

Some Technical Terms
• Processor, Core
  – Processing unit (hardware); processor = core for single-core processors
• Process
  – Unit of MPI computation, nearly equal to "core"
  – Each core (or processor) can host multiple processes (but this is not efficient)
• PE (Processing Element)
  – PE originally means "processor", but it is sometimes used as "process" in this class. Moreover, it can mean "domain" (next)
    • In multicore processors, PE generally means "core"
• Domain
  – domain = process (= PE), each of the "MD" in "SPMD", each data set
• The process ID of MPI (ID of PE, ID of domain) starts from "0"
  – if you have 8 processes (PEs, domains), the IDs are 0 to 7
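A minimal example of the last point (illustrative, not class material): run with 8 processes and the printed IDs are 0 through 7.

      program rank_demo
      use mpi
      implicit none
      integer :: ierr, rank, nprocs
      call MPI_Init (ierr)
      call MPI_Comm_rank (MPI_COMM_WORLD, rank, ierr)   ! rank (PE/domain ID) starts at 0
      call MPI_Comm_size (MPI_COMM_WORLD, nprocs, ierr)
      write(*,*) 'process (PE/domain)', rank, 'of', nprocs
      call MPI_Finalize (ierr)
      end program rank_demo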

Page 55

• Supercomputers and Computational Science
• Overview of the Class
• Future Issues

Page 56

Key Issues towards Applications/Algorithms on Exa-Scale Systems
Jack Dongarra (ORNL/U. Tennessee) at ISC 2013
• Hybrid/Heterogeneous Architecture
  – Multicore + GPU/Manycores (Intel MIC/Xeon Phi)
• Data Movement, Hierarchy of Memory
• Communication/Synchronization Reducing Algorithms
• Mixed Precision Computation
• Auto-Tuning/Self-Adapting
• Fault Resilient Algorithms
• Reproducibility of Results

Page 57

Supercomputers with Heterogeneous/Hybrid Nodes

[Figure: nodes combining a multicore CPU (4 cores) with a GPU/manycore accelerator (many small cores), connected through a network]

Page 58

Hybrid Parallel Programming Model is Essential for Post-Peta/Exascale Computing
• Message Passing (e.g. MPI) + Multithreading (e.g. OpenMP, CUDA, OpenCL, OpenACC etc.)
• In the K computer and FX10, hybrid parallel programming is recommended
  – MPI + automatic parallelization by Fujitsu's compiler
• Expectations for Hybrid
  – Number of MPI processes (and sub-domains) to be reduced
  – O(10^8-10^9)-way MPI might not scale in exascale systems
  – Easily extended to heterogeneous architectures
    • CPU+GPU, CPU+Manycores (e.g. Intel MIC/Xeon Phi)
    • MPI+X: OpenMP, OpenACC, CUDA, OpenCL

Page 59

This class is also useful for this type of parallel system

[Figure: nodes combining a multicore CPU (4 cores) with a GPU/manycore accelerator (many small cores), connected through a network]

Page 60

Parallel Programming Models
• Multicore clusters (e.g. K, FX10)
  – MPI + OpenMP and (Fortran/C/C++)
• Multicore + GPU (e.g. Tsubame)
  – GPU needs a host CPU
  – MPI and [(Fortran/C/C++) + CUDA, OpenCL]
    • complicated
  – MPI and [(Fortran/C/C++) with OpenACC]
    • close to MPI + OpenMP and (Fortran/C/C++)
• Multicore + Intel MIC/Xeon Phi
• Intel MIC/Xeon Phi only
  – MPI + OpenMP and (Fortran/C/C++) is possible
    • + vectorization

Page 61

Future of Supercomputers (1/2)
• Technical Issues
  – Power consumption
  – Reliability, fault tolerance, fault resilience
  – Scalability (parallel performance)
• Petascale System
  – 2 MW including A/C, 2 M$/year, O(10^5-10^6) cores
• Exascale System (10^3 x petascale)
  – After 2021 (?)
    • 2 GW (2 B$/year!), O(10^8-10^9) cores
  – Various types of innovations are on-going
    • to keep power consumption at 20 MW (100x efficiency)
    • CPU, memory, network ...
  – Reliability

Page 62

Future of Supercomputers (2/2)

• Not only hardware, but also numerical models and algorithms must be improved:
  – Power-aware/reducing algorithms (省電力)
  – Fault-resilient algorithms (耐故障)
  – Communication-avoiding/reducing algorithms (通信削減)
• Co-design by experts from various areas (SMASH) is important
  – An exascale system will be a special-purpose system, not a general-purpose one.