# High Performance Computing Systems

Course structure, expectations, and overview
Doug Shook via Roger Chamberlain
(slides: dshook/cse566/lectures/Intro.pdf)
## Instructor

- Doug Shook
- Office: Jolley 534
- Office hours: Wednesdays 2:30-4:00
## Course Breakdown

- Website: www.cse.wustl.edu/~dshook/cse566
- Homework: 80%
  - 4 assignments over the course of the semester
  - Groups of 2
  - C/C++
- Presentation: 20%
  - Must submit a proposal by 7/5
## Homework

- Moderate amounts of programming
- More interested in answering (and asking!) questions:
  - How can we make this code run more efficiently?
  - What effects do different programming paradigms have?
  - How can we measure code performance?
- Experiment!
- Report
## Policies

- Grading will be done on a straight scale
  - Curved if necessary (hint: it probably won't be)
- Class attendance is not mandatory
- Late work / extensions
## Academic Dishonesty

- Collaboration is encouraged!
- Over the line:
  - Working in groups of more than 3
  - Showing your work to another group
  - Internet usage:
    - Finding sources, ideas, examples: OK
    - Copying text, ideas, code: not OK
- Zero tolerance
## Why Do We Need HPC?

- Classic example: weather forecasting
  - Hundreds of thousands of weather stations, all collecting large amounts of data
  - The atmosphere is modeled as a cubic grid of sample points
## Weather Modeling

- Assume that each grid point requires 100 floating-point instructions to process:
  - How many instructions for one point in time?
  - Hourly for 2 days?
- What would that look like on a...
  - Pentium 4 (2000)?
  - Core i7 (2012)?
  - NCAR weather supercomputer?
- And that's not even the entire planet...
## Grand Challenges

Computing goals set in the late 1980s by the U.S. government:
- Fluid dynamics
- Nature of matter
- Symbolic computations
- Evolution of galaxies
- Blood flow through a heart
## Modern Examples

- IBM Sequoia: 3rd-fastest supercomputer (at the time)
  - Used for nuclear weapons simulations
    - How to dispose of old weapons?
- IBM Watson
  - Jeopardy!
  - Human-computer interaction
- Pleiades
  - Owned and operated by NASA
  - Spaceflight simulations
  - Galaxy collisions
## HPC at Wash U

- Streaming computing
  - Autopipe (Dr. Chamberlain)
- Clusters
  - We're sitting in one
- Med School
  - Supercomputer with 1800 cores, 19 TFLOPS
## Goals

- Parallel programs exist, but don't always perform well...
  - Why?
  - How can they be improved?
  - How can we reliably measure performance?
- Programming paradigms
  - Shared memory, message passing, streaming
  - How does the choice of paradigm affect performance?
- Application
  - What can we learn from existing HPC systems?
  - What problems exist with current methods?
  - What can we expect from future systems?
## Tools

- Profilers
  - gprof
  - top
  - operf
  - perf
- Libraries / frameworks
  - BLAS
  - OpenMP
  - MPI
## Tools (continued)

- Languages
  - C/C++
  - FORTRAN
  - CUDA / OpenCL
  - VHDL / Verilog
- Platforms
  - CPU
  - GPU
  - FPGA
  - Hybrid