Make It So!
A Tale of Two Needs
Dr. Robert W. Wisniewski
Chief Software Architect, Extreme Scale Computing
Senior Principal Engineer, Intel Corporation
Smoky Mountains ConferenceSeptember 4, 2014
Copyright © 2014 Intel Corporation. All rights reserved.
Legal Disclaimer
Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.
Intel, processors, chipsets, and desktop boards may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel, Intel Xeon, Intel Core microarchitecture, and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
*Other names and brands may be claimed as the property of others.
Motivation for this Talk
Your Supercomputer 25 Years Ago
[Figure: a simple four-layer stack, bottom to top: hardware, kernel, compiler, application.]
Cray XC30 Software Stack
Motivation for this Talk
• Building Web Pages
• Recent PhD talk
• Turtles all the way down
• HW and SW getting more complex
Motivation for this Talk
• Building Web Pages
• Recent PhD talk
• Rabbits all the way down
• HW and SW getting more complex
1701D
• Supposition: analogous to dark silicon, let the “free” cycles do the work
Make It So
Before Proceeding
• Not for everyone
– Consider cut-through high performance
• Good for new applications
– Biological applications, for example
– Enables new ways of approaching the problem
• Delivering on the needs of the next machine, the next next machine, and the exascale machine
Peta → Exa → Zetta (PEZ)
Exascale is only a point on the continuum
Extreme-Scale Software Challenge
Revolutionary versus Evolutionary
• When investigations began
– Challenges too great with current SW
– Need an all-new OS, compiler, language…
OR
• Others advocated
– Enhance the capability of existing software
– Hard; drives an evolutionary approach
• Which one?
Revolutionary Only
Imagine vendors telling their customers: "Throw out everything you've done over the last 20+ years." Instead, leverage the tremendous investment in the Intel Architecture ecosystem.
Evolutionary Only
But there are serious challenges getting to exascale. Drive new innovations and invigorate the x86 ecosystem.
The Real Extreme-Scale Software Challenge
• The real challenge in moving software to extreme scale, and therefore the real solution, is figuring out how to incorporate and support existing computation paradigms in an evolutionary model while simultaneously supporting new revolutionary paradigms.
AND
1701D
• Supposition: analogous to dark silicon, let the “free” cycles do the work
Make It So
Maybe not so far-fetched; we have done this a bit already
• Compilers
• Demand paging
– No
– Yes
mOS: multiple Operating Systems
• Run multiple OSes on a node simultaneously
• Linux API with LWK performance
• Kernel is configured for the application
– Provides compatibility and performance/scalability/reliability
[Figure: mOS node architecture. On each compute node the application runs on a lightweight kernel (LWK) alongside Linux, with glibc/shim, AppAssist, and tools; compute nodes map N:1 to an OS node running Linux with I/O daemons; platform software is partitioned across the HPC system.]
Global OS: Resilience, Dynamic
• Resolve as many faults as possible
– Extra nodes swapped in automatically (proactively if possible)
– NVRAM accessible independent of the node
– Automatic checkpoint of DRAM to NVRAM
– For unresolvable faults, provide fault information to the application
[Figure: system-level view. Compute nodes (CN) expose a Linux API; OS nodes (OSN) and SMC nodes are connected over the HPC network with NVRAM; Argo and Hobbes are noted alongside.]
Runtimes and Libraries
Effort Pyramid
• Application (100+)
• Runtime/Lib (10+)
• HW/LL SW (1+)
[Figure: a runtime scheduling many tasks (task i through task n) across cores, with data and control flowing in and out.]
Runtime
• Overcommit
– Adaptive, asynchronous, load balance, fault tolerance
• Building-block approach
Data Management for Big Data
• Smooth and automatic representation between
– Application data structures in memory
– Representation and access to NVRAM
– Storage to disk
• Moving compute to data
• Application makes system calls
– make_permanent(*data), make_durable(*data)
[Figure: example application data, A[100][100][100] and graph_node { int value; edge e1; }, declared in main() and mapped between RAM and NVRAM.]
It was the Best of Times, It was the Worst of Times
• It was evolutionary, it was revolutionary
• It was research, it was development
Weaving the Two Tales (Needs) Together
– This is possible
[Figure: value over time. Research/out-of-box work and development (evolution & revolution) proceed together as a single unified project.]
Fundamental Discovery to Gain Fundamental Insights
Technical Computing Continues Its Rapid Growth
Source: IDC: Worldwide Technical Computing Server 2013–2017 Forecast; Other brands, names, and images are the property of their respective owners.
• Governments & Research | Commercial/Industrial | New Users – New Uses
• Business Transformation: Better Products, Faster Time to Market, Reduced R&D
• Big Data Analytics Enabling Data-Driven Science: from diagnosis to personalized treatment quickly (Genomics, Clinical Information)
HPC: Transforming the world of data and information into KNOWLEDGE
“My goal is simple. It is complete understanding of the universe, why it is as it is and why it exists at all”
Stephen Hawking
Conclusion
• The Best of Both Worlds
• Leverage PEZ(Y) cycles to provide higher-level and more productive abstractions to applications
• We need to build tomorrow's turtles/rabbits
• We will get to extreme scale (PEZ) by figuring out how to incorporate existing computation paradigms in an evolutionary model while simultaneously supporting new revolutionary paradigms
1701D
AND