TRANSCRIPT
Boosting Spark Performance: An Overview of Techniques
Ahsan Javed Awan
Motivation: About Me
● Erasmus Mundus Joint Doctoral Fellow at KTH Sweden and UPC Spain.
● Visiting Researcher at Barcelona Supercomputing Center.
● Speaker at Spark Summit Europe 2016.
● Wrote the Licentiate Thesis, “Performance Characterization of In-Memory Data Analytics with Apache Spark”.
● https://www.kth.se/profile/ajawan/
Motivation: Why should you listen?
● What's new in Apache Spark 2.0
  ● Phase 1: Memory Management and Cache-aware Algorithms
  ● Phase 2: Whole-stage Codegen and Columnar In-Memory Support
● How to get better performance by:
  ● Choosing and tuning the GC.
  ● Using multiple executors, each with a heap size of no more than 32 GB.
  ● Exploiting data locality on DRAM nodes.
  ● Turning off hardware prefetchers.
  ● Keeping hyper-threading on.
Motivation: Apache Spark Philosophy
Motivation: Cont...
[Figure: cloud scaling schemes, comparing scale-up frameworks (Phoenix++, Metis, Ostrich, etc.) with scale-out frameworks (Hadoop, Spark, Flink, etc.)]
*Source: http://navcode.info/2012/12/24/cloud-scaling-schemes/
Motivation: Cont..
*Source: SGI
● Exponential increase in core count.
● A mismatch between the characteristics of emerging big data workloads and the underlying hardware.
● Newer promising technologies (Hybrid Memory Cubes, NVRAM, etc.)
● Clearing the Clouds, ASPLOS '12
● Characterizing Data Analysis Workloads, IISWC '13
● Understanding the Behavior of In-Memory Computing Workloads, IISWC '14
Substantially improve the memory and CPU efficiency of Spark backend execution and push performance closer to the limits of modern hardware.
Goals of Project Tungsten
Phase 1
Foundation
Memory Management
Code Generation
Cache-aware Algorithms
Phase 2
Order-of-magnitude Faster
Whole-stage Codegen
Vectorization
Cont..
Perform explicit memory management instead of relying on Java objects
• Reduce memory footprint
• Eliminate garbage collection overheads
• Use sun.misc.Unsafe rows and off-heap memory
Code generation for expression evaluation
• Reduce virtual function calls and interpretation overhead
Cache-conscious sorting
• Reduce bad memory access patterns
Summary of Phase I
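The explicit memory management above can be illustrated outside Spark. Below is a minimal Java sketch (my own toy, not Spark's actual UnsafeRow or Tungsten memory manager): fixed-width rows live in an off-heap ByteBuffer, so they create no per-object garbage-collection pressure, and field access is plain offset arithmetic.

```java
import java.nio.ByteBuffer;

public class OffHeapRow {
    // Fixed-width row layout: an 8-byte long key followed by an 8-byte double value.
    static final int ROW_SIZE = 16;

    // allocateDirect places the buffer outside the GC-managed heap, so these
    // rows never contribute to garbage-collection work.
    static ByteBuffer buildRows(int n) {
        ByteBuffer rows = ByteBuffer.allocateDirect(n * ROW_SIZE);
        for (int i = 0; i < n; i++) {
            rows.putLong(i * ROW_SIZE, 1000 + i);      // key column
            rows.putDouble(i * ROW_SIZE + 8, i * 1.5); // value column
        }
        return rows;
    }

    // Field access is pointer arithmetic on the byte buffer, no object headers.
    static long keyAt(ByteBuffer rows, int i)     { return rows.getLong(i * ROW_SIZE); }
    static double valueAt(ByteBuffer rows, int i) { return rows.getDouble(i * ROW_SIZE + 8); }

    public static void main(String[] args) {
        ByteBuffer rows = buildRows(3);
        System.out.println(keyAt(rows, 2) + "," + valueAt(rows, 2)); // prints 1002,3.0
    }
}
```

Spark's real implementation uses sun.misc.Unsafe and its own page-based allocator; the ByteBuffer here only demonstrates the off-heap, fixed-layout idea.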
Which Benchmarks?
Our Hardware Configuration
Which Machine ?
Intel's Ivy Bridge Server
Performance of Cache-aware Algorithms?
DataFrames exhibit:
● 25% fewer back-end bound stalls
● 64% fewer DRAM-bound stalled cycles
● 25% less bandwidth consumption
● 10% less starvation of execution resources
Difficult to get order-of-magnitude performance speedups with profiling techniques
• For a 10x improvement, you would need to find top hotspots that add up to 90% of the time and make them instantaneous
• For 100x, 99%
Instead, look bottom-up: how fast should it run?
Phase 2
Scan
Filter
Project
Aggregate
select count(*) from store_sales
where ss_item_sk = 1000
Cont..
Standard for 30 years: almost all databases do it
Each operator is an “iterator” that consumes records from its input operator
class Filter(
    child: Operator,
    predicate: (Row => Boolean)) extends Operator {

  def next(): Row = {
    var current = child.next()
    // keep pulling rows from the child until one passes the predicate
    while (current != null && !predicate(current)) {
      current = child.next()
    }
    return current
  }
}
Volcano Iterator Model
select count(*) from store_sales
where ss_item_sk = 1000
long count = 0;
for (ss_item_sk in store_sales) {
  if (ss_item_sk == 1000) {
    count += 1;
  }
}
Hand Written Code
Volcano: 13.95 million rows/sec
College freshman (hand-written code): 125 million rows/sec
Note: End-to-end, single thread, single column, and data originated in Parquet on disk
High throughput
Volcano Model vs Hand Written Code
Volcano Model
1. Too many virtual function calls
2. Intermediate data in memory (or L1/L2/L3 cache)
3. Can’t take advantage of modern CPU features -- no loop unrolling, SIMD, pipelining, prefetching, branch prediction etc.
Hand-written code
1. No virtual function calls
2. Data in CPU registers
3. Compiler loop unrolling, SIMD, pipelining
Volcano vs Hand Written Code
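The contrast above can be sketched in plain Java (the data, class, and method names below are mine for illustration, not Spark's): a Volcano-style count pulls boxed rows one at a time through a virtual iterator call, while the hand-fused version is a single tight loop over a primitive array whose value stays in a register.

```java
import java.util.Arrays;
import java.util.Iterator;

public class VolcanoVsFused {
    // Volcano-style: each row crosses a virtual next() call and is boxed,
    // so intermediate data bounces through memory between "operators".
    static long volcanoCount(long[] table, long key) {
        Iterator<Long> scan = Arrays.stream(table).boxed().iterator();
        long count = 0;
        while (scan.hasNext()) {
            Long row = scan.next(); // virtual call + boxed row
            if (row == key) count++;
        }
        return count;
    }

    // Hand-fused: one loop, no virtual calls, primitive value in a register;
    // the JIT can unroll and vectorize this shape.
    static long fusedCount(long[] table, long key) {
        long count = 0;
        for (long v : table) {
            if (v == key) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        long[] storeSales = {1000, 42, 1000, 7, 1000};
        System.out.println(volcanoCount(storeSales, 1000)); // prints 3
        System.out.println(fusedCount(storeSales, 1000));   // prints 3
    }
}
```

Both return the same answer; the difference the slide quantifies (13.95 vs 125 million rows/sec) comes entirely from the execution shape, not the logic.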
Fusing operators together so the generated code looks like hand-optimized code:
- Identify chains of operators (“stages”)
- Compile each stage into a single function
- Functionality of a general-purpose execution engine; performance as if a hand-built system just to run your query
Whole-Stage Codegen
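As a toy illustration of compiling a stage into a single function (nothing like Spark's real CodegenContext; the names and the emitted shape are mine), a scan → filter → count chain can be flattened into the source text of one fused loop:

```java
public class FusedStageSketch {
    // Toy "codegen": emit one fused loop for a scan -> filter -> count stage.
    // The column name and predicate are parameters of the generated source.
    static String generate(String column, String predicate) {
        return "long count = 0;\n"
             + "for (long " + column + " : store_sales) {\n"
             + "  if (" + column + " " + predicate + ") { count += 1; }\n"
             + "}\n";
    }

    public static void main(String[] args) {
        // Emits source matching the hand-written example earlier in the talk.
        System.out.println(generate("ss_item_sk", "== 1000"));
    }
}
```

Spark additionally compiles the emitted source with Janino at runtime; this sketch stops at producing the text, which is enough to show why the fused stage has no operator boundaries left.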
In-memory Row Format
1 john 4.1
2 mike 3.5
3 sally 6.4
1 2 3
john mike sally
4.1 3.5 6.4
In-memory Column Format
Columnar In-Memory
1. More efficient: denser storage, regular data access, easier to index into
2. More compatible: Most high-performance external systems are already columnar (numpy, TensorFlow, Parquet); zero serialization/copy to work with them
3. Easier to extend: process encoded data
Why Columnar?
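A minimal Java sketch of the two layouts above (illustrative only; the data mirrors the slide's john/mike/sally table): summing the rating column in the row format means pointer-chasing through per-record objects and unboxing, while in the column format it is a sequential pass over a dense primitive array, exactly the access pattern CPUs prefetch and vectorize well.

```java
public class RowVsColumn {
    // Row format: one Object[] per record. Reading one column still touches
    // every record object and unboxes the Double.
    static double rowSum(Object[][] rows) {
        double sum = 0;
        for (Object[] r : rows) sum += (Double) r[2];
        return sum;
    }

    // Column format: each column is a dense primitive array. Scanning one
    // column is a contiguous, cache-friendly read with no boxing.
    static double columnSum(double[] ratings) {
        double sum = 0;
        for (double v : ratings) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        Object[][] rows = {
            {1, "john", 4.1}, {2, "mike", 3.5}, {3, "sally", 6.4}
        };
        double[] ratings = {4.1, 3.5, 6.4};
        System.out.println(rowSum(rows) + " " + columnSum(ratings));
    }
}
```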
Parquet: 11 million rows/sec
Parquet vectorized: 90 million rows/sec
Note: End-to-end, single thread, single column, and data originated in Parquet on disk
High throughput
Phase 1
Spark 1.4 - 1.6
Memory Management
Code Generation
Cache-aware Algorithms
Phase 2
Spark 2.0+
Whole-stage Code Generation
Columnar in Memory Support
Both whole-stage codegen [SPARK-12795] and the vectorized Parquet reader [SPARK-12992] are enabled by default in Spark 2.0+
5-30x Speedups
Operator Benchmarks: Cost/Row (ns)
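Since both features are on by default, the knobs only matter when you want to isolate one of them, e.g. while chasing a regression. A hedged spark-submit example (the jar name is a placeholder; both properties are real Spark SQL configuration keys):

```shell
# Defaults in Spark 2.0+ are 'true' for both; set either to false
# to measure a query with that optimization disabled.
spark-submit \
  --conf spark.sql.codegen.wholeStage=true \
  --conf spark.sql.parquet.enableVectorizedReader=true \
  my_job.jar
```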
1. SPARK-16026: Cost-Based Optimizer
- Leverage table/column-level statistics to optimize joins and aggregates
- Statistics Collection Framework (Spark 2.1)
- Cost-Based Optimizer (Spark 2.2)
2. Boosting Spark’s Performance on Many-Core Machines- Qifan’s Talk Today at 2:55pm (Research Track)
- In-memory/ single node shuffle
3. Improving quality of generated code and better integration with the in-memory column format in Spark
Spark 2.1, 2.2 and beyond
Motivation: The choice of garbage collector impacts the data-processing capability of the system
Improvement in DPS ranges from 1.4x to 3.7x on average with Parallel Scavenge as compared to G1
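Parallel Scavenge is the throughput-oriented collector selected by the standard -XX:+UseParallelGC flag. A hedged example of applying the slide's recommendation per executor and driver (the jar name is a placeholder):

```shell
# Select the Parallel Scavenge (throughput) collector for Spark JVMs;
# the slide reports 1.4x-3.7x better DPS than G1 in these experiments.
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:+UseParallelGC" \
  --conf "spark.driver.extraJavaOptions=-XX:+UseParallelGC" \
  my_job.jar
```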
Our Approach: Multiple small executors instead of a single large executor
Multiple small executors can provide up to a 36% performance gain
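A hedged spark-submit sketch of this sizing advice (executor counts and sizes are illustrative, and --num-executors applies to YARN deployments; the jar name is a placeholder). Keeping each heap below 32 GB also preserves the JVM's compressed object pointers, which is one reason the earlier slide recommends that ceiling:

```shell
# Several smaller executors rather than one machine-sized JVM:
# shorter GC pauses per heap and heaps small enough for compressed oops.
spark-submit \
  --num-executors 4 \
  --executor-memory 24g \
  --executor-cores 6 \
  my_job.jar
```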
Our Approach: NUMA Awareness
NUMA Awareness results in 10% speed up on average
Our Approach: Hyper-Threading is effective
Hyper-threading reduces DRAM-bound stalls by 50%
Our Approach: Disable next-line prefetchers
Disabling next-line prefetchers can improve the performance by 15%
Our Approach: Further Reading
● Performance characterization of in-memory data analytics on a modern cloud server, in 5th IEEE Conference on Big Data and Cloud Computing, 2015 (Best Paper Award).
● How Data Volume Affects Spark Based Data Analytics on a Scale-up Server, in 6th Workshop on Big Data Benchmarks, Performance Optimization and Emerging Hardware (BPOE), held in conjunction with VLDB 2015, Hawaii, USA.
● Micro-architectural Characterization of Apache Spark on Batch and Stream Processing Workloads, in 6th IEEE Conference on Big Data and Cloud Computing, 2016.
● Node Architecture Implications for In-Memory Data Analytics in Scale-in Clusters in 3rd IEEE/ACM Conference in Big Data Computing, Applications and Technologies, 2016.
● Implications of In-Memory Data Analytics with Apache Spark on Near Data Computing Architectures (under submission).
THANK YOU.
Acknowledgements: Sameer Agarwal for the Project Tungsten slides