FlinkML: Large-scale Machine Learning with Apache Flink

TRANSCRIPT

Theodore Vasiloudis, SICS
SICS Data Science Day, October 21st, 2015
Apache Flink
What is Apache Flink?
● Large-scale data processing engine
● Easy and powerful APIs for batch and real-time streaming analysis
● Backed by a very robust execution backend
  ○ true streaming dataflow engine
  ○ custom memory manager
  ○ native iterations
  ○ cost-based optimizer
What does Flink give us?
● Expressive APIs
● Pipelined stream processor
● Closed loop iterations
Expressive APIs
● Main distributed data abstraction: DataSet
● Program using functional-style transformations, creating a Dataflow.
import org.apache.flink.api.scala._

case class Word(word: String, frequency: Int)

val env = ExecutionEnvironment.getExecutionEnvironment
val lines: DataSet[String] = env.readTextFile(...)

lines.flatMap(line => line.split(" ").map(word => Word(word, 1)))
  .groupBy("word")
  .sum("frequency")
  .print()
Pipelined Stream Processor
Iterate in the Dataflow
Iterate by looping
● Loop in client submits one job per iteration step
● Reuse data by caching in memory or disk
Iterate in the Dataflow
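Flink instead embeds the loop inside the dataflow itself via the iterate operator, so no per-step job submission or explicit caching is needed. A minimal sketch, adapted from the Flink documentation's Monte-Carlo Pi-estimation example:

import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment

// Start from a single element and run 10,000 iteration steps inside the dataflow;
// each step draws one random point and counts hits inside the unit circle.
val initial = env.fromElements(0)

val count = initial.iterate(10000) { iterationInput =>
  iterationInput.map { i =>
    val x = Math.random()
    val y = Math.random()
    i + (if (x * x + y * y < 1) 1 else 0)
  }
}

count.map(c => c / 10000.0 * 4).print()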
Delta iterations
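Delta iterations additionally track a solution set and a shrinking workset, so each step only touches the elements that are still changing. A toy sketch with hypothetical data (not from the talk): each value is decremented until it reaches zero, and converged elements drop out of the workset:

import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment

// (key, value) pairs; the first tuple field is the solution-set key.
val initial: DataSet[(Int, Int)] = env.fromElements((1, 3), (2, 5), (3, 1))

val result = initial.iterateDelta(initial, 10, Array(0)) { (solutionSet, workset) =>
  val updated = workset.map { case (k, v) => (k, v - 1) }
  // Elements that reached zero leave the workset; the iteration terminates
  // when the workset is empty or the maximum of 10 steps is hit.
  val nextWorkset = updated.filter(_._2 > 0)
  (updated, nextWorkset)
}

result.print()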
Learn more in Vasia’s Gelly talk!
Large-scale Machine Learning
What do we mean?
● Small-scale learning
  ○ We have a small-scale learning problem when the active budget constraint is the number of examples.
● Large-scale learning
  ○ We have a large-scale learning problem when the active budget constraint is the computing time.

Source: Léon Bottou
What do we mean?

● What about the complexity of the problem?

Deep learning

"When you get to a trillion [parameters], you're getting to something that's got a chance of really understanding some stuff." - Hinton, 2013

Source: Wired Magazine
What do we mean?
● We have a large-scale learning problem when the active budget constraint is the computing time and/or the model complexity.
FlinkML
● New effort to bring large-scale machine learning to Flink
● Goals:
  ○ Truly scalable implementations
  ○ Keep glue code to a minimum
  ○ Ease of use
FlinkML: Overview
● Supervised Learning
  ○ Optimization framework
  ○ SVM
  ○ Multiple linear regression
● Recommendation
  ○ Alternating Least Squares (ALS)
● Pre-processing
  ○ Polynomial features
  ○ Feature scaling
● sklearn-like ML pipelines
FlinkML API
// LabeledVector is a feature vector with a label (class or real value)
val trainingData: DataSet[LabeledVector] = ...
val testingData: DataSet[Vector] = ...

val mlr = MultipleLinearRegression()
  .setStepsize(0.01)
  .setIterations(100)
  .setConvergenceThreshold(0.001)

mlr.fit(trainingData)

// The fitted model can now be used to make predictions
val predictions: DataSet[LabeledVector] = mlr.predict(testingData)
FlinkML Pipelines
val scaler = StandardScaler()
val polyFeatures = PolynomialFeatures().setDegree(3)
val mlr = MultipleLinearRegression()

// Construct a pipeline of standard scaler, polynomial features,
// and multiple linear regression
val pipeline = scaler.chainTransformer(polyFeatures).chainPredictor(mlr)

// Train the pipeline
pipeline.fit(trainingData)

// Calculate predictions
val predictions = pipeline.predict(testingData)
State of the art in large-scale ML
Alternating Least Squares
[Figure: the ratings matrix R (Users ✕ Items) is factorized as R ≅ X ✕ Y]
Naive Alternating Least Squares
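In the naive formulation we alternate: fix the item factors Y and solve a regularized least-squares problem for each user vector, then fix X and solve for each item vector. Ignoring the restriction to observed ratings for brevity (an assumption of this sketch), the closed-form updates are:

x_u = (Y^T Y + \lambda I)^{-1} Y^T r_u
y_i = (X^T X + \lambda I)^{-1} X^T r_i

where r_u is the u-th row of R, r_i its i-th column, and \lambda the regularization constant.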
Blocked Alternating Least Squares
Blocked ALS performance
FlinkML blocked ALS performance
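FlinkML's ALS uses the blocked formulation above. A minimal usage sketch with illustrative parameter values (the file name is hypothetical; the input is a DataSet of (userID, itemID, rating) triples):

import org.apache.flink.api.scala._
import org.apache.flink.ml.recommendation.ALS

val env = ExecutionEnvironment.getExecutionEnvironment

// (userID, itemID, rating) triples
val ratings: DataSet[(Int, Int, Double)] =
  env.readCsvFile[(Int, Int, Double)]("ratings.csv")

val als = ALS()
  .setIterations(10)
  .setNumFactors(40)
  .setBlocks(100) // number of user/item blocks: the key scalability knob

als.fit(ratings)

// Predict ratings for (userID, itemID) pairs
val unrated: DataSet[(Int, Int)] = env.fromElements((1, 42))
val predictions = als.predict(unrated)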
Going beyond SGD in large-scale optimization
● Beyond SGD → Use Primal-Dual framework
● Slow updates → Immediately apply local updates
● Average over batch size → Average over K (nodes) << batch size
CoCoA: Communication Efficient Coordinate Ascent
Primal-dual framework
Source: Smith (2014)
Immediately Apply Updates
Source: Smith (2014)
Average over nodes (K) instead of batches
Source: Smith (2014)
CoCoA: Communication Efficient Coordinate Ascent
CoCoA performance
Source: Jaggi (2014)
Available in FlinkML
SVM
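CoCoA is the solver behind FlinkML's SVM. A minimal usage sketch with illustrative parameter values, reusing the trainingData DataSet[LabeledVector] from the earlier API example:

import org.apache.flink.api.scala._
import org.apache.flink.ml.classification.SVM

val env = ExecutionEnvironment.getExecutionEnvironment

val svm = SVM()
  .setBlocks(env.getParallelism) // number of data partitions for CoCoA
  .setIterations(100)            // outer (communication) rounds
  .setLocalIterations(100)       // local dual-ascent steps per round
  .setRegularization(0.001)
  .setStepsize(0.1)

// trainingData: DataSet[LabeledVector] with labels in {-1, +1}
svm.fit(trainingData)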
Achieving model parallelism: the parameter server
● The parameter server is essentially a distributed key-value store with two basic commands: push and pull
  ○ push updates the model
  ○ pull retrieves a (lazily) updated model
● Allows us to store a model across multiple nodes, and to read and update it as needed (see the toy sketch below).
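A toy, single-process sketch of the push/pull interface, assuming a flat integer-keyed model; this is an illustration only, not the sharded, asynchronous implementation of Li et al. (2014):

import scala.collection.concurrent.TrieMap

// Toy in-memory model of the parameter server's key-value interface.
class ToyParameterServer {
  private val store = TrieMap.empty[Int, Double]

  // push: a worker sends (key, delta) updates that are folded into the model.
  // (Read-modify-write here is not atomic; a real server serializes or
  // aggregates concurrent updates.)
  def push(updates: Seq[(Int, Double)]): Unit =
    updates.foreach { case (k, delta) =>
      store.update(k, store.getOrElse(k, 0.0) + delta)
    }

  // pull: a worker retrieves the current values for the keys it needs
  def pull(keys: Seq[Int]): Map[Int, Double] =
    keys.map(k => k -> store.getOrElse(k, 0.0)).toMap
}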
Architecture of a parameter server communicating with groups of workers.
Source: Li (2014)
Comparison with other large-scale learning systems.
Source: Li (2014)
Dealing with stragglers: SSP Iterations
● BSP: Bulk Synchronous Parallel
  ○ Every worker needs to wait for the others to finish before starting the next iteration.
● ASP: Asynchronous Parallel
  ○ Every worker can work individually, updating the model as needed.
  ○ Can be fast, but can often diverge.
● SSP: Stale Synchronous Parallel
  ○ Relax the constraints, so the slowest workers can be up to K iterations behind the fastest ones.
  ○ Allows for progress, while keeping convergence guarantees.
Source: Ho et al. (2013)
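The SSP rule itself is compact. A minimal sketch with hypothetical names (real systems such as Ho et al.'s track per-worker clocks on the parameter server):

// A worker at iteration `workerClock` may proceed without waiting as long as
// it is at most `staleness` (K) iterations ahead of the slowest worker.
// staleness = 0 recovers BSP; an unbounded staleness recovers ASP.
def mayProceed(workerClock: Int, slowestClock: Int, staleness: Int): Boolean =
  workerClock - slowestClock <= staleness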
SSP Iterations in Flink: Lasso Regression
Source: Peel et al. (2015)
To be merged soon into FlinkML.
Current and future work on FlinkML
Coming soon
● Tooling
  ○ Evaluation & cross-validation framework
  ○ Predictive Model Markup Language (PMML)
● Algorithms
  ○ Quad-tree kNN search
  ○ Efficient streaming decision trees
  ○ k-means and extensions
  ○ Column-wise statistics, histograms
FlinkML Roadmap
● Hyper-parameter optimization
● More communication-efficient optimization algorithms
● Generalized Linear Models
● Latent Dirichlet Allocation
Future of Machine Learning on Flink

● Streaming ML
  ○ Flink already has SAMOA bindings.
  ○ We plan to kickstart the streaming ML library of Flink, and develop new algorithms.
● "Computation efficient" learning
  ○ Utilize hardware and develop novel systems and algorithms to achieve large-scale learning with modest computing resources.
Recent large-scale learning systems
Source: Xing (2015)
How to get here?
Demo?
Thank you.
References
● Flink Project: flink.apache.org
● FlinkML Docs: https://ci.apache.org/projects/flink/flink-docs-master/libs/ml/
● Léon Bottou: Learning with Large Datasets
● Wired: Computer Brain Escapes Google's X Lab to Supercharge Search
● Smith (2014): CoCoA AMPCAMP Presentation
● CMU Petuum: Petuum Project
● Jaggi (2014): "Communication-efficient distributed dual coordinate ascent." NIPS 2014.
● Li (2014): "Scaling distributed machine learning with the parameter server." OSDI 2014.
● Ho (2013): "More effective distributed ML via a stale synchronous parallel parameter server." NIPS 2013.
● Peel (2015): "Distributed Frank-Wolfe under Pipelined Stale Synchronous Parallelism." IEEE BigData 2015.
● Xing (2015): "Petuum: A New Platform for Distributed Machine Learning on Big Data." KDD 2015.
I would like to thank Professor Eric Xing for his permission to use parts of the structure from his great tutorial on large-scale machine learning: A New Look at the System, Algorithm and Theory Foundations of Distributed Machine Learning.
“Demo”