
Page 1: Carlos  Guestrin

Carnegie Mellon University

Parallel Machine Learning for Large-Scale Natural Graphs

Carlos Guestrin

The GraphLab Team: Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Joe Hellerstein, Jay Gu, Alex Smola

Page 2: Carlos  Guestrin

Parallelism is Difficult

Wide array of different parallel architectures: GPUs, Multicore, Clusters, Clouds, Supercomputers

Different challenges for each architecture

High-level abstractions make things easier

Page 3: Carlos  Guestrin

How will we design and implement parallel learning systems?

Page 4: Carlos  Guestrin

...a popular answer:

Map-Reduce / Hadoop

Build learning algorithms on top of high-level parallel abstractions

Page 5: Carlos  Guestrin

Map-Reduce for Data-Parallel ML

Excellent for large data-parallel tasks!

Data-Parallel (Map Reduce): Cross Validation, Feature Extraction, Computing Sufficient Statistics

Graph-Parallel: Belief Propagation, Label Propagation, Kernel Methods, Deep Belief Networks, Neural Networks, Tensor Factorization, PageRank, Lasso

Page 6: Carlos  Guestrin

Example of Graph Parallelism

Page 7: Carlos  Guestrin

PageRank Example

Iterate:

R[i] = α + (1 - α) * Σ_{j ∈ InNbrs(i)} R[j] / L[j]

Where: α is the random reset probability; L[j] is the number of links on page j

[Figure: small example web graph with pages 1-6]
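A minimal Python sketch of this iteration on an assumed toy link graph (α = 0.15 and the link structure are chosen only for illustration):

# PageRank iteration: R[i] = alpha + (1 - alpha) * sum_j R[j] / L[j],
# over pages j that link to i; L[j] is the number of links on page j.
ALPHA = 0.15
links = {1: [2, 3], 2: [3], 3: [1], 4: [3, 5], 5: [6], 6: [4]}   # page -> pages it links to

# Build reverse adjacency: page -> pages linking to it.
in_links = {i: [] for i in links}
for j, outs in links.items():
    for i in outs:
        in_links[i].append(j)

R = {i: 1.0 for i in links}
for _ in range(50):                       # iterate until (roughly) converged
    R = {i: ALPHA + (1 - ALPHA) * sum(R[j] / len(links[j]) for j in in_links[i])
         for i in links}
print(R)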

Page 8: Carlos  Guestrin

Properties of Graph Parallel Algorithms

Dependency Graph (my rank depends on my friends' ranks)

Local Updates

Iterative Computation

Page 9: Carlos  Guestrin

Addressing Graph-Parallel ML

We need alternatives to Map-Reduce

Data-Parallel (Map Reduce): Cross Validation, Feature Extraction, Computing Sufficient Statistics

Graph-Parallel (Map Reduce? Pregel (Giraph)?): Belief Propagation, SVM, Kernel Methods, Deep Belief Networks, Neural Networks, Tensor Factorization, PageRank, Lasso

Page 10: Carlos  Guestrin

Pregel (Giraph)

Bulk Synchronous Parallel Model: Compute, Communicate, Barrier

Page 11: Carlos  Guestrin

Problem: Bulk synchronous computation can be highly inefficient

Page 12: Carlos  Guestrin

BSP Systems Problem: Curse of the Slow Job

[Figure: in each iteration, CPU 1, CPU 2, and CPU 3 process their data partitions and then wait at a barrier; every iteration takes as long as the slowest job]

Page 13: Carlos  Guestrin

Bulk synchronous computation model provably inefficient for some ML tasks

Page 14: Carlos  Guestrin

BSP ML Problem: Data-Parallel Algorithms can be Inefficient

Limitations of the bulk synchronous model can lead to provably inefficient parallel algorithms

[Plot: runtime in seconds vs. number of CPUs (1-8), comparing Bulk Synchronous (Pregel) with Asynchronous Splash BP]

But distributed Splash BP was built from scratch… an efficient, parallel implementation was painful, painful, painful to achieve

Page 15: Carlos  Guestrin

The Need for a New Abstraction

If not Pregel, then what?

Data-Parallel (Map Reduce): Cross Validation, Feature Extraction, Computing Sufficient Statistics

Graph-Parallel (Pregel (Giraph)): Belief Propagation, SVM, Kernel Methods, Deep Belief Networks, Neural Networks, Tensor Factorization, PageRank, Lasso

Page 16: Carlos  Guestrin

The GraphLab Solution

Designed specifically for ML needs: expresses data dependencies, iterative

Simplifies the design of parallel programs: abstracts away hardware issues, automatic data synchronization

Addresses multiple hardware architectures: multicore, distributed, cloud computing; GPU implementation in progress

Page 17: Carlos  Guestrin

What is GraphLab?

Page 18: Carlos  Guestrin

The GraphLab Framework

Graph-Based Data Representation

Update Functions (User Computation)

Scheduler

Consistency Model

Page 19: Carlos  Guestrin

Data Graph

A graph with arbitrary data (C++ objects) associated with each vertex and edge.

Graph: social network

Vertex data: user profile text, current interests estimates

Edge data: similarity weights
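A minimal Python sketch of such a data graph (illustrative only, not the GraphLab C++ API): arbitrary objects keyed by vertex id and edge pair.

class DataGraph:
    """Toy data graph: arbitrary data attached to vertices and directed edges."""
    def __init__(self):
        self.vertex_data = {}      # vertex id -> arbitrary object
        self.edge_data = {}        # (src, dst) -> arbitrary object
        self.out_edges = {}        # vertex id -> list of dst ids
        self.in_edges = {}         # vertex id -> list of src ids

    def add_vertex(self, v, data):
        self.vertex_data[v] = data
        self.out_edges.setdefault(v, [])
        self.in_edges.setdefault(v, [])

    def add_edge(self, src, dst, data):
        self.edge_data[(src, dst)] = data
        self.out_edges.setdefault(src, []).append(dst)
        self.in_edges.setdefault(dst, []).append(src)

# Example: a tiny "social network" with profile text and a similarity weight.
g = DataGraph()
g.add_vertex("alice", {"profile": "likes ML", "interests": [0.5, 0.5]})
g.add_vertex("bob", {"profile": "likes graphs", "interests": [0.9, 0.1]})
g.add_edge("alice", "bob", {"similarity": 0.8})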

Page 20: Carlos  Guestrin

Update Functions

An update function is a user-defined program which, when applied to a vertex, transforms the data in the scope of the vertex.

pagerank(i, scope) {
  // Get Neighborhood data
  (R[i], w_ji, R[j]) <- scope;

  // Update the vertex data
  R[i] = α + (1 - α) * Σ_{j ∈ N[i]} w_ji * R[j];

  // Reschedule Neighbors if needed
  if R[i] changes then reschedule_neighbors_of(i);
}

Dynamic computation
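A minimal Python sketch of such a dynamic update function for PageRank (illustrative only, not the GraphLab C++ API; the scope is modeled with plain dicts, and the tolerance is an assumed parameter):

ALPHA, TOLERANCE = 0.15, 1e-3

def pagerank_update(i, rank, in_neighbors, out_neighbors):
    """Recompute R[i] from its in-neighbors; return the vertices to reschedule."""
    old = rank[i]
    rank[i] = ALPHA + (1 - ALPHA) * sum(
        rank[j] / len(out_neighbors[j]) for j in in_neighbors[i])
    # Dynamic computation: revisit the pages i links to only if R[i] changed enough.
    return out_neighbors[i] if abs(rank[i] - old) > TOLERANCE else []

# Toy scope: page 1 links to 2 and 3, etc.
out_nbrs = {1: [2, 3], 2: [3], 3: [1]}
in_nbrs = {1: [3], 2: [1], 3: [1, 2]}
R = {1: 0.0, 2: 0.0, 3: 0.0}
print(pagerank_update(1, R, in_nbrs, out_nbrs))   # vertices asked to be rescheduled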

Page 21: Carlos  Guestrin

The Scheduler

The scheduler determines the order that vertices are updated.

[Figure: CPU 1 and CPU 2 pull vertices (a, b, c, …) from the scheduler queue; updated vertices may be added back to the queue]

The process repeats until the scheduler is empty.
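A minimal sketch of this loop in Python (an assumed toy implementation, not GraphLab's actual schedulers): vertices are pulled from the schedule, updated, and any vertices the update requests are scheduled again, until the schedule is empty.

from collections import deque

def run_scheduler(update, vertices):
    """update(v) applies the user update function to v's scope and
    returns the vertices it wants rescheduled."""
    scheduled = deque(vertices)          # initial schedule
    in_queue = set(vertices)
    while scheduled:                     # repeat until the scheduler is empty
        v = scheduled.popleft()
        in_queue.discard(v)
        for u in update(v):              # dynamic rescheduling
            if u not in in_queue:
                scheduled.append(u)
                in_queue.add(u)

# Toy usage: an "update" that decrements a counter and reschedules its
# (hypothetical) neighbor while the counter is positive.
counters = {"a": 3, "b": 2}
neighbors = {"a": ["b"], "b": ["a"]}
def toy_update(v):
    counters[v] -= 1
    return neighbors[v] if counters[v] > 0 else []

run_scheduler(toy_update, ["a", "b"])
print(counters)   # both counters driven to 0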

Page 22: Carlos  Guestrin

The GraphLab Framework

Graph-Based Data Representation

Update Functions (User Computation)

Scheduler

Consistency Model

Page 23: Carlos  Guestrin

Ensuring Race-Free Code

How much can computation overlap?

Page 24: Carlos  Guestrin

Need for Consistency?

No consistency → higher throughput (#updates/sec), but potentially slower convergence of ML

Page 25: Carlos  Guestrin

Inconsistent ALS

[Plot: Train RMSE vs. number of updates, Netflix data, 8 cores; dynamic consistent vs. dynamic inconsistent updates]

Page 26: Carlos  Guestrin

Even Simple PageRank can be Dangerous

GraphLab_pagerank(scope) {
  ref sum = scope.center_value
  sum = 0
  forall (neighbor in scope.in_neighbors)
    sum = sum + neighbor.value / neighbor.num_out_edges
  sum = ALPHA + (1 - ALPHA) * sum
  …

Page 27: Carlos  Guestrin

Inconsistent PageRank

Page 28: Carlos  Guestrin

Even Simple PageRank can be Dangerous

GraphLab_pagerank(scope) {
  ref sum = scope.center_value
  sum = 0
  forall (neighbor in scope.in_neighbors)
    sum = sum + neighbor.value / neighbor.num_out_edges
  sum = ALPHA + (1 - ALPHA) * sum
  …

Read-write race: CPU 1 reads a bad PageRank estimate while CPU 2 is still computing the value.

Page 29: Carlos  Guestrin

Race Condition Can Be Very Subtle

Unstable:

GraphLab_pagerank(scope) {
  ref sum = scope.center_value
  sum = 0
  forall (neighbor in scope.in_neighbors)
    sum = sum + neighbor.value / neighbor.num_out_edges
  sum = ALPHA + (1 - ALPHA) * sum
  …

Stable:

GraphLab_pagerank(scope) {
  sum = 0
  forall (neighbor in scope.in_neighbors)
    sum = sum + neighbor.value / neighbor.num_out_edges
  sum = ALPHA + (1 - ALPHA) * sum
  scope.center_value = sum
  …

This was actually encountered in user code.

Page 30: Carlos  Guestrin

GraphLab Ensures Sequential Consistency

For each parallel execution, there exists a sequential execution of update functions which produces the same result.

[Figure: a parallel execution of update functions on CPU 1 and CPU 2 corresponds, over time, to an equivalent sequential execution on a single CPU]

Page 31: Carlos  Guestrin

Consistency Rules

Guaranteed sequential consistency for all update functions


Page 32: Carlos  Guestrin

Full Consistency

Page 33: Carlos  Guestrin

Obtaining More Parallelism

Page 34: Carlos  Guestrin

Edge Consistency

[Figure: under edge consistency, writes are limited to the center vertex and adjacent edges, and reads to adjacent vertices, so CPU 1 and CPU 2 can safely update non-adjacent vertices in parallel]
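A hedged Python sketch of scope locking for these consistency models (an assumed implementation, not necessarily GraphLab's): per-vertex locks over the whole scope, acquired in a global order to avoid deadlock. Exclusive locks everywhere correspond to full consistency; edge consistency would additionally allow concurrent reads of neighbor data.

import threading

locks = {}                                    # vertex id -> lock

def lock_for(v):
    return locks.setdefault(v, threading.Lock())

def with_scope_locks(center, neighbors, update):
    """Acquire the locks of the center vertex and its neighbors in a global
    (sorted) order, run the update on that scope, then release them."""
    scope = sorted(set([center] + list(neighbors)))
    acquired = []
    try:
        for v in scope:
            lock_for(v).acquire()
            acquired.append(v)
        update(center, neighbors)             # no overlapping scope runs concurrently
    finally:
        for v in reversed(acquired):
            lock_for(v).release()

# Example: two threads updating adjacent vertices serialize on the shared lock.
with_scope_locks(1, [2, 3], lambda c, nbrs: None)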

Page 35: Carlos  Guestrin

The GraphLab Framework

Graph-Based Data Representation

Update Functions (User Computation)

Scheduler

Consistency Model

Page 36: Carlos  Guestrin

Bayesian Tensor Factorization, Gibbs Sampling, Dynamic Block Gibbs Sampling, Matrix Factorization, Lasso, SVM, Belief Propagation, PageRank, CoEM, K-Means, SVD, LDA, Splash Sampler, Alternating Least Squares, Linear Solvers, …many others…

Page 37: Carlos  Guestrin

GraphLab vs. Pregel (BSP)

Multicore PageRank (25M Vertices, 355M Edges)

[Plot: number of vertices vs. number of updates — 51% of vertices were updated only once]

[Plot: L1 error vs. runtime (s) — GraphLab vs. Pregel (via GraphLab)]

[Plot: L1 error vs. number of updates — GraphLab vs. Pregel (via GraphLab)]

Page 38: Carlos  Guestrin

CoEM (Rosie Jones, 2005)

Named Entity Recognition Task: Is "Dog" an animal? Is "Catalina" a place?

Noun phrases (e.g. "the dog", "Australia", "Catalina Island") linked to contexts (e.g. "<X> ran quickly", "travelled to <X>", "<X> is pleasant")

Vertices: 2 Million; Edges: 200 Million

Hadoop: 95 cores, 7.5 hrs

Page 39: Carlos  Guestrin

CoEM (Rosie Jones, 2005)

[Plot: speedup vs. number of CPUs (up to 16) — GraphLab CoEM vs. optimal]

Hadoop: 95 cores, 7.5 hrs

GraphLab: 16 cores, 30 min

15x faster! 6x fewer CPUs!

Page 40: Carlos  Guestrin

Carnegie Mellon

GraphLab in the Cloud

Page 41: Carlos  Guestrin

CoEM (Rosie Jones, 2005)

[Plot: speedup vs. number of CPUs (up to 16), small and large problem sizes, vs. optimal]

Hadoop: 95 cores, 7.5 hrs

GraphLab: 16 cores, 30 min

GraphLab in the Cloud: 32 EC2 machines, 80 secs — 0.3% of Hadoop time

Page 42: Carlos  Guestrin

Video Cosegmentation

Segments mean the same

Model: 10.5 million nodes, 31 million edges

Gaussian EM clustering + BP on 3D grid

Page 43: Carlos  Guestrin

Video Coseg. Speedups

[Plot: speedup — GraphLab vs. ideal]

Page 44: Carlos  Guestrin

Video Coseg. Speedups

[Plot: speedup — GraphLab vs. ideal]

Page 45: Carlos  Guestrin

Cost-Time Tradeoff: video co-segmentation results

More machines: faster, but higher cost

A few machines helps a lot; beyond that, diminishing returns

Page 46: Carlos  Guestrin

Netflix Collaborative Filtering

Alternating Least Squares Matrix Factorization

Model: 0.5 million nodes, 99 million edges

[Figure: bipartite Netflix graph of users and movies]

[Plot: scaling of Hadoop, MPI, and GraphLab vs. ideal, for D=20 and D=100]

Page 47: Carlos  Guestrin

Multicore Abstraction Comparison

Netflix Matrix Factorization

[Plot: log test error vs. number of updates — dynamic vs. round robin scheduling]

Dynamic computation, faster convergence

Page 48: Carlos  Guestrin

The Cost of Hadoop

Page 49: Carlos  Guestrin

Carnegie Mellon University

Fault Tolerance

Page 50: Carlos  Guestrin

Fault Tolerance

Larger problems → increased chance of machine failure

GraphLab2 introduces two fault-tolerance (checkpointing) mechanisms: Synchronous Snapshots and Chandy-Lamport Asynchronous Snapshots

Page 51: Carlos  Guestrin

Synchronous Snapshots

[Figure: timeline — run GraphLab, stop at a barrier to take a snapshot, resume, repeat]

Page 52: Carlos  Guestrin

Curse of the slow machine

[Plot: progress over time — synchronous snapshot vs. no snapshot]

Page 53: Carlos  Guestrin

Curse of the Slow Machine

[Figure: timeline — one slow machine delays the barrier + snapshot, stalling all other machines]

Page 54: Carlos  Guestrin

Curse of the slow machine

[Plot: progress over time — synchronous snapshot, delayed synchronous snapshot, and no snapshot]

Page 55: Carlos  Guestrin

Asynchronous Snapshots

struct chandy_lamport {
  void operator()(icontext_type& context) {
    save(context.vertex_data());
    foreach (edge_type edge, context.in_edges()) {
      if (edge.source() was not marked as saved) {
        save(context.edge_data(edge));
        context.schedule(edge.source(), chandy_lamport());
      }
    }
    ... repeat for context.out_edges ...
    mark context.vertex() as saved;
  }
};

The Chandy-Lamport algorithm is implementable as a GraphLab update function! Requires edge consistency.
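A minimal Python simulation of the same idea (an assumed toy model, not GraphLab's implementation): each scheduled vertex saves its own data, saves any incident edge whose other endpoint has not yet been saved, schedules those neighbors, and marks itself saved.

from collections import deque

def async_snapshot(vertex_data, edge_data, neighbors, start):
    """Chandy-Lamport-style snapshot as a vertex program over an undirected toy graph.
    vertex_data: v -> value; edge_data: frozenset({u, v}) -> value; neighbors: v -> list.
    In practice every vertex (or one per connected component) is scheduled initially."""
    saved_vertices, saved_edges = {}, {}
    marked = set()
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v in marked:
            continue
        saved_vertices[v] = vertex_data[v]          # save vertex data
        for u in neighbors[v]:
            if u not in marked:                     # neighbor not yet saved
                e = frozenset((u, v))
                saved_edges[e] = edge_data[e]       # save edge data exactly once
                queue.append(u)                     # schedule the neighbor
        marked.add(v)                               # mark this vertex as saved
    return saved_vertices, saved_edges

# Toy usage on a triangle graph.
vd = {"a": 1, "b": 2, "c": 3}
ed = {frozenset(p): 0.5 for p in [("a", "b"), ("b", "c"), ("a", "c")]}
nb = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(async_snapshot(vd, ed, nb, "a"))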

Page 56: Carlos  Guestrin

Snapshot Performance

[Plot: progress over time — no snapshot, asynchronous snapshot, synchronous snapshot]

Page 57: Carlos  Guestrin

Snapshot with 15s fault injection

Halt 1 out of 16 machines for 15s

[Plot: progress over time — no snapshot, asynchronous snapshot, synchronous snapshot]

Page 58: Carlos  Guestrin

Why do we need to update the GraphLab Abstraction?

Page 59: Carlos  Guestrin

Natural Graphs

Page 60: Carlos  Guestrin

Natural Graphs: Power Law

Yahoo! Web Graph: 1.4B vertices, 6.7B edges

Top 1% of vertices is adjacent to 53% of the edges!

[Plot: "Power Law" degree distribution]

Page 61: Carlos  Guestrin

Problem: High Degree Vertices

High degree vertices limit parallelism: they touch a large amount of state, require heavy locking, and are processed sequentially

Page 62: Carlos  Guestrin

High Communication in Distributed Updates

Split gather and scatter across machines: data from neighbors is transmitted separately across the network

[Figure: vertex Y's neighborhood spans Machine 1 and Machine 2]

Page 63: Carlos  Guestrin

High Degree Vertices are Common

Netflix (users × movies): popular movies

"Social" graphs: popular people (e.g. Obama)

LDA (docs × words): common words, shared hyperparameters

Page 64: Carlos  Guestrin

Two Core Changes to Abstraction

1. Factorized Update Functors: monolithic updates decomposed into Gather, Apply, Scatter

2. Delta Update Functors: monolithic updates become composable update "messages", f1 ∘ f2

Page 65: Carlos  Guestrin

Decomposable Update Functors

Locks are acquired only for the region within a scope → relaxed consistency

Gather (user defined): accumulate data over the neighborhood as a parallel sum, Δ = Δ1 + Δ2 + …

Apply (user defined): apply the accumulated value Δ to the center vertex

Scatter (user defined): update adjacent edges and vertices

Page 66: Carlos  Guestrin

Factorized PageRank

double gather(scope, edge) {
  return edge.source().value().rank /
         scope.num_out_edge(edge.source())
}

double merge(acc1, acc2) { return acc1 + acc2 }

void apply(scope, accum) {
  old_value = scope.center_value().rank
  scope.center_value().rank = ALPHA + (1 - ALPHA) * accum
  scope.center_value().residual =
      abs(scope.center_value().rank - old_value)
}

void scatter(scope, edge) {
  if (scope.center_vertex().residual > EPSILON)
    reschedule(edge.target())
}
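For illustration, a minimal runnable Python version of this gather/merge/apply/scatter decomposition, driven by a toy sequential engine (the tiny graph, ALPHA, and EPSILON are assumptions; this is not the GraphLab API):

from collections import deque

ALPHA, EPSILON = 0.15, 1e-4
out_links = {1: [2, 3], 2: [3], 3: [1]}                       # toy directed graph
in_links = {v: [u for u in out_links if v in out_links[u]] for v in out_links}
rank = {v: 1.0 for v in out_links}

def gather(v, src):                 # contribution of one in-edge
    return rank[src] / len(out_links[src])

def merge(a, b):                    # combine partial gathers (parallel sum)
    return a + b

def apply_(v, accum):               # write the new rank, remember the residual
    old = rank[v]
    rank[v] = ALPHA + (1 - ALPHA) * accum
    return abs(rank[v] - old)

def scatter(v, residual):           # reschedule out-neighbors if we changed enough
    return out_links[v] if residual > EPSILON else []

queue, queued = deque(out_links), set(out_links)
while queue:
    v = queue.popleft()
    queued.discard(v)
    acc = 0.0
    for src in in_links[v]:
        acc = merge(acc, gather(v, src))
    for u in scatter(v, apply_(v, acc)):
        if u not in queued:
            queue.append(u)
            queued.add(u)
print(rank)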

Page 67: Carlos  Guestrin

Factorized Updates: Significant Decrease in Communication

Split gather and scatter across machines: only a small amount of data is transmitted over the network

[Figure: vertex Y's update factored as (F1 ∘ F2) across machines]

Page 68: Carlos  Guestrin

Factorized Consistency

Neighboring vertices may be updated simultaneously:

[Figure: vertices A and B gather at the same time]

Page 69: Carlos  Guestrin

Factorized Consistency Locking

Gather on an edge cannot occur during apply:

[Figure: vertex B gathers on its other neighbors while A is performing its Apply]

Page 70: Carlos  Guestrin

Decomposable Loopy Belief Propagation

Gather: Accumulates product of in messages

Apply: Updates central belief

Scatter: Computes out messages and schedules adjacent vertices

Page 71: Carlos  Guestrin

Decomposable Alternating Least Squares (ALS)

[Figure: bipartite Netflix graph of users and movies; ratings matrix ≈ user factors (W) × movie factors (X)]

Update Function:

Gather: sum terms over the neighboring factors w_i, x_j

Apply: matrix inversion & multiply
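To make the gather/apply split concrete, a hedged numpy sketch of one ALS user update (the regularization term lam and the least-squares formulation are standard ALS assumptions, not stated on the slide):

import numpy as np

def als_user_update(ratings, movie_factors, lam=0.1):
    """One decomposable ALS update for a single user.
    ratings: dict movie_id -> rating; movie_factors: dict movie_id -> (d,) array."""
    d = len(next(iter(movie_factors.values())))
    # Gather: sum the terms x_j x_j^T and r_ij x_j over the user's rated movies.
    A = lam * np.eye(d)
    b = np.zeros(d)
    for j, r in ratings.items():
        x = movie_factors[j]
        A += np.outer(x, x)
        b += r * x
    # Apply: matrix inversion & multiply -> new user factor w_i.
    return np.linalg.solve(A, b)

# Toy usage with d = 2 latent factors.
movies = {"m1": np.array([1.0, 0.0]), "m2": np.array([0.5, 0.5])}
print(als_user_update({"m1": 4.0, "m2": 2.0}, movies))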

Page 72: Carlos  Guestrin

Comparison of Abstractions

Multicore PageRank (25M Vertices, 355M Edges)

[Plot: L1 error vs. runtime (s) — GraphLab1 vs. Factorized Updates]

Page 73: Carlos  Guestrin

Need for Vertex Level Asynchrony

Exploit commutative associative “sum”

Costly gather for a single change!

[Figure: Y's value is a sum of its neighbors' contributions]

Page 74: Carlos  Guestrin

Commut-Assoc Vertex Level Asynchrony

Exploit commutative associative “sum”

[Figure: Y's value maintained as a running sum of neighbor contributions]

Page 75: Carlos  Guestrin

Commut-Assoc Vertex Level Asynchrony

Exploit commutative associative “sum”

[Figure: a change arrives as a +Δ added into Y's sum]

Page 76: Carlos  Guestrin

Delta Updates: Vertex Level Asynchrony

Exploit commutative associative “sum”

[Figure: Y keeps an old (cached) sum; a new Δ is simply added to it]

Page 77: Carlos  Guestrin

Delta Updates: Vertex Level Asynchrony

Exploit commutative associative “sum”

[Figure: pending Δs combine with each other and with the old (cached) sum]

Page 78: Carlos  Guestrin

Delta Update

void update(scope, delta) {
  scope.center_value() = scope.center_value() + delta
  if (abs(delta) > EPSILON) {
    out_delta = delta * (1 - ALPHA) / scope.num_out_edges()
    reschedule_out_neighbors(out_delta)
  }
}

double merge(delta1, delta2) { return delta1 + delta2 }

Program starts with: schedule_all(ALPHA)
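A minimal runnable Python sketch of this delta-based PageRank (the toy graph and tolerance are assumptions; deltas destined for the same vertex are merged by addition, as in the merge function above):

from collections import defaultdict

ALPHA, EPSILON = 0.15, 1e-6
out_links = {1: [2, 3], 2: [3], 3: [1]}          # toy directed graph

rank = {v: 0.0 for v in out_links}               # the old (cached) sum per vertex
pending = defaultdict(float)
for v in out_links:                              # program starts with schedule_all(ALPHA)
    pending[v] = ALPHA

while pending:
    v, delta = pending.popitem()                 # take one scheduled (vertex, delta)
    rank[v] += delta                             # apply the delta to the cached sum
    if abs(delta) > EPSILON:
        out_delta = delta * (1 - ALPHA) / len(out_links[v])
        for u in out_links[v]:
            pending[u] += out_delta              # merge deltas by addition
print(rank)   # approaches R[i] = ALPHA + (1 - ALPHA) * sum_j R[j] / L[j]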

Page 79: Carlos  Guestrin

Multicore Abstraction Comparison

Multicore PageRank (25M Vertices, 355M Edges)

[Plot: L1 error vs. runtime (s) — Delta, Factorized, GraphLab 1, Simulated Pregel]

Page 80: Carlos  Guestrin

Distributed Abstraction Comparison

Distributed PageRank (25M Vertices, 355M Edges)

[Plot: runtime (s) vs. # machines (8 CPUs per machine) — GraphLab1 vs. GraphLab2 (Delta Updates)]

[Plot: total communication (GB) vs. # machines (8 CPUs per machine) — GraphLab1 vs. GraphLab2 (Delta Updates)]

Page 81: Carlos  Guestrin

PageRank

Altavista Webgraph 2002: 1.4B vertices, 6.7B edges

Hadoop: 9000 s, 800 cores

Prototype GraphLab2: 431 s, 512 cores (known inefficiencies; 2x gain possible)

Page 82: Carlos  Guestrin

Summary of GraphLab2

Decomposed Update Functions (Gather, Apply, Scatter): expose parallelism in high-degree vertices

Delta Update Functions: expose asynchrony in high-degree vertices

Page 83: Carlos  Guestrin

Lessons Learned

Machine Learning:

Asynchronous often much faster than synchronous

Dynamic computation often faster; however, it can be difficult to define optimal thresholds: science to do!

Consistency can improve performance; sometimes required for convergence, though there are cases where relaxed consistency is sufficient

System:

Distributed asynchronous systems are harder to build, but no distributed barriers == better scalability and performance

Scaling up by an order of magnitude requires rethinking of design assumptions, e.g. distributed graph representation

High degree vertices & natural graphs can limit parallelism; need further assumptions on update functions

Page 84: Carlos  Guestrin

Startups Using GraphLab

2000+ unique downloads tracked (possibly many more from direct repository checkouts)

Companies experimenting with (or downloading) GraphLab

Academic projects exploring (or downloading) GraphLab


Page 85: Carlos  Guestrin

GraphLab Matrix Factorization Library

Used in ACM KDD Cup 2011 (track 1): 5th place out of more than 1000 participants [Wu et al.]

2 orders of magnitude faster than Mahout

Blended 12 matrix factorization algorithms

Page 86: Carlos  Guestrin

Summary

An abstraction tailored to Machine Learning: targets graph-parallel algorithms

Naturally expresses data/computational dependencies and dynamic iterative computation

Simplifies parallel algorithm design: automatically ensures data consistency

Achieves state-of-the-art parallel performance on a variety of problems

Page 87: Carlos  Guestrin

Carnegie Mellon

Parallel GraphLab 1.1 (Multicore): available today

GraphLab2 (in the Cloud): soon…

http://graphlab.org

Documentation… Code… Tutorials…

Page 88: Carlos  Guestrin

Next slide is an extra slide, if people ask about running the distributed asynchronous delta (which is basically asynchronous message passing) in a synchronous fashion (i.e. using Deltas in a Pregel implementation).

Page 89: Carlos  Guestrin

Distributed Abstraction Comparison

Distributed PageRank (25M Vertices, 355M Edges)

[Plot: runtime (s) vs. # machines (8 CPUs per machine) — GL 1 (Chromatic), GL 2 Delta (Synchronous), GL 2 Delta (Asynchronous)]

[Plot: total communication (GB) vs. # machines (8 CPUs per machine) — GL 1 (Chromatic), GL 2 Delta (Synchronous), GL 2 Delta (Asynchronous)]

Page 90: Carlos  Guestrin

Update Count Distribution

[Plot: number of vertices vs. number of updates]

Most vertices need to be updated infrequently