
  • Multinomial Logistic Regression with Apache Spark

    DB Tsai, Machine Learning Engineer

    May 1st, 2014

  • Machine Learning with Big Data: Hadoop MapReduce solutions

    MapReduce scales well for batch processing. However, many machine learning algorithms are iterative by nature. There are lots of tricks people use, like training with subsamples of the data and then averaging the models. Why do big data with approximation?

  • Lightning-fast cluster computing

    Empower users to iterate through the data by utilizing the in-memory cache.

    Logistic regression runs up to 100x faster than Hadoop M/R in memory.

    We're able to train the exact model without doing any approximation.

  • Binary Logistic Regression

    $d = \dfrac{a x_1 + b x_2 + c x_0}{\sqrt{a^2 + b^2}}$, where $x_0 = 1$

    $P(y=1 \mid x, w) = \dfrac{\exp(d)}{1 + \exp(d)} = \dfrac{\exp(x w)}{1 + \exp(x w)}$

    $P(y=0 \mid x, w) = \dfrac{1}{1 + \exp(x w)}$

    $\log \dfrac{P(y=1 \mid x, w)}{P(y=0 \mid x, w)} = x w$

    $w_0 = \dfrac{c}{\sqrt{a^2 + b^2}}, \quad w_1 = \dfrac{a}{\sqrt{a^2 + b^2}}, \quad w_2 = \dfrac{b}{\sqrt{a^2 + b^2}}$

    where $w_0$ is called the intercept.
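
    As a minimal illustration (not from the original slides; the object and method names below are this sketch's own), the binary model's predicted probability is just a sigmoid of the margin x·w:

```scala
// Minimal sketch of P(y = 1 | x, w) for binary logistic regression.
// The first feature x(0) = 1 carries the intercept w(0).
object BinaryLogistic {
  def dot(x: Array[Double], w: Array[Double]): Double =
    x.zip(w).map { case (xi, wi) => xi * wi }.sum

  def probabilityOfOne(x: Array[Double], w: Array[Double]): Double = {
    val margin = dot(x, w)                  // x . w
    1.0 / (1.0 + math.exp(-margin))         // = exp(x.w) / (1 + exp(x.w))
  }

  def main(args: Array[String]): Unit = {
    val x = Array(1.0, 0.5, -1.2)           // hypothetical sample, x(0) = 1
    val w = Array(0.3, 2.0, -0.7)           // hypothetical weights, w(0) = intercept
    println(probabilityOfOne(x, w))
  }
}
```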

  • Training Binary Logistic Regression: Maximum Likelihood Estimation

    From training data $X = (x_1, x_2, x_3, \ldots)$ and labels $Y = (y_1, y_2, y_3, \ldots)$, we want to find the $w$ that maximizes the likelihood of the data, defined by

    $L(w, x_1, \ldots, x_N) = P(y_1 \mid x_1, w)\, P(y_2 \mid x_2, w) \cdots P(y_N \mid x_N, w)$

    We can take the log of the equation and work with it instead; the (negative) log-likelihood becomes the loss function to minimize:

    $l(w, x) = \log P(y_1 \mid x_1, w) + \log P(y_2 \mid x_2, w) + \cdots + \log P(y_N \mid x_N, w)$

  • Optimization

    First Order Minimizer: requires the loss and the gradient of the loss function.

    Gradient Descent: $w_{n+1} = w_n - \gamma\, G$, where $\gamma$ is the step size
    Limited-memory BFGS (L-BFGS)
    Orthant-Wise Limited-memory Quasi-Newton (OWLQN)
    Coordinate Descent (CD)
    Trust Region Newton Method (TRON)

    Second Order Minimizer: requires the loss, gradient, and Hessian of the loss function.

    Newton-Raphson: $w_{n+1} = w_n - H^{-1} G$, quadratic convergence. Fast!

    Ref: Journal of Machine Learning Research 11 (2010) 3183-3234, Chih-Jen Lin et al.
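
    A minimal, single-machine sketch of the first-order update above (the step size, iteration count, and function names are assumptions for illustration, not from the slides):

```scala
// Plain gradient descent: w_{n+1} = w_n - gamma * G(w_n).
def gradientDescent(
    gradient: Array[Double] => Array[Double],   // G(w)
    init: Array[Double],
    gamma: Double = 0.1,                        // step size (assumed value)
    iterations: Int = 100): Array[Double] = {
  var w = init.clone()
  for (_ <- 0 until iterations) {
    val g = gradient(w)
    w = w.zip(g).map { case (wi, gi) => wi - gamma * gi }
  }
  w
}
```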

  • Problem of Second Order Minimizer

    Scales horizontally (in the number of training samples) by leveraging Spark to parallelize this iterative optimization process.

    Doesn't scale vertically (in the number of training features). The dimension of the Hessian is

    $\dim(H) = [(k-1)(n+1)]^2$, where $k$ is the number of classes and $n$ is the number of features.

    Recent applications from document classification and computational linguistics are of this type.

  • L-BFGS

    It's a quasi-Newton method. The Hessian matrix of second derivatives doesn't need to be evaluated directly; it is approximated using gradient evaluations.

    It converges much faster than the default optimizer in Spark, Gradient Descent.

    We love open source! Alpine Data Labs contributed our L-BFGS to Spark, and it's already merged in SPARK-1157.

  • Training Binary Logistic Regression

    $l(w, x) = \sum_{k=1}^{N} \log P(y_k \mid x_k, w)$

    $\quad = \sum_{k=1}^{N} \big[ y_k \log P(y_k=1 \mid x_k, w) + (1 - y_k) \log P(y_k=0 \mid x_k, w) \big]$

    $\quad = \sum_{k=1}^{N} \Big[ y_k \log \dfrac{\exp(x_k w)}{1 + \exp(x_k w)} + (1 - y_k) \log \dfrac{1}{1 + \exp(x_k w)} \Big]$

    $\quad = \sum_{k=1}^{N} \big[ y_k\, x_k w - \log(1 + \exp(x_k w)) \big]$

    Gradient: $G_i(w, x) = \dfrac{\partial\, l(w, x)}{\partial w_i} = \sum_{k=1}^{N} \Big[ y_k\, x_{ki} - \dfrac{\exp(x_k w)}{1 + \exp(x_k w)}\, x_{ki} \Big]$

    Hessian: $H_{ij}(w, x) = \dfrac{\partial\, l(w, x)}{\partial w_i \partial w_j} = -\sum_{k=1}^{N} \dfrac{\exp(x_k w)}{(1 + \exp(x_k w))^2}\, x_{ki}\, x_{kj}$
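
    As a minimal illustration (own helper names, not MLlib's API), the per-sample loss and gradient that the sums above run over can be written as follows; it returns the negative log-likelihood and its gradient, i.e. the quantity an optimizer would minimize:

```scala
// Per-sample negative log-likelihood and gradient for binary logistic regression.
// label is 0.0 or 1.0; features and weights have the same length.
def lossAndGradient(
    features: Array[Double],
    label: Double,
    weights: Array[Double]): (Double, Array[Double]) = {
  val margin = features.zip(weights).map { case (x, w) => x * w }.sum  // x_k . w
  val prob = 1.0 / (1.0 + math.exp(-margin))                           // sigma(x_k . w)
  val loss = math.log(1.0 + math.exp(margin)) - label * margin         // -[y x.w - log(1 + exp(x.w))]
  val grad = features.map(x => (prob - label) * x)                     // -(y - sigma(x.w)) x
  (loss, grad)
}
```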

  • Overfitting

    $P(y=1 \mid x, w) = \dfrac{\exp(z\, d)}{1 + \exp(z\, d)} = \dfrac{\exp(x w)}{1 + \exp(x w)}$

  • Regularization

    The loss function becomes

    $l_{total}(w, x) = l_{model}(w, x) + l_{reg}(w)$

    The loss function of the regularizer doesn't depend on the data. Common regularizers are

    L2 Regularization: $l_{reg}(w) = \sum_{i=1}^{N} w_i^2$

    L1 Regularization: $l_{reg}(w) = \sum_{i=1}^{N} |w_i|$

    The L1 norm is not differentiable at zero!

    $G(w, x)_{total} = G(w, x)_{model} + G(w)_{reg}$

    $H(w, x)_{total} = H(w, x)_{model} + H(w)_{reg}$
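
    A small sketch (the helper name and regParam are my own, not from the slides) of adding the L2 term to the model's loss and gradient, since the regularizer depends only on the weights:

```scala
// Add an L2 regularization term to an existing loss and gradient.
def addL2Regularization(
    loss: Double,
    gradient: Array[Double],
    weights: Array[Double],
    regParam: Double): (Double, Array[Double]) = {
  val regLoss = regParam * weights.map(w => w * w).sum                            // regParam * sum(w_i^2)
  val regGrad = gradient.zip(weights).map { case (g, w) => g + 2.0 * regParam * w }
  (loss + regLoss, regGrad)
}
```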

  • Mini School of Spark APIs

    map(func): Return a new distributed dataset formed by passing each element of the source through a function func.

    reduce(func): Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel. Unlike Hadoop M/R, there is no key involved here.

  • Example: No More Word Count! Let's compute the mean of numbers.

    [Diagram: the values 3.0 and 2.0 live on Executor 1, and 1.0 and 5.0 on Executor 2. map turns each value v into the tuple (v, 1); reduce sums tuples pairwise, e.g. (3.0, 1) and (2.0, 1) become (5.0, 2), and after the shuffle the final reduce yields (11.0, 4), i.e. sum = 11.0 over count = 4.]
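
    A minimal Spark sketch of this map/reduce mean (the SparkContext setup is assumed boilerplate, not part of the slide):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MeanWithMapReduce {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("mean").setMaster("local[2]"))
    val numbers = sc.parallelize(Seq(3.0, 2.0, 1.0, 5.0), numSlices = 2)

    // map: each value v becomes the tuple (v, 1); reduce: sum values and counts pairwise.
    val (sum, count) = numbers
      .map(v => (v, 1L))
      .reduce { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }

    println(s"mean = ${sum / count}")   // (11.0, 4) => 2.75
    sc.stop()
  }
}
```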

  • The previous example using map(func) will create a new tuple object for every element, which may cause Garbage Collection issues.

    As with the combiner in Hadoop MapReduce, there is no guarantee that the tuples emitted from map(func) will be combined locally in the same executor by reduce(func). This may increase the traffic in the shuffle phase.

    In Hadoop, we address this with an in-mapper combiner or by aggregating the result in global variables that have scope over the entire partition. The same approach can be used in Spark.

  • Mini School of Spark APIs

    mapPartitions(func): Similar to map, but runs separately on each partition (block) of the RDD, so func must be of type Iterator[T] => Iterator[U] when running on an RDD of type T.

    This API allows us to have global variables over an entire partition to aggregate the result locally and efficiently.

  • Better Mean of Numbers Implementation

    [Diagram: each executor starts with local accumulators var counts = 0; var sum = 0.0 and loops over its partition (3.0 and 2.0 on Executor 1; 1.0 and 5.0 on Executor 2). mapPartitions emits a single tuple per partition, (5.0, 2) and (6.0, 2); the shuffle moves only these two tuples, and the final reduce yields (11.0, 4).]
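
    A sketch of the same mean computed with mapPartitions, accumulating locally so that only one tuple per partition is shuffled (reusing the hypothetical `numbers` RDD from the previous sketch):

```scala
// Accumulate sum and count per partition, emitting a single tuple per partition.
val (sum, count) = numbers
  .mapPartitions { iter =>
    var localSum = 0.0
    var localCount = 0L
    iter.foreach { v =>
      localSum += v
      localCount += 1
    }
    Iterator.single((localSum, localCount))
  }
  .reduce { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }

println(s"mean = ${sum / count}")
```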

  • More Idiomatic Scala Implementation

    aggregate(zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U

    zeroValue is a neutral "zero value" with type U for initialization. This is analogous to var counts = 0; var sum = 0.0 in the previous example.

    seqOp is a function taking (U, T) => U, where U is the aggregator initialized as zeroValue and T is each line of values in the RDD. This is analogous to mapPartitions in the previous example.

    combOp is a function taking (U, U) => U. This essentially combines the results between different executors. The functionality is the same as reduce(func) in the previous example.
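
    The same mean written with aggregate, continuing the hypothetical `numbers` RDD from the sketches above:

```scala
// zeroValue (0.0, 0L) plays the role of the local sum/count accumulators.
val (sum, count) = numbers.aggregate((0.0, 0L))(
  seqOp  = { case ((s, c), v)          => (s + v, c + 1) },     // fold each value into the local accumulator
  combOp = { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }  // merge accumulators across executors
)

println(s"mean = ${sum / count}")
```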

  • Approach of Parallelization in Spark

    Spark Driver JVM (time flows downward in the original diagram):

    1) Find available resources in the cluster and launch executor JVMs. Also initialize the weights.
    2) Ask executors to load the data into the executors' JVMs.
    3) Ask executors to compute the loss and gradient of each training sample (each row) given the current weights. Get the aggregated results after the Reduce Phase in the executors.
    5) If regularization is enabled, compute the loss and gradient of the regularizer in the driver, since it doesn't depend on the training data but only on the weights. Add them to the results from the executors.
    6) Plug the loss and gradient from the model and regularizer into the optimizer to get the new weights. If the differences of weights and losses are larger than the criteria, GO BACK TO 3).
    7) Finish the model training!

    Spark Executor 1 and 2 JVMs:

    2) Try to load the data into memory. If the data is bigger than memory, it will be partially cached. (Data locality from the source is taken care of by Spark.)
    3) Map Phase: Compute the loss and gradient of each row of training data locally, given the weights obtained from the driver. Either emit each result or sum them up in local aggregators.
    4) Reduce Phase: Sum up the losses and gradients emitted from the Map Phase.
    While the driver works, the executors are taking a rest!

  • Step 3) and 4)

    This is the implementation of step 3) and 4) in MLlib before Spark 1.0.

    gradient can have implementations of Logistic Regression, Linear Regression, SVM, or any customized cost function.

    Each training sample creates a new grad object after gradient.compute is called.
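
    The slide's code screenshot isn't in the transcript; below is a rough sketch of what such a map-then-reduce implementation could look like, not the actual MLlib source. It assumes a hypothetical `trainingData: RDD[(Double, Array[Double])]` of (label, features) rows, the current `weights`, and the `lossAndGradient` helper sketched earlier:

```scala
// Illustrative only: per-row (loss, gradient) computed in a map, then summed in a reduce.
// Every map call allocates a fresh gradient array, which is what the GC remark above refers to.
val (lossSum, gradientSum) = trainingData
  .map { case (label, features) =>
    lossAndGradient(features, label, weights)
  }
  .reduce { case ((l1, g1), (l2, g2)) =>
    (l1 + l2, g1.zip(g2).map { case (a, b) => a + b })
  }
```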

  • Step 3) and 4) with mapPartitions
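
    Again the screenshot is missing from the transcript; a hedged sketch of the mapPartitions variant, accumulating into one local loss/gradient pair per partition (same assumed `trainingData`, `weights`, and `lossAndGradient` as above):

```scala
// Illustrative only: one local accumulator pair per partition instead of one object per row.
val (lossSum, gradientSum) = trainingData
  .mapPartitions { iter =>
    var localLoss = 0.0
    val localGrad = new Array[Double](weights.length)
    iter.foreach { case (label, features) =>
      val (loss, grad) = lossAndGradient(features, label, weights)
      localLoss += loss
      var i = 0
      while (i < localGrad.length) { localGrad(i) += grad(i); i += 1 }
    }
    Iterator.single((localLoss, localGrad))
  }
  .reduce { case ((l1, g1), (l2, g2)) =>
    (l1 + l2, g1.zip(g2).map { case (a, b) => a + b })
  }
```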

  • Step 3) and 4) with aggregate

    This is the implementation of step 3) and 4) in MLlib in the coming Spark 1.0.

    No unnecessary object creation! This helps when we're dealing with training data that has a large number of features; GC will not kick in on the executor JVMs.
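
    A hedged sketch of the aggregate version (my own names, not the MLlib implementation): the zero value holds the running loss and a reusable gradient array, seqOp folds each row into it in place, and combOp merges the per-executor results. Spark gives every partition its own deserialized copy of the zero value, so mutating it locally is safe:

```scala
// Illustrative only: aggregate with in-place accumulation, no per-row tuple allocation.
val (lossSum, gradientSum) = trainingData.aggregate((0.0, new Array[Double](weights.length)))(
  seqOp = { case ((loss, grad), (label, features)) =>
    val (l, g) = lossAndGradient(features, label, weights)
    var i = 0
    while (i < grad.length) { grad(i) += g(i); i += 1 }   // accumulate into the existing array
    (loss + l, grad)
  },
  combOp = { case ((l1, g1), (l2, g2)) =>
    var i = 0
    while (i < g1.length) { g1(i) += g2(i); i += 1 }      // merge executor-level results
    (l1 + l2, g1)
  }
)
```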

  • Extension to Multinomial Logistic Regression

    In Binary Logistic Regression:

    $\log \dfrac{P(y=1 \mid x, w)}{P(y=0 \mid x, w)} = x w$

    For a K-class multinomial problem where labels range over [0, K-1], we can generalize it via

    $\log \dfrac{P(y=1 \mid x, w)}{P(y=0 \mid x, w)} = x w_1$

    $\log \dfrac{P(y=2 \mid x, w)}{P(y=0 \mid x, w)} = x w_2$

    $\ldots$

    $\log \dfrac{P(y=K-1 \mid x, w)}{P(y=0 \mid x, w)} = x w_{K-1}$

    The model weights $w = (w_1, w_2, \ldots, w_{K-1})^T$ become a $(K-1) \times (N+1)$ matrix, where N is the number of features.

    $P(y=0 \mid x, w) = \dfrac{1}{1 + \sum_{i=1}^{K-1} \exp(x w_i)}$

    $P(y=1 \mid x, w) = \dfrac{\exp(x w_1)}{1 + \sum_{i=1}^{K-1} \exp(x w_i)}$

    $\ldots$

    $P(y=K-1 \mid x, w) = \dfrac{\exp(x w_{K-1})}{1 + \sum_{i=1}^{K-1} \exp(x w_i)}$
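
    A minimal sketch (own names, not MLlib's API) of these class probabilities, with class 0 as the pivot:

```scala
// weights(i) holds w_{i+1}, the weight vector for class i+1; class 0 is the pivot class.
def classProbabilities(x: Array[Double], weights: Array[Array[Double]]): Array[Double] = {
  val margins = weights.map(w => x.zip(w).map { case (xi, wi) => xi * wi }.sum)  // x . w_i
  val expMargins = margins.map(math.exp)
  val denom = 1.0 + expMargins.sum
  (1.0 +: expMargins).map(_ / denom)   // P(y = 0), P(y = 1), ..., P(y = K-1)
}
```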

  • Training Multinomial Logistic Regression

    $l(w, x) = \sum_{k=1}^{N} \log P(y_k \mid x_k, w)$

    $\quad = \sum_{k=1}^{N} \big[ \delta(y_k) \log P(y=0 \mid x_k, w) + (1 - \delta(y_k)) \log P(y_k \mid x_k, w) \big]$

    $\quad = \sum_{k=1}^{N} \Big[ \delta(y_k) \log \dfrac{1}{1 + \sum_{i=1}^{K-1} \exp(x_k w_i)} + (1 - \delta(y_k)) \log \dfrac{\exp(x_k w_{y_k})}{1 + \sum_{i=1}^{K-1} \exp(x_k w_i)} \Big]$

    $\quad = \sum_{k=1}^{N} \Big[ (1 - \delta(y_k))\, x_k w_{y_k} - \log\big(1 + \sum_{i=1}^{K-1} \exp(x_k w_i)\big) \Big]$

    Define: $\delta(y_k) = 1$ if $y_k = 0$, and $\delta(y_k) = 0$ if $y_k \neq 0$.

    Gradient: $G_{ij}(w, x) = \dfrac{\partial\, l(w, x)}{\partial w_{ij}} = \sum_{k=1}^{N} \Big[ (1 - \delta(y_k))\, x_{kj}\, \delta_{i, y_k} - \dfrac{\exp(x_k w_i)}{1 + \sum_{i'=1}^{K-1} \exp(x_k w_{i'})}\, x_{kj} \Big]$

    Note that the first index i is for classes, and the second index j is for features.

    Hessian: $H_{ij, lm}(w, x) = \dfrac{\partial\, l(w, x)}{\partial w_{ij}\, \partial w_{lm}}$

  • a9a Dataset Benchmark

    [Figure: Logistic Regression with the a9a dataset (11M rows, 123 features, 11% non-zero elements); 16 executors on Intel Xeon E3-1230v3, 32GB memory x 5 nodes, Hadoop 2.0.5-alpha cluster. Y-axis: log-likelihood / number of samples; X-axis: seconds. Curves: L-BFGS Dense Features, L-BFGS Sparse Features, GD Sparse Features, GD Dense Features.]

  • a9a Dataset Benchmark

    [Figure: same a9a dataset and cluster setup. Y-axis: log-likelihood / number of samples; X-axis: iterations. Curves: L-BFGS, GD.]

  • rcv1 Dataset Benchmark

    [Figure: Logistic Regression with the rcv1 dataset (6.8M rows, 677,399 features, 0.15% non-zero elements); same cluster setup. Y-axis: log-likelihood / number of samples; X-axis: seconds. Curves: L-BFGS Sparse Vector, GD Sparse Vector.]

  • news20 Dataset Benchmark

    [Figure: Logistic Regression with the news20 dataset (0.14M rows, 1,355,191 features, 0.034% non-zero elements); same cluster setup. Y-axis: log-likelihood / number of samples; X-axis: seconds. Curves: L-BFGS Sparse Vector, GD Sparse Vector.]

  • Alpine Demo

  • Alpine Demo

  • Alpine Demo

  • Conclusion

    Alpine supports the state-of-the-art Cloudera CDH5.

    Spark runs programs up to 100x faster than Hadoop.

    Spark turns iterative big data machine learning problems into single machine problems.

    MLlib provides lots of state-of-the-art machine learning implementations, e.g. K-means, linear regression, logistic regression, ALS, collaborative filtering, and Naive Bayes, etc.

    Spark provides easy-to-use APIs in Java, Scala, or Python, and interactive Scala and Python shells.

    Spark is an Apache project, and it's the most active big data open source project nowadays.
