OverSketch: Approximate Matrix Multiplication for the Cloud
Vipul Gupta∗, Shusen Wang†, Thomas Courtade∗ and Kannan Ramchandran∗
∗Department of EECS, UC Berkeley
†Department of CS, Stevens Institute of Technology
Email: {vipul_gupta, courtade, kannanr}@eecs.berkeley.edu, [email protected]
Abstract—We propose OverSketch, an approximate algorithm for distributed matrix multiplication in serverless computing. OverSketch leverages ideas from matrix sketching and high-performance computing to enable cost-efficient multiplication that is resilient to faults and straggling nodes pervasive in low-cost serverless architectures. We establish statistical guarantees on the accuracy of OverSketch and empirically validate our results by solving a large-scale linear program using interior-point methods, demonstrating a 34% reduction in compute time on AWS Lambda.
Keywords—serverless computing, straggler mitigation, sketched matrix multiplication
I. INTRODUCTION
Matrix multiplication is a frequent computational bottleneck in fields like scientific computing, machine learning, and graph processing. In many applications, the matrices involved are very large, with dimensions easily scaling up to millions. Consequently, the last three decades have witnessed the development of many algorithms for parallel matrix multiplication for High Performance Computing (HPC). During the same period, technological trends like Moore's law made arithmetic operations faster and, as a result, the bottleneck for parallel computation shifted from computation to communication. Today, the cost of moving data between nodes exceeds the cost of arithmetic operations by orders of magnitude. This gap is increasing exponentially with time and has led to the popularity of communication-avoiding algorithms for parallel computation [1], [2].
In the last few years, there has been a paradigm shift from HPC towards distributed computing on the cloud due to extensive and inexpensive commercial offerings. In spite of developments in recent years, server-based cloud computing is inaccessible to a large number of users due to complex cluster management and a myriad of configuration tools. Serverless computing¹ has recently begun to fill this void by abstracting away the need for maintaining servers, thus removing the need for complicated cluster management while providing greater elasticity and easy scalability [3]–[5]. Some examples are Amazon Web Services (AWS) Lambda and Google Cloud Functions. Large-scale matrix
This work was supported by NSF Grants CCF-1703678, CCF-1704967, and CCF-0939370 (Center for Science of Information).
¹The term 'serverless' is a misnomer: servers are still used for computation, but their maintenance and provisioning are hidden from the user.
[Figure 1 graphic omitted: scatter of worker job times, with stragglers marked.]
Figure 1: Job times for 3000 AWS Lambda nodes, where the median job time is around 40 seconds, around 5% of the nodes take 100 seconds, and two nodes take as much as 375 seconds to complete the same job.
multiplication, being embarrassingly parallel and frequently encountered, is a natural fit for serverless computing.
Existing distributed algorithms for HPC cannot, in general, be extended to serverless computing due to the following crucial differences between the two architectures:
• Workers in the serverless setting, unlike cluster nodes, do not communicate amongst themselves. They read/write data directly from/to a single data storage entity (for example, cloud storage like AWS S3), and the user is only allowed to submit prespecified jobs and does not have any control over the management of workers [4], [5].
• Distributed computation in HPC/server-based systems is generally limited by the number of workers at one's disposal. In serverless systems, however, the number of inexpensive workers can easily be scaled into the thousands, but these commodity nodes are generally limited in available memory and lifespan.
• Unlike in HPC, nodes in cloud-based systems suffer degradation due to system noise [6], [7]. This causes variability in job times, resulting in a subset of slower nodes called stragglers. Worker job-time statistics for AWS Lambda are plotted in Figure 1. Notably, a few workers (∼5%) take much longer than the median job time, decreasing the overall computational efficiency of the system. Distributed algorithms robust to such unreliable nodes are desirable in cloud computing.
A. Main Contributions
This paper bridges the gap between communication-efficient algorithms for distributed computation and existing methods for straggler resiliency. To this end, we first analyze the monetary cost of distributed matrix multiplication in serverless computing for two different schemes of partitioning and distributing the data. Specifically, we show that row-column partitioning of input matrices requires asymptotically more communication than blocked partitioning for distributed matrix multiplication, similar to the optimal communication-avoiding algorithms in the HPC literature.

2018 IEEE International Conference on Big Data (Big Data)
978-1-5386-5035-6/18/$31.00 ©2018 IEEE
In applications like machine learning, where the data itself is noisy, solution accuracy is often traded for computational efficiency. Motivated by this, we propose OverSketch, a sketching scheme that performs blocked approximate matrix multiplication, and we prove statistical guarantees on the accuracy of the result. OverSketch has threefold advantages:
1) Reduced computational complexity, by significantly decreasing the dimension of input matrices using sketching,
2) Resiliency against stragglers and faults in serverless computing, by over-provisioning the sketch dimension,
3) Communication efficiency for distributed multiplication, due to the blocked partitioning of input matrices.
Sketching for OverSketch requires linear time and is embarrassingly parallel. Through experiments on AWS Lambda, we show that a small amount of redundancy (≈ 5%) is enough to tackle stragglers using OverSketch. Furthermore, we use OverSketch to calculate the Hessian distributedly while solving a large linear program using interior-point methods, demonstrating a 34% reduction in total compute time on AWS Lambda.
B. Related Work
Recently, approaches based on coding theory have been developed that cleverly introduce redundancy into the computation to deal with stragglers [8]–[13]. Many of these proposed schemes are dedicated to distributed matrix multiplication [9]–[12]. In [9], the authors develop a coding scheme for matrix multiplication that uses Maximum Distance Separable (MDS) codes to code A in a column-wise fashion and B in a row-wise fashion, so that the result is a product code of C, where C = A × B. An illustration is shown in Figure 2. The authors in [10] generalize the results of [9] to a d-dimensional product code with only one parity in each dimension. In [11], the authors develop polynomial codes for matrix multiplication, which improve over [9] in terms of the recovery threshold, that is, the minimum number of workers required to recover the product C.
The commonality in these and other similar results is that they divide the input matrices into row and column blocks, where each worker multiplies a row block (or some combination of row blocks) of A and a column block (or some combination of column blocks) of B. These methods provide straggler resiliency but are not cost-efficient, as they require asymptotically more communication than blocked partitioning of the data, as discussed in detail in the next section. Another disadvantage of such coding-based methods is that they have separate encoding and decoding phases that require
Figure 2: Matrix A is divided into 2 row chunks A1 and A2, while B is divided into two column chunks B1 and B2. During the encoding process, redundant chunks A1 + A2 and B1 + B2 are created. To compute C, 9 workers store each possible combination of a chunk of A and B and multiply them. During the decoding phase, the master can recover the affected data (C11, C12, and C22 in this case) using the redundant chunks.
[Figure 3 graphics omitted: block diagrams of the two partitioning schemes for A × B = C.]
(a) Distributed naive matrix multiplication, where each worker multiplies a row-block of A of size a × n and a column-block of B of size n × a to get an a × a block of C.
(b) Distributed blocked matrix multiplication, where each worker multiplies a sub-block of A of size b × b and a sub-block of B of size b × b.
Figure 3: An illustration of two algorithms for distributed matrix multiplication.
additional communication and a potentially large computational burden at the master node, which may make the algorithm infeasible in some distributed computing environments.
II. PRELIMINARIES
There are two common schemes for the distributed multiplication of two matrices A ∈ R^{m×n} and B ∈ R^{n×l}, as illustrated in Figures 3a and 3b. We refer to these schemes as naive and blocked matrix multiplication, respectively. For the detailed steps of naive and blocked matrix multiplication in the serverless setting, we refer the reader to Algorithms 1 and 2 in [14], respectively. During naive matrix multiplication, each worker receives and multiplies an a × n row-block of A and an n × a column-block of B to compute an a × a block of C. Blocked matrix multiplication consists of two phases. During the computation phase, each worker gets two b × b blocks, one each from A and B, which it multiplies. In the reduction phase, to compute a b × b block of C, one worker gathers from cloud storage the results of all n/b workers corresponding to one row-block of A and one column-block of B and adds them. For example, in Figure 3b, to get C(1, 1), results from 3
workers who compute A(1, 1) × B(1, 1), A(1, 2) × B(2, 1), and A(1, 3) × B(3, 1) are added.
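As a concrete (non-distributed) sketch of these two phases, the computation and reduction steps of blocked multiplication can be simulated in a few lines of numpy. Each innermost update below corresponds to the job of one serverless worker, and the accumulation plays the role of the reduction phase; the function and variable names are ours, for illustration only:

```python
import numpy as np

def blocked_matmul(A, B, b):
    """Simulate blocked matrix multiplication with b x b blocks (Fig. 3b)."""
    m, n = A.shape
    l = B.shape[1]
    assert m % b == 0 and n % b == 0 and l % b == 0
    C = np.zeros((m, l))
    for i in range(m // b):
        for j in range(l // b):
            # Reduction phase: one worker sums the n/b partial products
            # for block C(i, j) read back from cloud storage.
            for k in range(n // b):
                # Computation phase: one worker multiplies the b x b blocks
                # A(i, k) and B(k, j) and writes the partial product.
                C[i*b:(i+1)*b, j*b:(j+1)*b] += (
                    A[i*b:(i+1)*b, k*b:(k+1)*b] @ B[k*b:(k+1)*b, j*b:(j+1)*b])
    return C
```

Each worker touches only 2b² words of input while performing O(b³) work, which is the source of the communication savings analyzed in Section III.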
It is accepted in HPC that blocked partitioning of input matrices takes less time than naive matrix multiplication [1], [2], [15]. For example, in [2], the authors propose 2.5D matrix multiplication, an optimal communication-avoiding algorithm for matrix multiplication in HPC/server-based computing, which divides input matrices into blocks and stores redundant copies of them across processors to reduce bandwidth and latency costs. However, perhaps due to the lack of a proper cost analysis for cloud-based distributed computing, existing algorithms for straggler mitigation in the cloud perform naive matrix multiplication [9]–[11]. Next, we develop a new cost model for the serverless computing architecture and aim to bridge the gap between cost analysis and straggler mitigation for distributed computation in the serverless setting.
III. COST ANALYSIS: NAIVE AND BLOCKED MULTIPLICATION
There are communication and computation costs associated with any distributed algorithm. Communication costs are themselves of two types: latency and bandwidth. For example, sending n words requires packing them into contiguous memory and transmitting them as a message. The latency cost α is the fixed overhead time spent in packing and transmitting a message over the network; thus, to send Q messages, the total latency cost is αQ. Similarly, transmitting K words incurs a bandwidth cost proportional to K, given by βK. Letting γ denote the time to perform one floating point operation (FLOP), the total computing cost is γF, where F is the total number of FLOPs at the node. Hence, the total time pertaining to one node that sends Q messages, transmits K words, and performs F FLOPs is
T_worker = αQ + βK + γF,
where α ≫ β ≫ γ. The (α, β, γ) model defined above has been well studied and is used extensively in the HPC literature [1], [2]. It is ideally suited for serverless computing, where network topology does not affect latency costs, as each worker reads/writes directly from/to cloud storage and no multicast gains are possible.
However, our analysis of the costs incurred during distributed matrix multiplication differs from previous works in three principal ways. 1) Workers in the serverless architecture cannot communicate amongst themselves; hence, our algorithm for blocked multiplication is very different from the optimal communication-avoiding algorithm for HPC, which involves message passing between workers [2]. 2) The number of workers in HPC analyses is generally fixed, whereas the number of workers in the serverless setting is quite flexible, easily scaling into the thousands, with the limiting factor being the memory/bandwidth available at each node. 3) Computation on the inexpensive cloud is motivated more by savings in
Table I: Cost comparison for naive and blocked matrix multiplication in the serverless setting, where δ < 2.

Cost type   | Naive multiply    | Blocked multiply  | Ratio: naive/blocked
Latency     | O(ml·n^{2(1−δ)})  | O(ml·n^{1−3δ/2})  | O(n^{1−δ/2})
Bandwidth   | O(ml·n^{2−δ})     | O(ml·n^{1−δ/2})   | O(n^{1−δ/2})
Computation | O(mln)            | O(mln)            | 1
expenditure than the time required to run the algorithm. Wedefine our cost function below.
If there are W workers, each doing an equal amount of work, the total amount of money spent running the distributed algorithm on the cloud is proportional to
C_total = W × T_worker = W(αQ + βK + γF). (1)
Eq. (1) does not take straggling costs into account, as they increase the total cost by a constant factor (by re-running the jobs that are straggling) and hence do not affect our asymptotic analysis.
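Eq. (1) is straightforward to encode once Q, K, and F are known for a scheme. The helper below is a hypothetical illustration of the model (the default constants are placeholders we chose to satisfy α ≫ β ≫ γ, not measured AWS values):

```python
def worker_time(Q, K, F, alpha=1e-2, beta=1e-7, gamma=1e-10):
    """T_worker = alpha*Q + beta*K + gamma*F, with alpha >> beta >> gamma.
    Q: messages sent, K: words transmitted, F: FLOPs performed."""
    return alpha * Q + beta * K + gamma * F

def total_cost(W, Q, K, F, **rates):
    """Eq. (1): total expenditure is proportional to W * T_worker."""
    return W * worker_time(Q, K, F, **rates)
```

Plugging in the per-worker Q, K, and F of the naive and blocked schemes reproduces the asymptotic comparison of Table I.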
Inexpensive nodes in serverless computing are generally constrained by the amount of memory or communication bandwidth available. For example, AWS Lambda nodes have a maximum allocated memory of 3008 MB², a fraction of the memory available in today's smartphones. Let the memory available at each node be limited to M words. That is, the communication bandwidth available at each worker is limited to M words, and this is the main bottleneck of the distributed system. We would like to multiply two large matrices A ∈ R^{m×n} and B ∈ R^{n×l} in parallel, and let M = O(n^δ). For all practical cases in consideration, δ < 2.
Proposition 3.1: For the cost model defined in Eq. (1), the communication (i.e., latency and bandwidth) costs of blocked multiplication outperform those of naive multiplication by a factor of O(n^{1−δ/2}); the individual costs are listed in Table I.
We refer the reader to Appendix A of [14] for the proof. The rightmost column in Table I lists the ratio of communication costs for naive and blocked matrix multiplication. We note that the latter significantly outperforms the former, with communication costs being asymptotically worse for naive multiplication. The intuition is that each worker in distributed blocked multiplication does more work than in distributed naive multiplication for the same amount of received data. For example, to multiply two square matrices of dimension n with memory at each worker limited by M = 2n, we have a = 1 for naive multiplication and b = √n for blocked multiplication. The amount of work done by each worker in naive and blocked multiplication is then O(n) and O(n^{3/2}), respectively. Since the total amount of work is constant and equal to O(n³), blocked matrix multiplication ends up communicating less during the overall execution of the algorithm, as it requires fewer workers. Note
²AWS Lambda limits are available at (and may change over time) https://docs.aws.amazon.com/lambda/latest/dg/limits.html
[Figure 4 graphic omitted: average cost in dollars versus matrix dimension n for blocked and naive multiply.]
Figure 4: Comparison of AWS Lambda costs for multiplying two n × n matrices, where each worker is limited to 3008 MB of memory and the price per running worker per 100 milliseconds is $0.000004897.
that naive multiplication takes less time to complete, as each worker does asymptotically less work; however, the number of workers required is asymptotically larger, which is not an efficient use of resources and increases the expenditure significantly.
Figure 4 supports the above analysis: we plot the cost in dollars of multiplying two square matrices on AWS Lambda, where each node's memory is limited to 3008 MB and the price per worker per 100 milliseconds is $0.000004897. However, as discussed earlier, existing schemes for straggler resiliency in distributed matrix multiplication consider naive multiplication, which is impractical from a user's point of view. In the next section, we propose OverSketch, a scheme to mitigate the detrimental effects of stragglers in blocked matrix multiplication.
IV. OVERSKETCH: STRAGGLER-RESILIENT BLOCKED MATRIX MULTIPLICATION USING SKETCHING
Many of the recent advances in algorithms for numerical linear algebra have come from the technique of linear sketching, in which a given matrix is compressed by multiplying it with a random matrix of appropriate dimension. The resulting product can then act as a proxy for the original matrix in expensive computations such as matrix multiplication, least-squares regression, low-rank approximation, etc. [16], [17]. For example, computing the product of A ∈ R^{m×n} and B ∈ R^{n×l} takes O(mnl) time. However, if we use S ∈ R^{n×d} to compute the sketches, say Ã = AS ∈ R^{m×d} and B̃ = S^T B ∈ R^{d×l}, where d ≪ n is the sketch dimension, we can reduce the computation time to O(mdl) by computing the approximate product A S S^T B. This is very useful in applications like machine learning, where the data itself is noisy and computing the exact result is not needed.
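As a minimal numpy sketch of this idea (using a Gaussian sketching matrix for brevity; the construction actually used by OverSketch, introduced in Section IV-A, is count-sketch-based), the dimensions and the approximation work out as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, l, d = 60, 2000, 40, 400               # sketch dimension d << n
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, l))
S = rng.standard_normal((n, d)) / np.sqrt(d)  # E[S S^T] = I_n
A_sk, B_sk = A @ S, S.T @ B                   # sketches: m x d and d x l
C_exact = A @ B                               # O(mnl) time
C_approx = A_sk @ B_sk                        # O(mdl) time after sketching
rel_err = (np.linalg.norm(C_exact - C_approx)
           / (np.linalg.norm(A) * np.linalg.norm(B)))
```

Here `rel_err` is the Frobenius-norm error normalized by ||A||_F ||B||_F, the quantity bounded in Definition 4.1 below; it shrinks as d grows.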
Key idea behind OverSketch: Sketching accelerates computation by eliminating redundancy in the matrix structure through dimension reduction. However, the coding-based approaches described in Section I-B have shown that redundancy can be good for combating stragglers if judiciously introduced into the computation. With these competing points of view in mind, our algorithm OverSketch works by "oversketching" the matrices to be multiplied, reducing dimensionality not to the minimum required for
[Figure 5 graphic omitted: blocked multiplication of sketched matrices Ã × B̃ ≈ C, with workers W1–W16 assigned to the partial products of the four blocks of C.]
Figure 5: An illustration of the multiplication of an m × z matrix Ã and a z × l matrix B̃, where z = d + b ensures resiliency against one straggler per block of C, and d is chosen by the user to guarantee a desired accuracy. Here, m = l = 2b and d = 3b, where b is the block size for blocked matrix multiplication. This scheme ensures one worker can be ignored while calculating each block of C.
sketching accuracy, but rather to a slightly higher amount that simultaneously ensures both the accuracy guarantees and speedups of sketching and the straggler resilience afforded by the redundancy that was not eliminated in the sketch. OverSketch further reduces asymptotic costs by adopting the idea of block partitioning from HPC, suitably adapted to a serverless architecture.
Next, we propose a sketching scheme for OverSketch and describe the process of straggler mitigation in detail.
A. OverSketch: The Algorithm
During blocked matrix multiplication, the (i, j)-th block of C is computed by assimilating results from d/b workers who compute the products Ã(i, k) × B̃(k, j), for k = 1, …, d/b. Thus, the computation of C(i, j) can be viewed as the product of the row sub-block Ã(i, :) ∈ R^{b×d} of Ã and the column sub-block B̃(:, j) ∈ R^{d×b} of B̃. An illustration is shown in Figure 5. Assuming d is large enough to guarantee the required accuracy in C, we increase the sketch dimension from d to z = d + eb, where e is the worst-case number of stragglers among N = d/b workers. For the example in Figure 5, e = 1. To get a better sense of e, we observe in our experiments on cloud systems like AWS Lambda and EC2 that the number of stragglers is < 5% in most runs. Thus, if N = d/b = 40, i.e., 40 workers compute one block of C, then e ≈ 2 is sufficient to get similar accuracy for matrix multiplication. Next, we describe how to compute the sketched matrices Ã and B̃.
Many sketching techniques have been proposed recently for approximate matrix computations. For example, to sketch an m × n matrix A with sketch dimension d, Gaussian projection takes O(mnd) time, the Subsampled Randomized Hadamard Transform (SRHT) takes O(mn log n) time, and count sketch takes O(nnz(A)) time, where nnz(·) denotes the number of non-zero entries [17], [18]. Count sketch is one of the most popular sketching techniques, as it computes the matrix sketch in linear time with similar approximation guarantees.
Algorithm 1: OverSketch: Distributed blocked matrix multiplication for the Cloud

Input: Matrices A ∈ R^{m×n} and B ∈ R^{n×l}, sketch dimension z, straggler tolerance e
Result: C ≈ A × B
1 Sketching: Obtain Ã = AS and B̃ = S^T B using S ∈ R^{n×z} from (3), in a distributed fashion
2 Block partitioning: Divide Ã into an m/b × z/b grid and B̃ into a z/b × l/b grid of b × b blocks, where b is the block size
3 Computation phase: Multiply Ã and B̃ using blocked partitioning. This step invokes mlz/b³ workers, where z/b workers are used per block of C
4 Termination: Stop computation when any d/b workers have returned their results for each of the ml/b² blocks of C, where d = z − eb
5 Reduction phase: Invoke ml/b² workers, one per block of C, to reduce the computed results
To compute the count sketch of A ∈ R^{m×n} with sketch dimension b, each column of A is multiplied by −1 with probability 0.5 and then mapped to an integer sampled uniformly from {1, 2, …, b}. Then, to compute the sketch A_c = A S_c, columns with the same mapped value are summed. An example of a count sketch matrix with n = 9 and b = 3 is

S_c^T = [ 0  0  0  1 −1  0 −1  0  0
          1 −1  0  0  0  1  0  0  0
          0  0  1  0  0  0  0 −1 −1 ].  (2)
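A matrix of this form can be generated in linear time from one bucket hash and one sign flip per column. The following is an illustrative implementation (our own, not the paper's code):

```python
import numpy as np

def count_sketch_matrix(n, b, rng):
    """Count sketch S_c in R^{n x b}: row i has a single +/-1 entry in a
    uniformly random column (bucket). A @ S_c then sums the signed columns
    of A that share a bucket, in O(nnz(A)) time."""
    buckets = rng.integers(0, b, size=n)      # uniform bucket per column of A
    signs = rng.choice([-1.0, 1.0], size=n)   # multiply by -1 w.p. 0.5
    S = np.zeros((n, b))
    S[np.arange(n), buckets] = signs
    return S
```

Each row of the returned matrix has exactly one nonzero entry, which is what makes the sketch computable in a single pass over the nonzeros of A.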
The sparse structure of S_c ensures that computing the sketch takes O(nnz(A)) time. However, a drawback of this desirable sparse structure is that it cannot be directly employed for straggler mitigation in blocked matrix multiplication, as ignoring a straggler would imply a complete loss of information from a subset of columns of A. We refer the reader to [14] for a detailed example.
To facilitate straggler mitigation for blocked matrix multiplication, we propose a new sketch matrix S, inspired by count sketch, defined as

S = (1/√N) (S_1, S_2, …, S_{N+e}),  (3)
where N = d/b, e is the expected number of stragglers per block of C, and S_i ∈ R^{n×b}, for i = 1, 2, …, (N + e), is a count sketch matrix with dimension b. Thus, the total sketch dimension of the sketch matrix in (3) is z = (N + e)b = d + eb. Computing this sketch takes O(nnz(A)(N + e)) time and can trivially be implemented in a distributed fashion, where (N + e), the number of workers per block of C, is a constant less than 30 in most practical cases. We describe OverSketch in detail in Algorithm 1. Next, we prove statistical guarantees on the accuracy of our sketching-based matrix multiplication algorithm.
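The construction in (3) can be sketched as follows (names are ours, for illustration). The key property is that dropping any e of the N + e column-blocks, i.e., ignoring e stragglers, still leaves N independent count sketches scaled by 1/√N, so the surviving product remains an accurate estimate of AB:

```python
import numpy as np

def oversketch_matrix(n, b, N, e, rng):
    """S = (1/sqrt(N)) [S_1, ..., S_{N+e}], each S_i an n x b count sketch."""
    blocks = []
    for _ in range(N + e):
        buckets = rng.integers(0, b, size=n)
        signs = rng.choice([-1.0, 1.0], size=n)
        Si = np.zeros((n, b))
        Si[np.arange(n), buckets] = signs
        blocks.append(Si)
    return np.hstack(blocks) / np.sqrt(N)

rng = np.random.default_rng(0)
n, b, N, e = 1000, 100, 10, 2            # z = (N + e) b = d + e b
S = oversketch_matrix(n, b, N, e, rng)
A = rng.standard_normal((40, n))
B = rng.standard_normal((n, 30))
S_kept = S[:, :N * b]                    # ignore e straggling column-blocks
C_approx = (A @ S_kept) @ (S_kept.T @ B)
err = (np.linalg.norm(A @ B - C_approx)
       / (np.linalg.norm(A) * np.linalg.norm(B)))
```

Note that with the 1/√N scaling, the kept blocks satisfy E[S_kept S_kept^T] = I_n, so the estimate after ignoring e stragglers is unbiased.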
[Figure 6 graphics omitted: two panels plotting time and error against e.]
(a) Time statistics for OverSketch on AWS Lambda for the straggler profile in Figure 1.
(b) Frobenius-norm error for the sketched matrix product.
Figure 6: Time and approximation error for OverSketch with 3000 workers when e, the number of workers ignored per block of C, is varied from 0 to 10.
B. OverSketch: Approximation guarantees
Definition 4.1: We say that an approximate matrix multiplication of two matrices A and B using sketch S, given by A S S^T B, is (ε, θ)-accurate if, with probability at least (1 − θ), it satisfies

||AB − A S S^T B||_F² ≤ ε ||A||_F² ||B||_F².
Now, for blocked matrix multiplication using OverSketch, as illustrated in Figure 5, the following holds.
Theorem 4.1: Computing (AS) × (S^T B) using the sketch S ∈ R^{n×z} in (3) with d = 2/(εθ), while ignoring e stragglers among any z/b workers, is (ε, θ)-accurate.
We refer the reader to Appendix B of [14] for the proof.
V. EXPERIMENTAL RESULTS
A. Blocked Matrix Multiplication on AWS Lambda
We implement the straggler-resilient blocked matrix multiplication described above in the serverless computing platform Pywren [4], [5]³, on the AWS Lambda cloud system, to compute an approximate C = AS × S^T B with b = 2048, m = l = 10b, n = 60b, and S as defined in (3) with sketch dimension z = 30b. Throughout this experiment, we take A and B to be constant matrices, where the entries of A are given by A(x, y) = x + y for all x ∈ [1, m] and y ∈ [1, n], and B = A^T. Thus, to compute the (i, j)-th b × b block of C, 30 nodes compute the product of Ã(i, :) and B̃(:, j), where Ã = AS and B̃ = S^T B. While collecting results, we ignore e workers for each block of C, where e is varied from 0 to 10.
The time statistics are plotted in Figure 6a. The corresponding worker job times are shown in Figure 1, where the median job time is around 42 seconds, some stragglers return their results around 100 seconds, and some others take up to 375 seconds. We note that the compute time for matrix multiplication reduces by a factor of 9 if we ignore at most 4 workers per 30 workers that compute a block of
³A working implementation of OverSketch is available at https://github.com/vvipgupta/OverSketch
[Figure 7 graphics omitted: three panels of time and error versus iterations for e = 0 to 4.]
(a) Total time (i.e., invocation time plus computation time) versus number of iterations.
(b) Computation time versus number of iterations. When e = 1, the algorithm takes 7 minutes less than when e = 0.
(c) Percentage error versus iterations. Ignoring one worker per block of C has a negligible effect on convergence.
Figure 7: Time statistics and optimality gap on AWS Lambda while solving the LP in (4) using interior-point methods, where e is the number of workers ignored per block of C.
C. In Figure 6b, for the same A and B, we plot the average error in matrix multiplication, generating ten instances of sketches and averaging the Frobenius-norm error, ||AB − A S S^T B||_F / ||AB||_F, across instances. We see that the average error is only 0.8% when 4 workers are ignored.
B. Solving Optimization Problems with Sketched Matrix Multiplication
Matrix multiplication is the bottleneck of many optimization problems. Thus, sketching has been applied to solve several fairly common optimization problems using second-order methods, such as linear programs, maximum-likelihood estimation, generalized linear models like least squares and logistic regression, semidefinite programs, support vector machines, and kernel ridge regression, with essentially the same convergence guarantees as exact matrix multiplication (see [19], [20], for example). As an instance, we solve the following linear program (LP) using interior-point methods on AWS Lambda:
minimize_{Ax ≤ b} c^T x,  (4)

where x ∈ R^{m×1}, c ∈ R^{m×1}, b ∈ R^{n×1}, and A ∈ R^{n×m} is the constraint matrix with n > m. To solve (4) using the logarithmic barrier method, we solve the following sequence of problems using Newton's method:

min_{x ∈ R^m} f(x) = min_{x ∈ R^m} ( τ c^T x − Σ_{i=1}^{n} log(b_i − a_i x) ),  (5)
where a_i is the i-th row of A, τ is increased geometrically as τ = 2τ after every 10 iterations, and the total number of iterations is 100. The update in the t-th iteration is given by x_{t+1} = x_t − η(∇²f(x_t))^{−1} ∇f(x_t), where x_t is the estimate of the solution in the t-th iteration and η is an appropriate step size. The gradient and Hessian of the objective in (5)
are given by

∇f(x) = τc + Σ_{i=1}^{n} a_i^T / (b_i − a_i x)  (6)

and

∇²f(x) = A^T diag( 1/(b_i − a_i x)² ) A,  (7)

respectively. The square root of the Hessian is given by ∇²f(x)^{1/2} = diag( 1/|b_i − a_i x| ) A. The computation of the Hessian requires O(nm²) time and is the bottleneck of each iteration. Thus, we use our distributed, sketching-based blocked matrix multiplication scheme to mitigate stragglers while evaluating the Hessian approximately, i.e., ∇²f(x) ≈ (S^T ∇²f(x)^{1/2})^T × (S^T ∇²f(x)^{1/2}), on AWS Lambda, where S is defined in (3).
We take the block size b to be 1000, the dimensions of A to be n = 40b and m = 5b, and the sketch dimension to be z = 20b. We use a total of 500 workers in each iteration. Thus, to compute each b × b block of C, 20 workers are assigned to multiply two b × b blocks. We depict the time and error versus iterations in Figure 7, plotting results for different values of e, where e is the number of workers ignored per block of C. In our experiments, each iteration includes around 9 seconds of invocation time to launch AWS Lambda workers and assign tasks. In Figure 7a, we plot the total time, which includes the invocation time and computation time, versus iterations. In Figure 7b, we exclude the invocation time and plot just the compute time in each iteration, observing 34% savings in solving (4) when e = 1, while the effect on the error with respect to the optimal solution is insignificant (as shown in Figure 7c).
C. Comparison with Existing Straggler Mitigation Schemes
In this section, we compare OverSketch with an existing coding-theory-based straggler mitigation scheme described in [9]; an illustration of [9] is shown in Figure 2. We multiply two square matrices A and B of dimension n on AWS
[Figure 8 graphics omitted: two panels plotting workers employed and average cost against matrix dimension n.]
(a) Number of workers required for OverSketch and [9].
(b) Cost of distributed matrix multiplication for OverSketch and [9].
Figure 8: Comparison of OverSketch with the coding-theory-based scheme in [9] on AWS Lambda. OverSketch requires asymptotically fewer workers, which translates to significant savings in cost.
Lambda using the two schemes, where A(x, y) = x + y and B(x, y) = x × y for all x, y ∈ [1, n]. We limit the bandwidth of each worker to 400 MB (i.e., around 48 million entries, where each entry takes 8 bytes) for a fair comparison. Thus, we have 3b² = 48 × 10⁶, or b = 4000, for OverSketch, and 2an + a² = 48 × 10⁶ for [9], where a is the size of the row-block of A (and column-block of B). We vary the matrix dimension n from 6b = 24000 to 14b = 56000. For OverSketch, we take the sketch dimension z to be n/2 + b and take e = 1, i.e., we ignore one straggler per block of C. For straggler mitigation in [9], we add one parity row to A and one parity column to B. In Figures 8a and 8b, we compare the number of workers required and the average cost in dollars, respectively, for the two schemes. We note that OverSketch requires asymptotically fewer workers, which translates into a lower cost of matrix multiplication. This is because the running time at each worker is heavily dependent on communication, which is the same for both schemes. For n = 20000, the average error in Frobenius norm for OverSketch is less than 2%, and it decreases as n is increased.
The scheme in [9] requires an additional decoding phase and assumes the existence of a powerful master that can store the entire product C in memory and decode the missing blocks using the redundant chunks. This is also true for the other schemes in [10]–[12]. Moreover, these schemes fail when the number of stragglers exceeds the provisioned redundancy, while OverSketch degrades gracefully: one can ignore more workers than provisioned, at the cost of accuracy of the product.
REFERENCES
[1] G. Ballard, E. Carson, J. Demmel, M. Hoemmen, N. Knight, and O. Schwartz, "Communication lower bounds and optimal algorithms for numerical linear algebra," Acta Numerica, vol. 23, pp. 1–155, 2014.
[2] E. Solomonik and J. Demmel, "Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms," in Proceedings of the 17th International Conference on Parallel Processing, 2011, pp. 90–109.
[3] I. Baldini, P. Castro, K. Chang, P. Cheng, S. Fink, V. Ishakian, N. Mitchell, V. Muthusamy, R. Rabbah, A. Slominski, and P. Suter, Serverless Computing: Current Trends and Open Problems. Springer Singapore, 2017.
[4] E. Jonas, Q. Pu, S. Venkataraman, I. Stoica, and B. Recht, "Occupy the cloud: distributed computing for the 99%," in Proceedings of the 2017 Symposium on Cloud Computing. ACM, 2017, pp. 445–451.
[5] V. Shankar, K. Krauth, Q. Pu, E. Jonas, S. Venkataraman, I. Stoica, B. Recht, and J. Ragan-Kelley, "numpywren: serverless linear algebra," ArXiv e-prints, Oct. 2018.
[6] J. Dean and L. A. Barroso, "The tail at scale," Commun. ACM, vol. 56, no. 2, pp. 74–80, Feb. 2013.
[7] T. Hoefler, T. Schneider, and A. Lumsdaine, "Characterizing the influence of system noise on large-scale applications by simulation," in Proc. of the ACM/IEEE Int. Conf. for High Perf. Comp., Networking, Storage and Analysis, 2010, pp. 1–11.
[8] K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, "Speeding up distributed machine learning using codes," IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1514–1529, 2018.
[9] K. Lee, C. Suh, and K. Ramchandran, "High-dimensional coded matrix multiplication," in IEEE Int. Symp. on Information Theory (ISIT), 2017, pp. 2418–2422.
[10] T. Baharav, K. Lee, O. Ocal, and K. Ramchandran, "Straggler-proofing massive-scale distributed matrix multiplication with d-dimensional product codes," in IEEE Int. Symp. on Information Theory (ISIT), 2018.
[11] Q. Yu, M. Maddah-Ali, and S. Avestimehr, "Polynomial codes: an optimal design for high-dimensional coded matrix multiplication," in Adv. in Neural Inf. Processing Systems, 2017, pp. 4403–4413.
[12] S. Dutta, M. Fahim, F. Haddadpour, H. Jeong, V. Cadambe, and P. Grover, "On the optimal recovery threshold of coded matrix multiplication," arXiv preprint arXiv:1801.10292, 2018.
[13] J. Zhu, Y. Pu, V. Gupta, C. Tomlin, and K. Ramchandran, "A sequential approximation framework for coded distributed optimization," in Annual Allerton Conf. on Communication, Control, and Computing, 2017, pp. 1240–1247.
[14] V. Gupta, S. Wang, T. Courtade, and K. Ramchandran, "OverSketch: Approximate Matrix Multiplication for the Cloud," ArXiv e-prints, Nov. 2018.
[15] R. A. van de Geijn and J. Watts, "SUMMA: scalable universal matrix multiplication algorithm," Concurrency: Practice and Experience, vol. 9, pp. 255–274, 1997.
[16] P. Drineas, R. Kannan, and M. W. Mahoney, "Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication," SIAM Journal on Computing, vol. 36, no. 1, pp. 132–157, 2006.
[17] D. P. Woodruff, "Sketching as a tool for numerical linear algebra," Found. Trends Theor. Comput. Sci., vol. 10, pp. 1–157, 2014.
[18] S. Wang, "A practical guide to randomized matrix computations with MATLAB implementations," arXiv:1505.07570, 2015.
[19] M. Pilanci and M. J. Wainwright, "Newton sketch: A near linear-time optimization algorithm with linear-quadratic convergence," SIAM Jour. on Opt., vol. 27, pp. 205–245, 2017.
[20] Y. Yang, M. Pilanci, and M. J. Wainwright, "Randomized sketches for kernels: Fast and optimal non-parametric regression," stat, vol. 1050, pp. 1–25, 2015.