CS506/606: Problem Solving with Large Clusters. Zak Shafran, Richard Sproat. Spring 2011: Introduction.

TRANSCRIPT

  • Slide 1
  • CS506/606: Problem Solving with Large Clusters. Zak Shafran, Richard Sproat. Spring 2011: Introduction. URL: http://www.csee.ogi.edu/~zak/cs506-pslc/
  • Slide 2
  • Purpose of Course: This course aims to provide theoretical foundations and practical experience in distributed algorithms. Examples will be drawn from speech and language processing, machine learning, optimization, and graph theory. Though we will make heavy use of MapReduce and Hadoop, this is not a course on Hadoop.
  • Slide 3
  • Structure of Course: Introductory lectures. Reading discussions: students will take turns presenting papers and will be responsible for up to 2 papers. Homework assignments. In-class discussion of assignment solutions by students, and in-class laboratory projects. Course project: there will be no final exam; instead, the course requires a final project of interest to the student, chosen in consultation with the instructor. The project requires a written report and a final presentation.
  • Slide 4
  • MapReduce: How is Condor different from MapReduce? Condor (and qsub, and their kin) is a system for parallelizing serial programs: it makes no assumptions about the input-output behavior of the programs, nor does it directly support combination of the outputs. The user decides how to split up the task vis-à-vis the input data.
  • Slide 5
  • MapReduce: MapReduce provides a framework whereby data are first processed by multiple instances of a mapper. The system decides how data are assigned to mappers. The output of a mapper is a set of (key, value) pairs, which are then passed to multiple instances of a reducer, which aggregate the results of the mappers.
  • Slide 6
  • MapReduce Details. Note: unless otherwise noted, all figures are from Jimmy Lin & Chris Dyer, Data-Intensive Text Processing with MapReduce, Morgan & Claypool, 2010.
  • Slide 7
  • Working Assumptions: Assume failures are common. Move processing to the data. Process data sequentially; avoid random access. Hide system-level details from the application developer. Seamless scalability.
  • Slide 8
  • Functional Programming: Map and Fold
  • Slide 9
  • Functional Programming in Lisp, map and fold:
        > (defun square (n) (* n n))
        SQUARE
        > (defun sum (n1 n2) (+ n1 n2))
        SUM
        > (reduce 'sum (map 'list 'square '(1 2 3)))
        14
  • Slide 10
  • MapReduce: Mapper and reducer have the signatures map: (k1, v1) → list(k2, v2) and reduce: (k2, list(v2)) → list(k3, v3). Mappers emit key-value pairs in parallel. Output of mappers is shuffled and sorted by keys. Tuples with the same keys are passed to the same reducer. Reducers output lists of key-value pairs.
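
The whole dataflow can be simulated in a few lines of plain Python. This is only an illustrative, single-process sketch (no Hadoop; the helper name run_mapreduce is made up), but it shows where the map, shuffle/sort, and reduce stages fit:

    from itertools import groupby
    from operator import itemgetter

    def run_mapreduce(records, mapper, reducer):
        # map phase: apply the mapper to every input record, collecting (key, value) pairs
        intermediate = [kv for record in records for kv in mapper(record)]
        # shuffle/sort phase: sort by key so all values for one key become adjacent
        intermediate.sort(key=itemgetter(0))
        # reduce phase: each key and its list of values go to one reducer call
        output = []
        for key, group in groupby(intermediate, key=itemgetter(0)):
            output.extend(reducer(key, [value for _, value in group]))
        return output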
  • Slide 11
  • Simplified View of MapReduce
  • Slide 12
  • Simple Word Counter
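
The word counter in the figure can be sketched as a mapper and reducer for the run_mapreduce helper above (a toy illustration, not the slide's Hadoop code):

    def wc_mapper(line):
        # emit (word, 1) for every token in the input line
        for word in line.split():
            yield (word, 1)

    def wc_reducer(word, counts):
        # sum the partial counts for one word
        yield (word, sum(counts))

    # run_mapreduce(["a rose is a rose"], wc_mapper, wc_reducer)
    # -> [('a', 2), ('is', 1), ('rose', 2)]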
  • Slide 13
  • Partitioners and Combiners: Partitioners divide up the intermediate key space and assign keys to reducers. This is commonly done by hashing the key and assigning modulo the number of reducers. For many tasks some reducers may end up getting much more work than others. Why? Combiners are a further optimization that allows local aggregation before the shuffle/sort.
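
The partitioning strategy described above is just a hash modulo the number of reducers; a minimal sketch (crc32 is used only so the assignment is deterministic across runs):

    import zlib

    def partition(key, num_reducers):
        # hash the key and take the modulus: every pair with the same key
        # is assigned to the same reducer
        return zlib.crc32(str(key).encode('utf-8')) % num_reducers

Skew arises because keys are not equally frequent: in word counting, for instance, the reducer that happens to receive "the" does far more work than one that receives only rare words.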
  • Slide 14
  • Fuller View of MapReduce
  • Slide 15
  • Important Points: key-value pairs with the same key will be sent to the same reducer, but there is no guarantee which reducer will be assigned which key; combiners must accept and emit data in the same format as the output of the mapper; there is no guarantee how many times a combiner will run, if at all.
  • Slide 16
  • Programmer has little control over: Where a mapper or reducer runs (i.e., on which node in the cluster). When a mapper or reducer begins or finishes. Which input key-value pairs are processed by a specific mapper. Which intermediate key-value pairs are processed by a specific reducer. (Lin & Dyer, p. 37)
  • Slide 17
  • Programmer can control: The ability to construct complex data structures as keys and values to store and communicate partial results. The ability to execute user-specified initialization code at the beginning of a map or reduce task, and the ability to execute user-specified termination code at the end of a map or reduce task. The ability to preserve state in both mappers and reducers across multiple input or intermediate keys. The ability to control the sort order of intermediate keys, and therefore the order in which a reducer will encounter particular keys. The ability to control the partitioning of the key space, and therefore the set of keys that will be encountered by a particular reducer. (Lin & Dyer, p. 38)
  • Slide 18
  • Word Counting Again. Problem: each word encountered in the collection gets passed across the network to the reducers.
  • Slide 19
  • Mapper-side Aggregation
  • Slide 20
  • Mapper-side aggregation across documents
  • Slide 21
  • Issues with Mapper-side Aggregation: Behavior may depend on the order in which key-value pairs are encountered. There is a scalability bottleneck: one must have enough memory for the data structures that store the counts, and Heaps' law predicts that vocabularies never stop growing. Common work-arounds include flushing the data structures when they grow too large.
  • Slide 22
  • Example with Combiners
  • Slide 23
  • Combiner Implementation: First Version
  • Slide 24
  • Combiner Implementation: Correct Version
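
These two slides reproduce figures from Lin & Dyer; the textbook example of the pitfall is computing a per-key mean, where simply reusing the reducer as a combiner gives wrong answers (a mean of means is not the mean). A sketch of the corrected design, assuming that example: the mapper and combiner pass (sum, count) pairs, which combine associatively, and only the reducer divides:

    def mean_mapper(key, value):
        # emit a (sum, count) pair instead of the raw value
        yield (key, (value, 1))

    def mean_combiner(key, pairs):
        # local aggregation: input and output have the same format as the
        # mapper's output, so running it zero, one, or many times is safe
        total, count = 0, 0
        for s, c in pairs:
            total, count = total + s, count + c
        yield (key, (total, count))

    def mean_reducer(key, pairs):
        total, count = 0, 0
        for s, c in pairs:
            total, count = total + s, count + c
        yield (key, total / count)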
  • Slide 25
  • In-Mapper Combining
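
In-mapper combining folds the combiner's work into the mapper itself, using the ability (slide 17) to preserve state across inputs and to run termination code. A plain-Python sketch of the pattern for word counting (the class and method names are illustrative, not the Hadoop API; the size check is the flush work-around mentioned on the previous slide):

    class InMapperCombiningWordCount:
        def __init__(self, max_entries=100_000):
            # per-mapper state, preserved across input records
            self.counts = {}
            self.max_entries = max_entries

        def map(self, line):
            for word in line.split():
                self.counts[word] = self.counts.get(word, 0) + 1
            if len(self.counts) > self.max_entries:
                # memory bound reached: emit and clear the partial counts
                yield from self.flush()

        def flush(self):
            for word, count in self.counts.items():
                yield (word, count)
            self.counts.clear()

        def close(self):
            # termination hook (cf. Hadoop's cleanup()): emit whatever is left
            yield from self.flush()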
  • Slide 26
  • Word co-occurrences: Pairs
  • Slide 27
  • Word co-occurrences: Stripes
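
The figures on these two slides contrast the pairs and stripes approaches to counting word co-occurrences within a window. A compact sketch of the two mappers and the stripes reducer (tokens is a list of tokens; the window size is illustrative; the pairs reducer is just the word-count reducer, summing the 1s for each (w, u) key):

    def pairs_mapper(tokens, window=2):
        # pairs: emit ((w, u), 1) for every co-occurring pair in the window
        for i, w in enumerate(tokens):
            for u in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
                yield ((w, u), 1)

    def stripes_mapper(tokens, window=2):
        # stripes: emit (w, {u: count}), one associative array per occurrence of w
        for i, w in enumerate(tokens):
            stripe = {}
            for u in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
                stripe[u] = stripe.get(u, 0) + 1
            yield (w, stripe)

    def stripes_reducer(w, stripes):
        # element-wise sum of all the stripes for one term
        total = {}
        for stripe in stripes:
            for u, c in stripe.items():
                total[u] = total.get(u, 0) + c
        yield (w, total)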
  • Slide 28
  • Efficiency Issues
  • Slide 29
  • Efficiency Issues
  • Slide 30
  • Relative Frequencies. Advantage of the stripes approach: counts of all words co-occurring with each target word are in the stripes. A special partitioner is needed for the pairs approach: it must ensure that all of the (w, x) pairs get sent to the same reducer.
  • Slide 31
  • The (w, *) key: order inversion. Insight: convert the sequencing of computations into a sorting problem.
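
A sketch of order inversion for relative frequencies with the pairs approach, assuming the special '*' marker sorts before any real word (which is the point of the trick) and using the partitioner on the left word alone from the previous slide, so the marginal count for w reaches the reducer before any of the (w, u) joint counts:

    def relfreq_mapper(tokens, window=2):
        # for every co-occurring pair, also emit a ((w, '*'), 1) marginal count
        for i, w in enumerate(tokens):
            for u in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
                yield ((w, u), 1)
                yield ((w, '*'), 1)

    def relfreq_partition(key, num_reducers):
        # partition on the left word only: (w, '*') and every (w, u)
        # end up at the same reducer
        w, _ = key
        return hash(w) % num_reducers

    class RelFreqReducer:
        def __init__(self):
            self.marginal = 0   # state preserved across keys at one reducer

        def reduce(self, key, counts):
            w, u = key
            if u == '*':
                self.marginal = sum(counts)   # arrives first, thanks to the sort order
            else:
                yield (key, sum(counts) / self.marginal)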
  • Slide 32
  • Secondary Sorting: Google's MapReduce allows for a secondary sort on values; Hadoop doesn't. Sensor data: emit (sensor + time, value) pairs and use a custom partitioner:
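
This is the value-to-key conversion pattern: the part of the value that should be sorted (the timestamp) moves into the key, the partitioner looks only at the sensor id, and the framework's sort then delivers each sensor's readings to its reducer in time order. A sketch with made-up field names:

    def sensor_mapper(record):
        # record = (sensor_id, timestamp, reading): illustrative fields
        sensor_id, timestamp, reading = record
        # composite key: the shuffle sorts by (sensor_id, timestamp)
        yield ((sensor_id, timestamp), reading)

    def sensor_partition(key, num_reducers):
        # partition on the sensor id alone, so that one reducer sees all of
        # a sensor's readings, already ordered by time
        sensor_id, _ = key
        return hash(sensor_id) % num_reducers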
  • Slide 33
  • Relational Joins. Two relations, S, T:
  • Slide 34
  • Reduce-side Join. One-to-one join. One-to-many join: do the sort and partition before passing to the reducer.
  • Slide 35
  • Reduce-side Join. Many-to-many join. Basic insight: repartition both datasets by the join key. Inefficient, since it requires shuffling both datasets across the network. (Lin & Dyer, p. 62)
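
A sketch of the repartition (reduce-side) join: each mapper tags its tuples with the relation they came from, everything is shuffled on the join key, and the reducer pairs up the two sides for that key. This handles the many-to-many case; the relation names S and T are from the slides, the rest is illustrative:

    def join_mapper_s(row):
        # row from S: (join_key, ...attributes)
        yield (row[0], ('S', row[1:]))

    def join_mapper_t(row):
        # row from T: (join_key, ...attributes)
        yield (row[0], ('T', row[1:]))

    def join_reducer(join_key, tagged_rows):
        # separate the two relations, then emit their cross product for this key
        s_rows = [attrs for tag, attrs in tagged_rows if tag == 'S']
        t_rows = [attrs for tag, attrs in tagged_rows if tag == 'T']
        for s in s_rows:
            for t in t_rows:
                yield (join_key, s + t)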
  • Slide 36
  • Map-side Join: Map over one of the datasets (the larger one) and, inside the mapper, read the corresponding part of the other dataset to perform the merge join (Lin & Dyer, p. 62). No reducer is needed.
  • Slide 37
  • Inverted Indexing: each term is associated with a list of documents and payloads, i.e., information about the occurrences of the term in the document.
  • Slide 38
  • Inverted Indexing
  • Slide 39
  • Illustration of Baseline Algorithm
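
A sketch of the baseline indexer: the mapper emits one (term, (docid, tf)) posting per distinct term in a document, and the reducer buffers and sorts the full postings list for each term in memory, which is exactly the assumption attacked on the next slide (toy Python, not the slide's code):

    from collections import Counter

    def index_mapper(docid, text):
        # one posting per distinct term in the document
        for term, tf in Counter(text.split()).items():
            yield (term, (docid, tf))

    def index_reducer(term, postings):
        # baseline: hold all postings for the term in memory, sorted by docid
        yield (term, sorted(postings))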
  • Slide 40
  • Problems with Baseline: The baseline algorithm assumes all postings associated with the same term can be held in memory. This is not going to work for large sets of documents (e.g., the Web). Instead of emitting (term, (docid, tf)) pairs, we instead emit ((term, docid), tf) pairs, so the framework sorts the postings for us. This requires a custom partitioner to ensure that each term gets sent to the same reducer.
  • Slide 41
  • Scalable Inverted Indexer
  • Slide 42
  • Index Compression. Naïve representation: [(5, 2), (7, 3), (12, 1), (49, 1), (51, 2), ...]. First trick: encode differences (d-gaps): [(5, 2), (2, 3), (5, 1), (37, 1), (2, 2), ...]. D-gaps could be as large as |D| - 1, so we need a method that encodes smaller numbers with less space.
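
The gap encoding itself is one loop; a sketch that reproduces the slide's example (docids must already be sorted):

    def to_dgaps(postings):
        # postings: list of (docid, tf) pairs, docids ascending
        gaps, prev = [], 0
        for docid, tf in postings:
            gaps.append((docid - prev, tf))
            prev = docid
        return gaps

    # to_dgaps([(5, 2), (7, 3), (12, 1), (49, 1), (51, 2)])
    # -> [(5, 2), (2, 3), (5, 1), (37, 1), (2, 2)]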
  • Slide 43
  • Golomb and γ codes (length in unary; remainder in binary).
  • Slide 44
  • Golomb Codes (Lin & Dyer, p. 78)
  • Slide 45
  • Index Encoding: d-gaps use Golomb compression; term frequencies are encoded with γ codes.
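
A sketch of the two codes as bit strings, under one common convention (unary(n) = n-1 ones followed by a zero). The Golomb code is shown only for the special case where the parameter b is a power of two (a Rice code), which keeps the remainder a fixed-width binary field:

    def unary(n):
        # n >= 1: n-1 ones followed by a terminating zero
        return '1' * (n - 1) + '0'

    def gamma(n):
        # γ code: length of the binary representation in unary,
        # then the binary digits with the leading 1 dropped
        b = bin(n)[2:]                 # e.g. 9 -> '1001'
        return unary(len(b)) + b[1:]   # 9 -> '1110' + '001'

    def rice(n, k):
        # Golomb code with b = 2**k: quotient in unary, remainder in k binary digits
        q, r = divmod(n - 1, 1 << k)
        return unary(q + 1) + format(r, '0{}b'.format(k))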
  • Slide 46
  • Retrieval: MapReduce is a poor solution to retrieval. Retrieval depends upon random access, exactly the opposite of the serial access model assumed for MapReduce. Two approaches: Term partitioning: each server is responsible for a subset of the terms. Document partitioning: each server is responsible for a subset of the documents.
  • Slide 47
  • Term vs. Document Partitioning
  • Slide 48
  • Term vs. Document Partitioning: Document partitioning requires a query broker. Term partitioning: for a query containing 3 terms q1, q2, q3, the broker forwards the query to the server that holds the postings for q1. That server traverses the appropriate postings list and computes partial query-document scores, stored in the accumulators. The accumulators are passed to the server that holds the postings associated with q2 for additional processing, etc. (Lin & Dyer, p. 81). Google uses document partitioning.
  • Slide 49
  • Hadoop: Hadoop Distributed File System (HDFS). Master-slave relationship: the namenode (master) manages metadata: directory structure, file-to-block mapping, block locations, permissions. Datanodes (slaves) manage the actual data blocks. A client contacts the namenode to get a pointer to a block id and datanode; the client then contacts the datanode. Multiple copies (typically 3) of the data are stored. There is a strong advantage to having a few big files rather than lots of little files: more efficient use of namenode memory; one mapper per file, so lots of little files means lots of mappers; a lot of across-the-network copies during the shuffle/sort phase.
  • Slide 50
  • Hadoop Distributed File System (HDFS)
  • Slide 51
  • Hadoop Architecture
  • Slide 52
  • MapReduce Art
  • Slide 53
  • Reading Assignments: Lin & Dyer, chs. 1-4; White, chs. 1-3.