MapReduce: Hadoop Implementation

Upload: reynold-flynn

Post on 12-Jan-2016


Page 1: MapReduce: Hadoop Implementation. Outline MapReduce overview Applications of MapReduce Hadoop overview

MapReduce: Hadoop Implementation

Page 2

Outline

MapReduce overview
Applications of MapReduce
Hadoop overview

Page 3

Implicit Parallelism in map

In a purely functional setting, the elements of a list being computed by map cannot see the effects of the computations on other elements.

Since the result does not depend on the order in which f is applied to the elements, we can reorder or parallelize the execution.

This is the “secret” that MapReduce exploits
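A minimal Python sketch of this order-independence (the function f and the data are illustrative, not from the slides): because f is pure, evaluating it over the list in any order yields the same result.

```python
# Sketch: with a pure (side-effect-free) f, the result of mapping f
# over a list does not depend on evaluation order, so the per-element
# calls can be reordered or run in parallel.
def f(x):
    return x * x  # pure: depends only on its argument

data = [1, 2, 3, 4]

in_order = [f(x) for x in data]

# Evaluate in reverse order, then put the results back in position:
out_of_order = list(reversed([f(x) for x in reversed(data)]))

assert in_order == out_of_order == [1, 4, 9, 16]
```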

Page 4

Motivation: Large-Scale Data Processing

Want to process lots of data (> 1 TB)
Want to parallelize across hundreds/thousands of CPUs
… Want to make this easy

Page 5

MapReduce

Automatic parallelization & distribution
Fault-tolerant
Provides status and monitoring tools
Clean abstraction for programmers

Page 6

Programming Model

Borrows from functional programming. Users implement an interface of two functions:

map (in_key, in_value) ->
  (out_key, intermediate_value) list

reduce (out_key, intermediate_value list) ->
  out_value list

Page 7

map

Records from the data source (lines out of files, rows of a database, etc.) are fed into the map function as key*value pairs: e.g., (filename, line).

map() produces one or more intermediate values along with an output key from the input.

Page 8

reduce

After the map phase is over, all the intermediate values for a given output key are combined into a list.

reduce() combines those intermediate values into one or more final values for that same output key.

(In practice, usually only one final value per key.)

Page 9

[Diagram: map tasks read input key*value pairs from data stores 1 … n, and each emits (key 1, values…), (key 2, values…), (key 3, values…) pairs. A barrier then aggregates the intermediate values by output key. Parallel reduce tasks consume the (key, intermediate values) pairs and produce the final values for key 1, key 2, key 3, …]

Page 10

Parallelism

map() functions run in parallel, creating different intermediate values from different input data sets

reduce() functions also run in parallel, each working on a different output key

All values are processed independently.

Bottleneck: the reduce phase can’t start until the map phase is completely finished.
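The two phases and the barrier between them can be sketched as a tiny in-process driver (the names `map_reduce`, `parity_map`, etc. are my own for illustration, not Hadoop API):

```python
from collections import defaultdict

def map_reduce(inputs, map_fn, reduce_fn):
    """Sequential model of the two phases: each map_fn call is
    independent (hence parallelizable); the grouping step between
    the phases is the barrier."""
    intermediate = defaultdict(list)
    for key, value in inputs:                      # map phase
        for out_key, out_value in map_fn(key, value):
            intermediate[out_key].append(out_value)
    # == barrier ==: all intermediate values are grouped by key here,
    # so no reduce_fn call can run before every map_fn call is done.
    return {k: reduce_fn(k, vs) for k, vs in intermediate.items()}

# Illustrative job: sum the even and the odd numbers separately.
def parity_map(_, n):
    yield ("even" if n % 2 == 0 else "odd", n)

def parity_reduce(_, values):
    return sum(values)

result = map_reduce([(i, i) for i in range(5)], parity_map, parity_reduce)
assert result == {"even": 6, "odd": 4}  # 0+2+4 and 1+3
```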

Page 11

Example: Count Word Occurrences

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
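The pseudocode above translates directly into runnable Python. This is a sketch; the split-on-whitespace tokenizer and the explicit grouping loop (standing in for the barrier) are my own simplifications.

```python
from collections import defaultdict

def wc_map(input_key, input_value):
    # input_key: document name; input_value: document contents
    for w in input_value.split():
        yield (w, 1)

def wc_reduce(output_key, intermediate_values):
    # output_key: a word; intermediate_values: a list of counts
    return sum(intermediate_values)

docs = [("doc1", "to be or not to be"), ("doc2", "to do")]

# Map phase, then group the intermediate values by word (the barrier):
grouped = defaultdict(list)
for name, text in docs:
    for word, count in wc_map(name, text):
        grouped[word].append(count)

counts = {w: wc_reduce(w, cs) for w, cs in grouped.items()}
assert counts["to"] == 3 and counts["be"] == 2
```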

Page 12

Optimizations

No reduce can start until the map phase is complete: a single slow disk controller can rate-limit the whole process.

The master redundantly executes “slow-moving” map tasks and uses the results of whichever copy finishes first.

Why is it safe to redundantly execute map tasks? Wouldn’t this mess up the total computation?

Page 13

Optimizations

“Combiner” functions can run on the same machine as a mapper.

This causes a mini-reduce phase to occur before the real reduce phase, saving bandwidth.

Under what conditions is it sound to use a combiner?
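One sufficient condition: the combiner is sound when the reduce operation is associative and commutative, so pre-reducing each mapper's local output cannot change the final result. A sketch with word counts (the data and helper `group_and_sum` are illustrative):

```python
mapper_outputs = [
    [("the", 1), ("the", 1), ("cat", 1)],   # mapper 1's (word, 1) pairs
    [("the", 1), ("cat", 1)],               # mapper 2's (word, 1) pairs
]

def group_and_sum(pairs):
    # group by key and apply the reduce operation (here: addition)
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0) + value
    return totals

# Without a combiner: ship every pair to the reducers and sum there.
no_combiner = group_and_sum(p for part in mapper_outputs for p in part)

# With a combiner: sum locally on each mapper, ship only partial sums.
partials = [group_and_sum(part).items() for part in mapper_outputs]
with_combiner = group_and_sum(p for part in partials for p in part)

assert no_combiner == with_combiner == {"the": 3, "cat": 2}
# Counter-example: a "mean" combiner would be unsound, since averaging
# per-mapper averages weights the result by mapper, not by record.
```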

Page 14

Distributed Grep: The map function emits a line if it matches a given pattern. The reduce function is an identity function that just copies the supplied intermediate data to the output.

Count of URL Access Frequency: The map function processes logs of web page requests and outputs <URL, 1>. The reduce function adds together all values for the same URL and emits a <URL, total count> pair.

Inverted Index: The map function parses each document and emits a sequence of <word, document ID> pairs. The reduce function accepts all pairs for a given word, sorts the corresponding document IDs, and emits a <word, list(document ID)> pair. The set of all output pairs forms a simple inverted index. It is easy to augment this computation to keep track of word positions.
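The inverted-index case can be sketched in a few lines of Python (the document names and the whitespace tokenizer are illustrative assumptions):

```python
from collections import defaultdict

def index_map(doc_id, contents):
    # emit one <word, document ID> pair per distinct word
    for word in set(contents.split()):
        yield (word, doc_id)

def index_reduce(word, doc_ids):
    # sort the document IDs to form the posting list
    return sorted(doc_ids)

docs = {"d1": "big data tools", "d2": "big ideas"}

# Map phase, then group document IDs by word (the barrier):
grouped = defaultdict(list)
for doc_id, text in docs.items():
    for word, d in index_map(doc_id, text):
        grouped[word].append(d)

index = {w: index_reduce(w, ds) for w, ds in grouped.items()}
assert index["big"] == ["d1", "d2"]
```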

Page 15

What is MapReduce used for?

At Google:
  Index construction for Google Search
  Article clustering for Google News
  Statistical machine translation

At Yahoo!:
  “Web map” powering Yahoo! Search
  Spam detection for Yahoo! Mail

At Facebook:
  Data mining
  Ad optimization

Page 16

MapReduce Advantages/Disadvantages

Advantages: simple and easy to use; fault-tolerant; flexible; independent of the underlying storage.

Disadvantages: no high-level language; no schema and no indexes; a single fixed dataflow; low efficiency.

Page 17

Hadoop

Apache Hadoop is an open-source distributed computing platform that implements the MapReduce model; it has gained significant traction in the commercial sector in the last few years.

Hadoop consists of two core components: the job management framework, which handles the map and reduce tasks, and the Hadoop Distributed File System (HDFS).

Page 18

Hadoop's job management framework is highly reliable and available, using techniques such as replication and automated restart of failed tasks.

HDFS is a highly scalable, fault-tolerant file system modeled after the Google File System. The Hadoop scheduler uses HDFS's data locality features to schedule the I/O-intensive map computations close to the data.

HDFS relies on local storage on each node, while parallel file systems are typically served from a set of dedicated I/O servers.

Page 19

The JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. Only one JobTracker process runs on any Hadoop cluster.

The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted.

A TaskTracker is a slave-node daemon in the cluster that accepts tasks (Map, Reduce, and Shuffle operations) from a JobTracker. Only one TaskTracker process runs on any Hadoop slave node.

Page 20

Difference between HDFS and NAS

In HDFS, data blocks are distributed across the local drives of all machines in a cluster, whereas in NAS data is stored on dedicated hardware.

HDFS is designed to work with the MapReduce system, since computation is moved to the data. NAS is not suitable for MapReduce, since data is stored separately from the computation.

HDFS runs on a cluster of machines and provides redundancy using a replication protocol, whereas NAS is served by a single machine and therefore does not provide data redundancy.

Page 21

THANK YOU

G. Shiva Theja