
Page 1: New Concept - Associative Arrays

New Concept - Associative Arrays

• Can incorporate combiner functionality inside the mapper
• How to aggregate for word count in the Mapper?
• Could compute a partial count of the words processed by the Mapper
• How to do this?
  – Include an associative array
    • Tallies up counts within a single document
    • Emits a (k, v) pair for each unique word, not just each (word, 1)
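The idea can be sketched in a few lines of Python (illustrative only, outside any MapReduce framework): the associative array is just a Counter, and the mapper emits one pair per unique word rather than one per token.

# Minimal sketch of in-mapper combining for word count: an associative array
# (Counter) tallies counts within a single document, then one (word, count)
# pair is emitted per unique word instead of (word, 1) per occurrence.
from collections import Counter

def map_with_in_mapper_combining(docid, text):
    tallies = Counter(text.lower().split())   # associative array of partial counts
    for word, count in tallies.items():
        yield word, count                     # (k, v) per unique word

doc = "the quick brown fox and the lazy dog and the cat"   # 11 tokens
print(dict(map_with_in_mapper_combining(1, doc)))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 1, 'and': 2, 'lazy': 1, 'dog': 1, 'cat': 1}
# A plain mapper would have emitted 11 (word, 1) pairs; this one emits 8 pairs.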

Page 2: New Concept - Associative Arrays
Page 3: New Concept - Associative Arrays

Information Retrieval with MapReduce

Chapter 4

Some slides from Jimmy Lin

Page 4: New Concept - Associative Arrays

Topics

• Introduction to IR
• Inverted Indexing
• Retrieval

Page 5: New Concept - Associative Arrays

Web search problems

• Web search problems decompose into three components:
  1. Gather web content (crawling)
  2. Construct an inverted index based on the web content gathered (indexing)
  3. Rank documents given a query using the inverted index (retrieval)

Page 6: New Concept - Associative Arrays

Web search problems

• Crawling and indexing share similar characteristics and requirements
  – Both are offline problems; no need for real-time processing
  – A delay of a few minutes before content is searchable is tolerable
  – OK to run smaller-scale index updates frequently
• Querying is an online problem
  – Demands sub-second response time
  – Low latency, high throughput
  – Loads can vary greatly

Page 7: New Concept - Associative Arrays

Web Crawler

• To acquire the document collection over which indexes are built
• Acquiring web content requires crawling
  – Traverse the web by repeatedly following hyperlinks and storing downloaded pages
  – Start by populating a queue with seed pages
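A minimal, illustrative crawler sketch in Python (not the course's code) showing the queue-of-seed-pages idea; politeness, robots.txt, near-duplicate detection, and distributed coordination are left out:

# Breadth-first crawl from a frontier queue seeded with start URLs; a visited
# set avoids downloading the same page twice. Illustrative sketch only.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=100):
    frontier = deque(seeds)          # queue of URLs to visit
    seen = set(seeds)                # URLs already enqueued
    pages = {}                       # url -> downloaded HTML
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue                 # be robust to failures
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return pages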

Page 8: New Concept - Associative Arrays

Web Crawler Issues

• Shouldn't overload web servers with crawling
• Prioritize the order in which unvisited pages are downloaded
• Avoid downloading a page multiple times – requires coordination and load balancing
• Be robust to failures
• Learn update patterns so content stays current
• Identify near-duplicates and select the best version for the index
• Identify the dominant language on a page

Page 9: New Concept - Associative Arrays

How do we represent text?

• Remember: computers don't "understand" anything!
• "Bag of words"
  – Treat all the words in a document as index terms for that document
  – Assign a "weight" to each term based on "importance"
  – Disregard order, structure, meaning, etc. of the words
  – Simple, yet effective!
• Assumptions
  – Term occurrence is independent
  – Document relevance is independent
  – "Words" are well-defined

Page 10: New Concept - Associative Arrays

What's a word? Sample sentences in several languages and scripts (even deciding where one "word" ends and the next begins is non-trivial):

Chinese: 天主教教宗若望保祿二世因感冒再度住進醫院。這是他今年第二度因同樣的病因住院。

Arabic: - باسم الناطق ريجيف مارك وقال قبل - شارون إن اإلسرائيلية الخارجيةبزيارة األولى للمرة وسيقوم الدعوة المقر طويلة لفترة كانت التي تونس،لبنان من خروجها بعد الفلسطينية التحرير لمنظمة الرسمي 1982عام .

Russian: Выступая в Мещанском суде Москвы экс-глава ЮКОСа заявил не совершал ничего противозаконного, в чем обвиняет его генпрокуратура России.

Hindi: भारत सरकार ने आर्थिक सर्वेक्षण में वित्तीय वर्ष 2005-06 में सात फ़ीसदी विकास दर हासिल करने का आकलन किया है और कर सुधार पर ज़ोर दिया है

Japanese: 日米連合で台頭中国に対処…アーミテージ前副長官提言

Korean: 조재영 기자 = 서울시는 25 일 이명박 시장이 ` 행정중심복합도시 '' 건설안에 대해 ` 군대라도 동원해 막고싶은 심정 '' 이라고 말했다는 일부 언론의 보도를 부인했다 .

Page 11: New Concept - Associative Arrays

Sample Document

McDonald's slims down spuds
Fast-food chain to reduce certain types of fat in its french fries with new cooking oil.

NEW YORK (CNN/Money) - McDonald's Corp. is cutting the amount of "bad" fat in its french fries nearly in half, the fast-food chain said Tuesday as it moves to make all its fried menu items healthier.

But does that mean the popular shoestring fries won't taste the same? The company says no. "It's a win-win for our customers because they are getting the same great french-fry taste along with an even healthier nutrition profile," said Mike Roberts, president of McDonald's USA.

But others are not so sure. McDonald's will not specifically discuss the kind of oil it plans to use, but at least one nutrition expert says playing with the formula could mean a different taste.

Shares of Oak Brook, Ill.-based McDonald's (MCD: down $0.54 to $23.22, Research, Estimates) were lower Tuesday afternoon. It was unclear Tuesday whether competitors Burger King and Wendy's International (WEN: down $0.80 to $34.91, Research, Estimates) would follow suit. Neither company could immediately be reached for comment.
…

"Bag of Words" (term counts):
16 × said
14 × McDonalds
12 × fat
11 × fries
8 × new
6 × company, french, nutrition
5 × food, oil, percent, reduce, taste, Tuesday
…

Page 12: New Concept - Associative Arrays

Representing Documents

Document 1: "The quick brown fox jumped over the lazy dog's back."
Document 2: "Now is the time for all good men to come to the aid of their party."

[Term–document incidence table: after removing stopwords (the, is, for, to, of) via a stopword list, each remaining term (quick, brown, fox, over, lazy, dog, back, now, time, all, good, men, come, jump, aid, their, party) is marked 1 or 0 according to whether it appears in Document 1 or Document 2.]

Page 13: New Concept - Associative Arrays

Inverted Index

• Inverted indexing is fundamental to all IR models
• Consists of postings lists, one for each term in the collection
• A posting holds a document id and a payload
  – The payload can be the term frequency (number of times the term occurs in the document), positions of occurrence, properties, etc.
  – Postings can be ordered by document id, PageRank, etc.
  – A data structure is needed to map from a document id to, for example, a URL

Page 14: New Concept - Associative Arrays

Inverted Index

[Figure: the term–document incidence matrix for the terms quick, brown, fox, over, lazy, dog, back, now, time, all, good, men, come, jump, aid, their, party over Documents 1–8, shown alongside the equivalent inverted index, in which each term maps to a postings list of the document ids that contain it.]

Page 15: New Concept - Associative Arrays
Page 16: New Concept - Associative Arrays

Process query - retrieval

• Given a query, fetch the postings lists associated with the query terms and traverse the postings to compute the result set
• Query-document scores must be computed
• The top k documents are extracted
• Optimization strategies reduce the number of postings that must be examined

Page 17: New Concept - Associative Arrays

Index

• Size of the index depends on the payload
• A well-optimized inverted index can be 1/10 the size of the original document collection
• If position information is stored, the index can be several times larger
• Usually the entire vocabulary can be held in memory (using front-coding; see the sketch below)
• Postings lists are usually too large to store in memory
• Query evaluation involves random disk access and decoding of postings
  – Try to minimize random seeks
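Front-coding keeps a sorted vocabulary compact by storing, for each term, only the length of the prefix it shares with the previous term plus the differing suffix. A minimal sketch, with an assumed toy vocabulary:

# Minimal front-coding sketch over a sorted vocabulary: each entry stores the
# shared-prefix length with the previous term plus the remaining suffix.
def front_encode(sorted_terms):
    encoded = []
    prev = ""
    for term in sorted_terms:
        k = 0                         # length of common prefix with previous term
        while k < min(len(prev), len(term)) and prev[k] == term[k]:
            k += 1
        encoded.append((k, term[k:]))
        prev = term
    return encoded

def front_decode(encoded):
    terms, prev = [], ""
    for k, suffix in encoded:
        term = prev[:k] + suffix
        terms.append(term)
        prev = term
    return terms

vocab = ["automata", "automate", "automatic", "automation"]
enc = front_encode(vocab)   # [(0, 'automata'), (7, 'e'), (7, 'ic'), (8, 'on')]
assert front_decode(enc) == vocab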

Page 18: New Concept - Associative Arrays

Indexing: Performance Analysis

• The indexing problem
  – Must be relatively fast, but need not be real time
  – For the Web, incremental updates are important
• How large is the inverted index?
  – Size of vocabulary
  – Size of postings
• Fundamentally, a large sorting problem
  – Terms usually fit in memory
  – Postings usually don't

Page 19: New Concept - Associative Arrays

Vocabulary Size: Heaps’ Law

V = K · n^β

where V is the vocabulary size, n is the corpus size (number of documents), and K and β are constants determined empirically. Typically, K is between 10 and 100 and β is between 0.4 and 0.6.

When adding new documents, the system is likely to have seen most terms already… but the postings keep growing.

Heaps' law means that as more text is gathered, there will be diminishing returns in terms of discovering the full vocabulary from which the distinct terms are drawn.
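A quick illustrative calculation with assumed constants (K = 30, β = 0.5) shows the diminishing returns:

# Illustrative only: the constants below are assumed, not measured.
K, beta = 30, 0.5

for n in (10**6, 10**8, 10**10):
    V = K * n**beta           # Heaps' law: V = K * n^beta
    print(f"corpus size {n:>12,} -> vocabulary ~ {int(V):,} distinct terms")
# With beta = 0.5, growing the corpus 100x only grows the vocabulary ~10x.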

Page 20: New Concept - Associative Arrays

Postings Size: Zipf's Law

• George Kingsley Zipf (1902-1950) observed the following relation between frequency and rank:

  f = c / r   (equivalently, f · r = c)   where f = frequency, r = rank, c = constant

• In other words:
  – A few elements occur very frequently
  – Many elements occur very infrequently
• Zipfian distributions:
  – English words
  – Library book checkout patterns
  – Website popularity (almost anything on the Web)

Page 21: New Concept - Associative Arrays

Word Frequency in English

the    1130021     from      96900     or       54958
of      547311     he        94585     about    53713
to      516635     million   93515     market   52110
a       464736     year      90104     they     51359
in      390819     its       86774     this     50933
and     387703     be        85588     would    50828
that    204351     was       83398     you      49281
for     199340     company   83070     which    48273
is      152483     an        76974     bank     47940
said    148302     has       74405     stock    47401
it      134323     are       74097     trade    47310
on      121173     have      73132     his      47116
by      118863     but       71887     more     46244
as      109135     will      71494     who      42142
at      101779     say       66807     one      41635
mr      101679     new       64456     their    40910
with    101210     share     63925

Frequency of the 50 most common words in English (sample of 19 million words)

Page 22: New Concept - Associative Arrays

Question to consider

• How to create an inverted index using MapReduce?

Page 23: New Concept - Associative Arrays

MapReduce: Index Construction

• Map over all documents – does what?
  – What is the input?
    • Docid and content
  – What is the output?
    • Emit postings: term as key, (docid, frequency) as value
    • Emit other information as necessary (e.g., term position)
• Reduce – does what?
  – What is the input?
    • Term as key, list of (docid, frequency) pairs as value
  – What is the output?
    • Trivial: combines the lists of postings
    • Might want to sort the postings (e.g., by docid or term frequency) for the group-by

Page 24: New Concept - Associative Arrays

class Mapper
  procedure Map(docid n, doc d)
    H ← new AssociativeArray
    for all term t ∈ doc d do
      H{t} ← H{t} + 1
    for all term t ∈ H do
      Emit(term t, posting (n, H{t}))

class Reducer
  procedure Reduce(term t, postings [(n1, f1), (n2, f2), …])
    P ← new List
    for all posting (a, f) ∈ postings [(n1, f1), (n2, f2), …] do
      Append(P, (a, f))
    Sort(P)
    Emit(term t, postings P)

Figure 4.2: Pseudo-code of the baseline inverted indexing algorithm in MapReduce. Mappers emit postings keyed by terms, the execution framework groups postings by term, and the reducers write postings lists to disk.
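The same algorithm can be sketched in plain Python outside any MapReduce framework; the dictionary grouping below stands in for the framework's shuffle-and-sort, and the toy collection is illustrative:

# In-memory sketch of the baseline inverted indexing algorithm: the "mapper"
# builds per-document term counts with an associative array (Counter), and the
# "reducer" collects and sorts the postings for each term.
from collections import Counter, defaultdict

def map_document(docid, text):
    counts = Counter(text.lower().split())      # associative array of term -> tf
    for term, tf in counts.items():
        yield term, (docid, tf)                 # emit posting keyed by term

def reduce_postings(term, postings):
    return term, sorted(postings)               # sort postings by docid

def build_index(collection):
    grouped = defaultdict(list)                 # stands in for group-by-key
    for docid, text in collection.items():
        for term, posting in map_document(docid, text):
            grouped[term].append(posting)
    return dict(reduce_postings(t, p) for t, p in grouped.items())

docs = {1: "the quick brown fox", 2: "the lazy dog", 3: "the quick dog"}
index = build_index(docs)
# e.g. index["quick"] == [(1, 1), (3, 1)] and index["the"] == [(1, 1), (2, 1), (3, 1)]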

Page 25: New Concept - Associative Arrays

Creating an inverted index – Map

• Input is docids and content
• Documents are processed in parallel by mappers
• Each document is analyzed and broken down into its component terms
• May have to strip off HTML tags and JavaScript code, and remove stop words
• Iterate over all terms and keep track of counts
• Emit (term, posting), where the posting is the docid and a payload

Page 26: New Concept - Associative Arrays

Creating Inverted Index - Reduce

• After the shuffle and sort, Reduce is easy
  – Just combines the individual postings by appending them to an initially empty list, then writes the list to disk
  – Usually compresses the list
• How would this be done without MapReduce?

Page 27: New Concept - Associative Arrays

MapReduce

• Map emits term postings
• Reduce performs a large, distributed group-by of postings by term

Page 28: New Concept - Associative Arrays
Page 29: New Concept - Associative Arrays

Retrieval

• We have just shown how to create an index, but what about retrieving info for a query?
• What is involved:
  – Identify query terms
  – Look up the postings lists corresponding to the query terms
  – Traverse the postings lists to compute query-document scores
  – Return the top k results

Page 30: New Concept - Associative Arrays

Retrieval

• Must have sub-second response
• Optimized for low latency
• Is that true of MapReduce?
• Is that true of the underlying file system?
  – Looking up postings
  – Postings too large to fit in memory (random disk seeks)
  – Round-trip network exchanges
  – Must find the location of a data block from the master, etc.

Page 31: New Concept - Associative Arrays

Solution?

• Distributed Retrieval

Page 32: New Concept - Associative Arrays

Distributed Retrieval

• Distribute retrieval across a large number of machines
• Break up the index – how?
  – Document partitioning
  – Term partitioning
• Need a Query Broker and Partition Servers
Page 33: New Concept - Associative Arrays

Partitioning

• Document partitioning
  – The collection is broken into partitions of documents, each assigned to a server
  – Servers hold a complete index for a subset of the entire collection
  – Vertical partitioning
• Term partitioning
  – Each server is responsible for a subset of the terms over the entire collection
  – Horizontal partitioning

Page 34: New Concept - Associative Arrays
Page 35: New Concept - Associative Arrays

Document Partitioning

• Document partitioning
  – Query broker forwards the query to all partition servers
  – Query broker merges the partial results from each
  – Query broker returns the final result to the user
• Pros/Cons?
  – Query must be processed by every server
  – Each partition operates independently and traverses its postings in parallel
    • Shorter query latencies
• Assuming N query terms and K partitions:
  – Number of disk accesses – Big O? O(K · N)
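A sketch of the broker logic for a document-partitioned index, with assumed interfaces and scores (every partition sees the query, and the broker merges the partial top-k lists):

# Illustrative only: each PartitionServer holds the full index for a subset of
# the documents; the broker fans the query out and merges partial results.
import heapq

class PartitionServer:
    def __init__(self, index):
        self.index = index            # term -> list of (docid, score contribution)

    def search(self, query_terms, k):
        scores = {}
        for term in query_terms:
            for docid, w in self.index.get(term, []):
                scores[docid] = scores.get(docid, 0.0) + w
        return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

def broker_search(partitions, query_terms, k=10):
    partial_results = []
    for server in partitions:                     # the query goes to every partition
        partial_results.extend(server.search(query_terms, k))
    return heapq.nlargest(k, partial_results, key=lambda kv: kv[1])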

Page 36: New Concept - Associative Arrays

Term Partitioning

• Term (word) partitioning
  – Suppose the query contains 3 terms: t1, t2, and t3
  – Uses a pipelined strategy – accumulators are routed from server to server:
    • Broker forwards the query to the server that holds the postings for t1
    • That server traverses its postings list and computes partial query scores
    • Partial scores are sent to the server holding t2, then the one holding t3
    • The final result is passed back to the query broker
• Pros/Cons
  – Increases throughput – smaller total number of seeks per query
  – Better for memory-limited systems
  – Load balancing is tricky – may create hotspots on servers
• Number of disk accesses – Big O? O(K)

Page 37: New Concept - Associative Arrays

Which is best?

• Which partitioning does Google adopt?
  – Document partitioning
  – Keeps indexes in memory (not commonly done)
  – Quality degrades with machine failure
  – Servers that are offline won't deliver results
    • The user doesn't know
    • Users are less discriminating as to which relevant documents appear in results
    • Even with no failure, results may cycle through different partitions

Page 38: New Concept - Associative Arrays

Other aspects of Partitioning

• Can partition documents along a dimension, e.g., by quality
• Search the highest quality partition first
• Search lower quality partitions if needed
• Include another dimension, e.g., content
• Only send queries to partitions likely to contain relevant documents, instead of to all partitions

Page 39: New Concept - Associative Arrays

Replication and Partitioning

• Replication?
  – Within the same partition as well as across geographically distributed data centers
  – Query routing problems
    • Serve clients from the closest data center
    • Must also route to the appropriate locations
    • If there is a single data center, must balance load across replicas

Page 40: New Concept - Associative Arrays

Displaying to User

• Postings only store the docid
• Retrieval produces a list of ranked docids
• It is the responsibility of document servers (not partition servers) to generate meaningful output
  – Title, URL, and snippet for each result entry
• Caching is useful if the index is not in memory, and can even cache the results of entire queries
  – Why cache results?
  – Zipfian!!

Page 41: New Concept - Associative Arrays

Summary

1. Documents gathered by web crawling
2. Inverted indexes built from the documents
3. Indexes used to respond to queries

• More info about Google's approach (2009)

Page 42: New Concept - Associative Arrays

• Stopped here

Page 43: New Concept - Associative Arrays

Index Compression

• How are postings compressed and stored on disk?

Page 44: New Concept - Associative Arrays

Boolean Retrieval

• Users express queries as a Boolean expression
  – AND, OR, NOT
  – Can be arbitrarily nested
• Retrieval is based on the notion of sets
  – Any given query divides the collection into two sets: retrieved and not-retrieved
  – Pure Boolean systems do not define an ordering of the results

Page 45: New Concept - Associative Arrays

• Posting list – document id and payload
  – For simple Boolean retrieval, only the document id is needed
    • The payload is not needed

Page 46: New Concept - Associative Arrays

Boolean Retrieval

• To execute a Boolean query:
  – Build the query syntax tree
  – For each clause, look up the postings
  – Traverse the postings and apply the Boolean operator
• Efficiency analysis
  – Postings traversal is linear (assuming sorted postings)
  – Start with the shortest postings list first

Example: (fox OR dog) AND quick
  – Syntax tree: AND( OR(fox, dog), quick )
  – Look up the postings for fox and dog, e.g. {3, 5} and {3, 5, 7}; OR = union = {3, 5, 7}
  – Then intersect that union with quick's postings to apply the AND
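A minimal sketch of this evaluation in Python over sorted postings; the postings for quick are assumed for illustration, and Python sets stand in for the usual sorted-list merge:

# Boolean query evaluation over postings lists (illustrative postings).
postings = {
    "fox":   [3, 5],
    "dog":   [3, 5, 7],
    "quick": [1, 3, 5],     # assumed for illustration
}

def OR(a, b):
    return sorted(set(a) | set(b))      # union of the two result sets

def AND(a, b):
    return sorted(set(a) & set(b))      # intersection of the two result sets

# (fox OR dog) AND quick
result = AND(OR(postings["fox"], postings["dog"]), postings["quick"])
print(result)   # [3, 5]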

Page 47: New Concept - Associative Arrays

Boolean Query - Extensions

• Implementing proximity operators
  – Store word offsets in the postings
• Handling term variations
  – Stem words: love, loving, loves … → lov

Page 48: New Concept - Associative Arrays

Boolean Query - Strengths and Weaknesses

• Strengths
  – Precise, if you know the right strategies
  – Precise, if you have an idea of what you're looking for
  – Implementations are fast and efficient
• Weaknesses
  – Users must learn Boolean logic
  – Boolean logic is insufficient to capture the richness of language
  – No control over the size of the result set: either too many hits or none
  – When do you stop reading? All documents in the result set are considered "equally good"
  – What about partial matches? Documents that "don't quite match" the query may be useful also

Page 49: New Concept - Associative Arrays

Query Execution

• MapReduce is meant for large-data batch processing
  – Not suitable for lots of real-time operations requiring low latency
• The solution: "the secret sauce"
  – Involves document partitioning
  – Lots of systems engineering: e.g., caching, load balancing, etc.

Page 50: New Concept - Associative Arrays

MapReduce: Query Execution

• High-throughput batch query execution:
  – Instead of sequentially accumulating scores per query term:
    • Have mappers traverse postings in parallel, emitting partial score components
    • Reducers serve as the accumulators, summing the contributions for each query term
• MapReduce does:
  – Replaces random access with sequential reads
  – Amortizes cost over lots of queries
  – Examines multiple postings in parallel

Page 51: New Concept - Associative Arrays

Ranked Retrieval

• Order documents by how likely they are to be relevant to the information need
  – Estimate relevance(q, d_i)
  – Sort documents by relevance
  – Display sorted results
• User model
  – Present hits one screen at a time, best results first
  – At any point, users can decide to stop looking
• How do we estimate relevance?
  – Assume a document is relevant if it has a lot of the query terms
  – Replace relevance(q, d_i) with similarity(q, d_i)
  – Compute similarity of vector representations

Page 52: New Concept - Associative Arrays

Vector Space Model

Assumption: documents that are "close together" in vector space "talk about" the same things.

[Figure: documents d1–d5 plotted as vectors in a term space with axes t1, t2, t3; the angles θ and φ between vectors indicate how similar they are.]

Therefore, retrieve documents based on how close the document is to the query (i.e., similarity ≈ "closeness").

Page 53: New Concept - Associative Arrays

Similarity Metric

• How about |d1 – d2| (Euclidean distance)?
• Instead of Euclidean distance, use the "angle" between the vectors
  – It all boils down to the inner product (dot product) of the vectors

sim(d_j, d_k) = cos θ = (d_j · d_k) / (|d_j| |d_k|)
             = Σ_{i=1..n} w_{i,j} · w_{i,k} / ( √(Σ_{i=1..n} w_{i,j}²) · √(Σ_{i=1..n} w_{i,k}²) )
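A small sketch of the cosine computation over sparse term-weight vectors (the example vectors are made up):

# Cosine similarity of two sparse vectors represented as dicts of term -> weight.
import math

def cosine(d1, d2):
    dot = sum(w * d2.get(t, 0.0) for t, w in d1.items())
    norm1 = math.sqrt(sum(w * w for w in d1.values()))
    norm2 = math.sqrt(sum(w * w for w in d2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

doc = {"nuclear": 0.60, "fallout": 0.25, "siberia": 0.60}
query = {"nuclear": 1.0, "retrieval": 1.0}
print(round(cosine(doc, query), 3))   # similarity driven by the shared term "nuclear"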

Page 54: New Concept - Associative Arrays

Term Weighting

• Term weights consist of two components
  – Local: how important is the term in this document?
  – Global: how important is the term in the collection?
• Here's the intuition:
  – Terms that appear often in a document should get high weights
  – Terms that appear in many documents should get low weights (why?)
• How do we capture this mathematically?
  – Term frequency (local)
  – Inverse document frequency (global)

Page 55: New Concept - Associative Arrays

TF.IDF Term Weighting

w_{i,j} = tf_{i,j} · log(N / n_i)

where:
  w_{i,j}  = weight assigned to term i in document j
  tf_{i,j} = number of occurrences of term i in document j
  N        = number of documents in the entire collection
  n_i      = number of documents containing term i

TF – Term Frequency (local)
IDF – Inverse Document Frequency (global)
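A small illustrative computation of these weights over a toy four-document collection (base-10 log, as on the slides; the documents themselves are made up):

# tf.idf over a toy collection: w[i][j] = tf[i][j] * log10(N / n_i).
import math
from collections import Counter

docs = {
    1: "nuclear fallout contaminated siberia",
    2: "information retrieval is interesting",
    3: "information retrieval is complicated",
    4: "nuclear retrieval of information",
}

tf = {j: Counter(text.split()) for j, text in docs.items()}
N = len(docs)
df = Counter(term for counts in tf.values() for term in counts)   # n_i

weights = {
    j: {term: freq * math.log10(N / df[term]) for term, freq in counts.items()}
    for j, counts in tf.items()
}
# "retrieval" appears in 3 of 4 docs -> idf = log10(4/3) ≈ 0.125 (low weight);
# "siberia" appears in 1 doc -> idf = log10(4) ≈ 0.602 (high weight).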

Page 56: New Concept - Associative Arrays

TF.IDF Example

[Table: term frequencies for complicated, contaminated, fallout, information, interesting, nuclear, retrieval, and siberia across four documents; the tf.idf weights use idf = log10(N/n_i) with N = 4, i.e. 0.602 for a term in one document, 0.301 for two, 0.125 for three, and 0.000 for a term in all four. The right-hand side shows the same data rearranged as postings lists of (docid, tf) pairs.]

Page 57: New Concept - Associative Arrays

Sketch: Scoring Algorithm

• Initialize accumulators to hold document scores
• For each query term t in the user's query
  – Fetch t's postings
  – For each document in the postings list, score_doc += w_{t,q} · w_{t,d}
• Apply length normalization to the scores at the end
• Return the top N documents
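A sketch of this accumulator loop in Python, with assumed postings that already store tf.idf weights and assumed document norms for the length-normalization step:

# Term-at-a-time scoring with accumulators (illustrative index and norms).
from collections import defaultdict
import heapq

index = {                                    # term -> list of (docid, w_{t,d})
    "nuclear": [(1, 0.60), (4, 0.30)],
    "retrieval": [(2, 0.12), (3, 0.12), (4, 0.12)],
}
doc_norms = {1: 0.95, 2: 0.40, 3: 0.45, 4: 0.70}   # assumed document lengths

def score(query_weights, k=10):
    accumulators = defaultdict(float)        # docid -> running score
    for term, w_tq in query_weights.items():
        for docid, w_td in index.get(term, []):
            accumulators[docid] += w_tq * w_td
    for docid in accumulators:               # length normalization at the end
        accumulators[docid] /= doc_norms[docid]
    return heapq.nlargest(k, accumulators.items(), key=lambda kv: kv[1])

print(score({"nuclear": 1.0, "retrieval": 1.0}))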

Page 58: New Concept - Associative Arrays